Canada’s Health Minister wants everyone to remember two simple numbers: “five” and “15.” You’ll hear a lot about five and 15 in the coming months. They’ll be in ads, on websites, on posters in the grocery store and on foods themselves. What does five and 15 mean? It has to do with how nutritious food is. Five means “a little” and 15 means “a lot.”

When you look at the list of “Nutrition Facts” printed on food items in the grocery store, you’ll see that each nutrient is given a percentage. It tells you how much of that nutrient is in a product, compared to how much you should have of that nutrient for the entire day. For instance, if it says a product contains 4% Fat, it means it contains four per cent of the fat you should have in a whole day. According to the new five and 15 rules, if a nutrient is five per cent or under, the food contains “a little.” If it’s 15 per cent or more, the food contains “a lot.” So if you want more fibre in your diet, look for foods containing more than 15 per cent of the daily allowance of fibre. On the other hand, if you’re trying to cut down on sodium, look for foods that have less than five per cent of the daily allowance of sodium.

The new guidelines will help consumers better understand how the foods they buy can affect their health. A new ad campaign about “five and 15” will begin in December.

Do you think the new ad campaign will be effective and help people eat better? Why do you think so? Where should these advertisements be shown in order to be as effective as possible?

“What questions do you ask yourself to make sure you understand what you are reading?” “How do you know if you are on the right track?” “When you come to a word or phrase you don’t understand, how do you solve it?” “How do you figure out what information is important to remember?” “What do you do when you get confused during reading?”

Identify, initially with some support and direction, what strategies they found most helpful before, during, and after reading and how they can use these and other strategies to improve as readers (OME, Reading: 4.1). Identify the strategies they found most helpful before, during, and after reading and explain, in conversation with the teacher and/or peers, or in a reader’s notebook, how they can use these and other strategies to improve as readers (OME, Reading: 4.1).

Grammar Feature: Number Words
Number words. When writing, how do you know when to write the word for a number or the digits? The rule that most writers follow is: numbers less than 10 are written as words and numbers 10 or over are written as digits. “What does five and 15 mean?” “… four per cent of the fat you should have in a whole day.”
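The five-and-15 guideline is simple enough to express as a tiny helper. The Python sketch below applies the same thresholds the article describes; the function name and the "somewhere in between" label for values between 5 and 15 per cent are illustrative choices, not from the article:

def classify_daily_value(percent_dv: float) -> str:
    """Classify a nutrient's % Daily Value using the five-and-15 guideline."""
    if percent_dv <= 5:
        return "a little"
    if percent_dv >= 15:
        return "a lot"
    return "somewhere in between"

# Example: a product listing 4% fat contains "a little" fat,
# while one listing 20% fibre contains "a lot" of fibre.
print(classify_daily_value(4))   # a little
print(classify_daily_value(20))  # a lot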
When the moon in its journey around the earth passes directly between the earth and the sun, it casts its shadow on the surface of the earth, and an eclipse of the sun takes place. An eclipse of the sun occurs only when the moon is new, for then the moon is on that side of the earth facing towards the sun. Then why isn't there an eclipse of the sun every time there's a new moon? The reason is that the path of the moon around the earth does not lie directly in line with the orbit of the earth about the sun. In its 29-day trip around the earth, the moon passes sometimes above and sometimes below the path of the earth.

An eclipse of the sun can be total, annular, or partial. If the moon hides the sun completely, the eclipse is total. But the moon is not always the same distance from the earth. Often, it is too far from the earth to hide the sun completely. Then, when an eclipse takes place, the moon is seen as a dark disk which covers the whole sun except a narrow ring around its edge. This thin circle of light is called "the annulus", meaning "ring". This is an annular eclipse. An eclipse is partial whenever only a part of the disk of the moon comes between the sun and the earth.

An eclipse of the moon occurs only when the moon is full, for then it is at the opposite side of the earth from the sun. When the moon comes directly behind the earth, as seen from the sun, it passes gradually into the great shadow-cone cast by the earth and disappears from view. A total eclipse of the moon then occurs. A partial eclipse takes place when the moon enters only partly into the shadow.

In some years, no eclipses of the moon occur. In other years, there are from one to three. Every year, there must be at least two solar eclipses, and there may be as many as five. At any one place on the earth's surface, a total solar eclipse will be visible only once in about 360 years.
Pertaining to being between things, especially between things that are normally closely spaced. The word “interstitial” comes from the Latin “interstitium”, which was derived from “inter” meaning “between” + “sistere” meaning “to stand”: to stand between. The word “interstitial” is much used in medicine and has specific meanings depending on the context. For instance, interstitial cystitis is a specific type of inflammation of the bladder wall. Interstitial radiation involves placing radioactive material directly into a tumor. Interstitial pneumonia is inflammation of the lung which involves the meshwork of lung tissue (alveolar septa) rather than the air spaces (alveoli).
In mineralogy, diamond (/daɪᵊmənd/; from the ancient Greek ἀδάμας – adámas “unbreakable”) is a metastable allotrope of carbon, where the carbon atoms are arranged in a variation of the face-centered cubic crystal structure called a diamond lattice. Diamond is less stable than graphite, but the conversion rate from diamond to graphite is negligible at standard conditions. Diamond is renowned as a material with superlative physical qualities, most of which originate from the strong covalent bonding between its atoms. In particular, diamond has the highest hardness and thermal conductivity of any bulk material. Those properties determine the major industrial application of diamond in cutting and polishing tools and the scientific applications in diamond knives and diamond anvil cells. Because of its extremely rigid lattice, it can be contaminated by very few types of impurities, such as boron and nitrogen. Small amounts of defects or impurities (about one per million of lattice atoms) color diamond blue (boron), yellow (nitrogen), brown (lattice defects), green (radiation exposure), purple, pink, orange or red. Diamond also has relatively high optical dispersion (ability to disperse light of different colors). Most natural diamonds are formed at high temperature and pressure at depths of 140 to 190 kilometers (87 to 118 mi) in the Earth’s mantle. Carbon-containing minerals provide the carbon source, and the growth occurs over periods from 1 billion to 3.3 billion years (25% to 75% of the age of the Earth). Diamonds are brought close to the Earth’s surface through deep volcanic eruptions by a magma, which cools into igneous rocks known as kimberlites and lamproites. Diamonds can also be produced synthetically in an HPHT method which approximately simulates the conditions in the Earth’s mantle. An alternative, and completely different, growth technique is chemical vapor deposition (CVD). Several non-diamond materials, which include cubic zirconia and silicon carbide and are often called diamond simulants, resemble diamond in appearance and many properties. Special gemological techniques have been developed to distinguish natural diamonds, synthetic diamonds, and diamond simulants.
- There are five horse diseases for which the vaccines are considered core, or essential, for all horses in North America: Eastern equine encephalomyelitis (EEE); Western equine encephalomyelitis (WEE); Rabies; West Nile virus; and Tetanus.
- The core vaccines are recommended for all horses due to the widespread and serious nature of those diseases.
- Risk-based vaccinations vary depending on the horse’s age, use, and geographic location.

In October 2018, the New York State Department of Agriculture reported three confirmed cases of Eastern equine encephalitis (EEE). All three were in unvaccinated horses, ranging in age from 3 months to 8 years. Two of the three were euthanized; the third horse was still alive, but showing neurological signs, despite medical care. Texas reported two confirmed cases of rabies in horses in 2017, and in April of 2018, a horse in the Texas Panhandle tested positive for rabies and was euthanized. In print, these are statistics. In real life, they are individual horses with unique personalities and abilities. Some lost their lives, and each one was impacted by a disease that could likely have been prevented with routine vaccination. Unfortunately, not all horse owners take this simple precaution.

Protecting Against Serious Disease

The American Veterinary Medical Association (AVMA) defines core vaccinations as those “that protect from diseases that are endemic to a region, those with potential public health significance, required by law, virulent/highly infectious, and/or those posing a risk of severe disease.” In light of that definition, the American Association of Equine Practitioners (AAEP) notes that the following equine vaccines meet those criteria and are identified as “core” in their vaccination guidelines. These include:
- Eastern equine encephalomyelitis (EEE)
- Western equine encephalomyelitis (WEE)
- West Nile virus
- Rabies
- Tetanus

These are not diseases you want to gamble with. WEE is fatal in 50 percent of cases, while EEE has a 90 percent fatality rate; both are transmitted by mosquitoes. West Nile virus, which is transmitted by mosquitoes that feed on infected birds, is about 33 percent fatal, but 40 percent of horses that survive the acute disease are left with permanent neurologic effects. Rabies isn’t common in horses, but it is 100 percent fatal, and puts at risk the humans who have handled the horse. AAEP recommendations are for all horses—regardless of where they live, their age or use—to be vaccinated against these five diseases every year. “These are endemic diseases with high mortality rates—diseases that every horse should be vaccinated for every spring, no matter how old the horse, what he does, or where he lives,” says Jacquelin Boggs, DVM, ACVIM, an internal medicine specialist and senior technical service veterinarian with Zoetis. “These diseases also tend to be ones that are the more difficult to mitigate [in terms of] risk of exposure to the disease-carrying vectors,” says John H. Tuttle, DVM, director of equine professional services at Boehringer Ingelheim Animal Health. Tuttle refers to the various ways these core diseases are spread, including mosquitoes, bacteria in soil, rabid animals, et cetera. You can be the most careful horse owner in the world, but you can’t guarantee your horse won’t come in contact with one of these vectors that could introduce a potentially life-threatening disease.
Beyond the five core vaccines, there are a variety of risk-based vaccines, meaning your horse only needs them depending on his risk of exposure to those diseases. Examples of risk-based vaccines include:
- Equine herpes virus (also referred to as rhinopneumonitis)
- Equine viral arteritis (EVA)
- Potomac horse fever (PHF)

Your veterinarian can advise you as to which, if any, of these risk-based vaccinations your horse should receive in addition to the five core vaccines. Recommendations vary widely, depending on your horse’s age, living accommodations and travel/show routine. It can be difficult to limit your horse’s exposure to risk-based infectious diseases, which are often related to his use. For example, if your horse frequently travels to shows or events, his exposure to other horses increases his risk of respiratory disease, such as influenza. “Risk-based respiratory vaccinations, such as influenza and equine herpes, should be considered for horses that are regularly exposed to outside horses,” says Tuttle. “Even if a horse ‘never leaves the farm,’ there may be other horses that do (or perhaps neighboring horses that do), and they may act as a carrier of infection for the horse that is considered a ‘homebody.’”

Protecting Your Horse

Your veterinarian knows your horse and your region, and therefore is the best person to evaluate the timing of vaccines and their frequency. Some horse owners mistakenly think that certain diseases aren’t found in their part of the country, so they neglect vaccinating for them. “Every year there are cases of horses contracting one of the core equine diseases, and these most often occur in unvaccinated horses,” says Boggs. Occasionally, owners will neglect to vaccinate senior and retired horses. “A common misconception is that as a horse gets older, his need for vaccination decreases [because] he may have been ‘exposed’ to the disease before, or previous vaccinations should be sufficient,” says Tuttle. However, this isn’t the case. As a horse ages, his immune system experiences age-associated decline, a process known as “immunosenescence.” This may lead to decreased individual immune efficiency and should be taken into consideration when developing a vaccination protocol for senior equines. Bottom line: Older horses should continue to be vaccinated against all five core diseases.

How Vaccines Protect

The antigens contained in a vaccine are considered foreign by the horse’s immune system. In response, his body takes defensive action by producing antibodies to neutralize the antigens, building defenses against that specific disease. This doesn’t happen overnight. Generally speaking, it takes about two weeks after vaccination for the horse’s immune system to develop a sufficient level of protection against exposure to the actual disease. In the case of young horses or horses that have never been vaccinated, these inoculations will require boosters, often four to six weeks later. After the initial vaccination, all horses require annual booster shots to maintain protective levels of immunity. Depending on where the horse lives and his exposure, a second booster may be advised each year. For example, horses living in regions where mosquitoes are prevalent often receive boosters for core vaccinations in both spring and fall. Even if your horse is annually vaccinated for tetanus, you’ll want to give him another booster if he gets a wound and it’s been longer than six months since he had his last tetanus shot.
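As a rough illustration of the timing rules described above (about two weeks for protection to build, annual boosters, and a tetanus booster after a wound if more than six months have passed), here is a minimal Python sketch. The function names and exact day counts are illustrative assumptions, not a published protocol, and no substitute for a veterinarian's advice:

from datetime import date, timedelta

ANNUAL_BOOSTER_INTERVAL = timedelta(days=365)   # yearly booster, per the article
WOUND_TETANUS_INTERVAL = timedelta(days=182)    # roughly six months
PROTECTION_DELAY = timedelta(days=14)           # about two weeks to build immunity

def annual_booster_due(last_vaccination: date, today: date) -> bool:
    """True if a routine annual booster is due."""
    return today - last_vaccination >= ANNUAL_BOOSTER_INTERVAL

def tetanus_booster_after_wound(last_tetanus: date, wound_date: date) -> bool:
    """True if a wounded horse's last tetanus shot was more than ~6 months ago."""
    return wound_date - last_tetanus > WOUND_TETANUS_INTERVAL

def protected_by(vaccination_date: date) -> date:
    """Earliest date the article suggests protection is typically in place."""
    return vaccination_date + PROTECTION_DELAY

# Example: vaccinated last spring, wounded in mid-April the following year
print(annual_booster_due(date(2018, 4, 1), date(2019, 4, 15)))           # True
print(tetanus_booster_after_wound(date(2018, 9, 1), date(2019, 4, 15)))  # True
print(protected_by(date(2019, 4, 16)))                                   # 2019-04-30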
Benefits of Working with Your Vet

Sure, you can buy equine vaccines at your neighborhood farm supply store or online, but should you? A benefit of having your veterinarian out to vaccinate is feeling confident that they’ve properly handled, stored, and administered the vaccines. “Just having your veterinarian’s eyes on your horse once a year is good practice,” says Boggs. “The USDA requires rabies vaccine be given by a veterinarian, so you can’t go buy it at your local feed store.” And in the unlikely, but still possible, scenario that your horse has a negative reaction, your veterinarian is already familiar with the exact vaccine product that was given. If you haven’t scheduled your horse’s spring vaccinations yet, now is the time to call your vet and set up an appointment. In the scheme of things, vaccinations are relatively inexpensive. Besides, you can’t put a monetary value on the peace of mind you get from knowing your horse is protected.

This article originally appeared in the March 2019 issue of Horse Illustrated magazine.
After the largest election in history, India's government faces important decisions on how to respond to climate change, including preparing for its increasing impacts on the lives and livelihoods of millions of Indians. While climate and environmental issues such as deforestation, water stress and floods were largely absent in the election campaign, several Indian states provide the federal government with a compelling case for strong climate policies. This is a critical year for India in many ways. The country's 29 states are revising their five-year State Action Plans on Climate Change (SAPCC), which are intended to integrate climate change concerns into mainstream government planning processes. After drafting a National Action Plan on Climate Change in 2008, the Ministry of Environment, Forests and Climate Change recognized the importance of local insights and decision-making, thus creating the SAPCCs at the sub-national level. India, like other countries, may also choose to update its Nationally Determined Contribution (NDC) to the international Paris Agreement on climate change in 2020. These processes present excellent opportunities to ensure that a range of voices outside government—such as civil society organizations, the scientific community, media, think tanks and academics—are included in climate policy decisions. Climate policies ultimately affect the lives and livelihoods of communities, families and individuals. In order to make sure policies are effective and equitable, these actors must be involved. Addressing local concerns is also important because Indian states have primary jurisdiction over the country's water and agriculture sectors, two of the areas most impacted by climate change. Women farmers planting rice in Tamil Nadu, India. Photo by Michael Foley/Flickr. The following examples from around the world show how countries are creating more inclusive, coordinated and effective approaches to climate action. Indian state and federal policymakers can draw insights from these examples as they revise their climate action plans: 1. Mexico's Climate Law Strengthens Political Discourse, Government Capacity In 2012, Mexico became the first middle-income country to create a comprehensive national climate change law. Its law established a federal agency, the National System on Climate Change (SINACC), to develop policies and coordinate their implementation. NGOs, businesses and academics advise SINACC through a Consultative Council, while the National Institute on Ecology and Climate Change, a state research organization, provides technical support. The SINACC also includes representation from the federal Congress, as well as city and state governments. A recent study, based on interviews of Mexican stakeholders involved in SINACC, found that the law established clearer responsibilities across government, promoted the involvement of subnational and non-state actors, and strengthened political discourse and commitment towards long-term climate action. 2. On-the-Ground Expertise Improves France's Energy Transition From rural and urban development to public health and gender equality, climate change affects a range of social issues. Involving groups that work on these issues can improve climate policies. In 2015, France held a wide-ranging stakeholder consultation process while developing its Energy Transition for Green Growth Law. 
The group responsible for implementing the law, the Council for a National Energy Transition (CNTE), includes 50 members representing labor, business, environmental NGOs, consumer interest NGOs, locally elected authorities and members of parliament. It has guided the development and early implementation of France's Low-Carbon National Strategy by providing technical and policy expertise that the government wouldn't have on its own. 3. Brazilian Coalition of Non-State Actors Influences National Politics The Brazilian Coalition on Climate, Forests and Agriculture consists of more than 190 representatives from social and environmental groups, academic institutions and agri-businesses. The coalition develops action plans to support Brazil's climate commitments, establishes and sustains dialogue with government representatives, and delivers the message of enhanced climate action to relevant audiences. As a credible non-state representative, the coalition has not only established itself as a leader for Brazilian climate action, but also influenced policymakers. In the run up to Brazil's national election in 2018, the Coalition presented 28 proposals to the candidates on sustainable land use, and committed to support the new government in implementing these proposals. 4. Collaborative Climate Adaptation Efforts in India Two Indian states have already demonstrated the benefits of including stakeholders from inside and outside government in the policymaking process. Local Experts Help Madhya Pradesh Adopt Climate-Resilient Cattle In Madhya Pradesh, the Department of Animal Husbandry discovered that rising temperatures adversely affect non-native breeds of cattle, which have become increasingly popular and contribute heavily to the state's milk production. The department brought together veterinarians, data scientists, milk cooperatives and local governments to address the issue. Through these consultations, the department realized it needed to further invest in breed research and encourage farmers to rear indigenous cattle, which are more tolerant to heat. Madhya Pradesh reports that more farmers are now opting to rear native breeds—the number of indigenous cattle in the state increased from 3.9 million in 2012-2013 to 5 million in 2016-2017. A dairy farming family carrying their cows' milk in Punjab, India. Photo by P. Casier/CGIAR. Uttarakhand Forest Communities Benefit from Coordinated Government Efforts In Uttarakhand, the Forest Department has been working with forest communities to improve their resilience to climate change. Coordination among various agencies has been a long-standing challenge, which often led to ineffective and confusing communication with the communities. However, under the Forest Department's leadership, local villages and forest governing bodies (Gram and Van Panchayats), along with state-level departments of agriculture and water, were able to coordinate their disparate efforts on climate-resilient livelihoods. Now forest management, agricultural diversification and water security efforts complement one another so that communities get maximum benefits. Many rural communities in India practice agroforestry. Photo by James Anderson/WRI Both state governments found that a more participatory approach led to better climate resilience policies. 
These examples highlight that in order to successfully implement climate action, governments need to coordinate among various agencies, as well as consult with local experts, affected communities and those responsible for implementing policies on the ground. India isn't the only country grappling with climate governance—a term encompassing not just climate policies themselves but also the institutions, legal processes and people that produce and implement them. As India and other countries choose to update their NDCs over the next year, they should keep in mind that including diverse perspectives from multiple stakeholders, especially sub-national ones, can make their climate action more successful. WRI will release new research on climate governance, addressing the themes in this blog, at the end of 2019.
An antioxidant is a chemical that prevents the oxidation of other chemicals. In biological systems, the normal processes of oxidation (plus a minor contribution from ionizing radiation) produce highly reactive free radicals. These can readily react with and damage other molecules: in some cases the body uses this to fight infection. In other cases, the damage may extend to the body's own cells. The presence of extremely easily oxidizable compounds in the system can "mop up" free radicals before they damage other essential molecules. The following compounds have shown positive antioxidant effects. Each has a unique molecular structure. It is important to note that antioxidants themselves can become reactive after donating an electron to a free radical. An excellent article in Scientific American reviews the good and the bad of antioxidants, in line with research showing that a healthy, balanced diet seems to work but that high doses or individual antioxidants may not. The article reinforces the belief held by many that mega-doses of antioxidants may not be the answer; instead, a proper combination of antioxidants similar to those found in several foods may be the way to go.

Abundant Antioxidants in Food

Polyphenols are the most abundant antioxidants in the diet. There are over 4,000 polyphenol compounds. Their total dietary intake could be as high as 1 g/d, which is much higher than that of all other classes of phytochemicals and known dietary antioxidants. This is about 10 times higher than the intake of vitamin C and 100 times higher than the intakes of vitamin E and carotenoids. The main dietary sources of polyphenols are fruits and plant-derived beverages such as fruit juices, tea, coffee, and red wine. Vegetables, cereals, chocolate, and dry legumes also contribute to the total polyphenol intake. Examples of polyphenols are resveratrol, ellagitannin, and theaflavin-3-gallate. A very large list of polyphenols and their food sources can be found at Phenol Explorer, a database of polyphenol content in food.

View the molecular structure and read about each of the following antioxidants: Vitamin C | Vitamin E

SEE ALSO: THE MOLECULAR BASIS OF TASTE
The history of the Netherlands is the history of seafaring people thriving on a lowland river delta on the North Sea in northwestern Europe. Records begin with the four centuries during which the region formed a militarized border zone of the Roman empire. This came under increasing pressure from Germanic peoples moving westwards. As Roman power collapsed and the Middle Ages began, three dominant Germanic peoples coalesced in the area, Frisians in the north and coastal areas, Low Saxons in the northeast, and the Franks in the south. During the Middle Ages, the descendants of the Carolingian dynasty came to dominate the area and then extended their rule to a large part of Western Europe. The region of the Netherlands therefore became part of Lower Lotharingia within the Frankish Holy Roman Empire. For several centuries, lordships such as Brabant, Holland, Zeeland, Friesland, Guelders and others held a changing patchwork of territories. There was no unified equivalent of the modern Netherlands. By 1433, the Duke of Burgundy had assumed control over most of the lowlands territories in Lower Lotharingia; he created the Burgundian Netherlands which included modern Belgium, Luxembourg, and a part of France. The Catholic kings of Spain took strong measures against Protestantism, which polarized the peoples of present-day Belgium and Holland. The subsequent Dutch revolt led to splitting the Burgundian Netherlands into a Catholic French and Dutch-speaking “Spanish Netherlands” (approximately corresponding to modern Belgium and Luxembourg), and a northern “United Provinces”, which spoke Dutch and were predominantly Protestant with a Catholic minority. It became the modern Netherlands. In the Dutch Golden Age, which had its zenith around 1667, there was a flowering of trade, industry, the arts and the sciences. A rich worldwide Dutch empire developed and the Dutch East India Company became one of the earliest and most important of national mercantile companies based on entrepreneurship and trade. During the 18th century the power and wealth of the Netherlands declined. A series of wars with the more powerful British and French neighbors weakened it. Britain seized the North American colony of New Amsterdam, turning it into New York. There was growing unrest and conflict between the Orangists and the Patriots. The French Revolution spilled over after 1789, and a pro-French Batavian Republic was established in 1795–1806. Napoleon made it a satellite state, the Kingdom of Holland (1806–1810), and later simply a French imperial province. After the collapse of Napoleon in 1813–15, an expanded “United Kingdom of the Netherlands” was created with the House of Orange as monarchs, also ruling Belgium and Luxembourg. The King imposed unpopular Protestant reforms on Belgium, which revolted in 1830 and became independent in 1839. After an initially conservative period, in the 1848 constitution the country became a parliamentary democracy with a constitutional monarch. Modern Luxembourg became officially independent from the Netherlands in 1839, but a personal union remained until 1890. Since 1890 it is ruled by another branch of the House of Nassau. The Netherlands was neutral during the First World War, but during the Second World War, it was invaded and occupied by Nazi Germany. The Nazis, including many collaborators, rounded up and killed almost all the Jews (most famously Anne Frank). 
When the Dutch resistance increased, the Nazis cut off food supplies to much of the country, causing severe starvation in 1944–45. In 1942, the Dutch East Indies was conquered by Japan, but first the Dutch destroyed the oil wells that Japan needed so badly. Indonesia proclaimed its independence in 1945. Suriname gained independence in 1975. The postwar years saw rapid economic recovery (helped by the American Marshall Plan), followed by the introduction of a welfare state during an era of peace and prosperity. The Netherlands formed a new economic alliance with Belgium and Luxembourg, the Benelux, and all three became founding members of the European Union and NATO. In recent decades, the Dutch economy has been closely linked to that of Germany, and is highly prosperous.
Quantity theory of money

In monetary economics, the quantity theory of money (QTM) states that the general price level of goods and services is directly proportional to the amount of money in circulation, or money supply. The theory was originally formulated by Polish mathematician Nicolaus Copernicus in 1517, and was influentially restated by philosophers John Locke and David Hume, and by economists Milton Friedman and Anna Schwartz in A Monetary History of the United States, published in 1963. The theory was challenged by Keynesian economics, but updated and reinvigorated by the monetarist school of economics. Critics of the theory argue that money velocity is not stable and, in the short run, prices are sticky, so the direct relationship between money supply and price level does not hold. In mainstream macroeconomic theory, changes in the money supply play no role in determining the inflation rate. In such models, inflation is determined by the monetary policy reaction function.

Origins and development

The quantity theory descends from Nicolaus Copernicus, followers of the School of Salamanca like Martín de Azpilicueta, Jean Bodin, Henry Thornton, and various others who noted the increase in prices following the import of gold and silver, used in the coinage of money, from the New World. The "equation of exchange" relating the supply of money to the value of money transactions was stated by John Stuart Mill, who expanded on the ideas of David Hume. The quantity theory was developed by Simon Newcomb, Alfred de Foville, Irving Fisher, and Ludwig von Mises in the late 19th and early 20th centuries.

Henry Thornton introduced the idea of a central bank after the financial panic of 1793, although the concept of a modern central bank was not given much importance until Keynes published "A Tract on Monetary Reform" in 1923. In 1802, Thornton published An Enquiry into the Nature and Effects of the Paper Credit of Great Britain, in which he gave an account of his theory regarding the central bank's ability to control the price level. According to his theory, the central bank could control the currency in circulation through bookkeeping. This control could allow the central bank to gain a command of the money supply of the country. This ultimately would lead to the central bank's ability to control the price level. His introduction of the central bank's ability to influence the price level was a major contribution to the development of the quantity theory of money.

Karl Marx modified it by arguing that the labor theory of value requires that prices, under equilibrium conditions, are determined by the socially necessary labor time needed to produce the commodity, and that the quantity of money was a function of the quantity of commodities, the prices of commodities, and the velocity. Marx did not reject the basic concept of the Quantity Theory of Money, but rejected the notion that each of the four elements was equal, and instead argued that the quantity of commodities and the price of commodities are the determinative elements and that the volume of money follows from them. He argued:
The law, that the quantity of the circulating medium is determined by the sum of the prices of the commodities circulating, and the average velocity of currency may also be stated as follows: given the sum of the values of commodities, and the average rapidity of their metamorphoses, the quantity of precious metal current as money depends on the value of that precious metal. The erroneous opinion that it is, on the contrary, prices that are determined by the quantity of the circulating medium, and that the latter depends on the quantity of the precious metals in a country; this opinion was based by those who first held it, on the absurd hypothesis that commodities are without a price, and money without a value, when they first enter into circulation, and that, once in the circulation, an aliquot part of the medley of commodities is exchanged for an aliquot part of the heap of precious metals.

John Maynard Keynes, like Marx, accepted the theory in general and wrote:

This Theory is fundamental. Its correspondence with fact is not open to question.

Also like Marx, he believed that the theory was misrepresented. Where Marx argues that the amount of money in circulation is determined by the quantity of goods times the prices of goods, Keynes argued the amount of money was determined by the purchasing power or aggregate demand. He wrote:

Thus the number of notes which the public ordinarily have on hand is determined by the purchasing power which it suits them to hold or to carry about, and by nothing else.

In the Tract on Monetary Reform (1923), Keynes developed his own quantity equation: n = p(k + rk'), where n is the number of "currency notes or other forms of cash in circulation with the public", p is "the index number of the cost of living", and r is "the proportion of the bank's potential liabilities (k') held in the form of cash." Keynes also assumes "...the public, including the business world, finds it convenient to keep the equivalent of k consumption in cash and of a further available k' at their banks against cheques..." So long as k, k', and r do not change, changes in n cause proportional changes in p.

Keynes however notes:

The error often made by careless adherents of the Quantity Theory, which may partly explain why it is not universally accepted, is as follows. The Theory has often been expounded on the further assumption that a mere change in the quantity of the currency cannot affect k, r, and k', – that is to say, in mathematical parlance, that n is an independent variable in relation to these quantities. It would follow from this that an arbitrary doubling of n, since this in itself is assumed not to affect k, r, and k', must have the effect of raising p to double what it would have been otherwise. The Quantity Theory is often stated in this, or a similar, form. Now "in the long run" this is probably true. If, after the American Civil War, the American dollar had been stabilized and defined by law at 10 per cent below its present value, it would be safe to assume that n and p would now be just 10 per cent greater than they actually are and that the present values of k, r, and k' would be entirely unaffected. But this long run is a misleading guide to current affairs. In the long run we are all dead. Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is long past the ocean will be flat again. In actual experience, a change in n is liable to have a reaction both on k and k' and on r.
It will be enough to give a few typical instances. Before the war (and indeed since) there was a considerable element of what was conventional and arbitrary in the reserve policy of the banks, but especially in the policy of the State Banks towards their gold reserves. These reserves were kept for show rather than for use, and their amount was not the result of close reasoning. There was a decided tendency on the part of these banks between 1900 and 1914 to bottle up gold when it flowed towards them and to part with it reluctantly when the tide was flowing the other way. Consequently, when gold became relatively abundant they tended to hoard what came their way and to raise the proportion of the reserves, with the result that the increased output of South African gold was absorbed with less effect on the price level than would have been the case if an increase of n had been totally without reaction on the value of r. ...Thus in these and other ways the terms of our equation tend in their movements to favor the stability of p, and there is a certain friction which prevents a moderate change in v from exercising its full proportionate effect on p. On the other hand, a large change in n, which rubs away the initial frictions, and especially a change in n due to causes which set up a general expectation of a further change in the same direction, may produce a more than proportionate effect on p.

Keynes thus accepts the Quantity Theory as accurate over the long term but not over the short term. Keynes remarks that, contrary to contemporaneous thinking, velocity and output were not stable but highly variable and, as such, the quantity of money was of little importance in driving prices.

The theory was influentially restated by Milton Friedman in response to the work of John Maynard Keynes and Keynesianism. Friedman understood that Keynes was, like Friedman, a "quantity theorist" and that the Keynes Revolution "was from, as it were, within the governing body", i.e. consistent with previous Quantity Theory. Friedman notes the similarities between his views and those of Keynes when he wrote:

A counter-revolution, whether in politics or in science, never restores the initial situation. It always produces a situation that has some similarity to the initial one but is also strongly influenced by the intervening revolution. That is certainly true of monetarism which has benefited much from Keynes's work. Indeed I may say, as have so many others since there is no way of contradicting it, that if Keynes were alive today he would no doubt be at the forefront of the counter-revolution.

Friedman notes that Keynes shifted the focus away from the quantity of money (Fisher's M and Keynes' n) and put the focus on price and output. Friedman writes:

What matters, said Keynes, is not the quantity of money. What matters is the part of total spending which is independent of current income, what has come to be called autonomous spending and to be identified in practice largely with investment by business and expenditures by government.

The Monetarist counter-position was that, contrary to Keynes, velocity was not a passive function of the quantity of money but could be an independent variable. Friedman wrote:

Perhaps the simplest way for me to suggest why this was relevant is to recall that an essential element of the Keynesian doctrine was the passivity of velocity. If money rose, velocity would decline.
Empirically, however, it turns out that the movements of velocity tend to reinforce those of money instead of to offset them. When the quantity of money declined by a third from 1929 to 1933 in the United States, velocity declined also. When the quantity of money rises rapidly in almost any country, velocity also rises rapidly. Far from velocity offsetting the movements of the quantity of money, it reinforces them.

Thus while Marx, Keynes, and Friedman all accepted the Quantity Theory, they each placed different emphasis as to which variable was the driver in changing prices. Marx emphasized production, Keynes income and demand, and Friedman the quantity of money. Academic discussion remains over the degree to which different figures developed the theory. For instance, Bieda points to Copernicus's observation:

Money can lose its value through excessive abundance, if so much silver is coined as to heighten people's demand for silver bullion. For in this way, the coinage's estimation vanishes when it cannot buy as much silver as the money itself contains […]. The solution is to mint no more coinage until it recovers its par value.

The quantity theory of money preserved its importance even in the decades after Friedmanian monetarism had emerged. In new classical macroeconomics the quantity theory of money was still a doctrine of fundamental importance, but Robert E. Lucas and other leading new classical economists made serious efforts to specify and refine its theoretical meaning. For new classical economists, following David Hume's famous essay "Of Money", money was not neutral in the short run, so the quantity theory was assumed to hold only in the long run. These theoretical considerations involved serious changes as to the scope of countercyclical economic policy. Historically, the main rival of the quantity theory was the real bills doctrine, which says that the issue of money does not raise prices, as long as the new money is issued in exchange for assets of sufficient value.

Fisher's equation of exchange

In its modern form, the quantity theory builds upon the following definitional relationship:

M · V_T = Σ_i (p_i · q_i) = p^T q

where
- M is the total amount of money in circulation on average in an economy during the period, say a year.
- V_T is the transactions velocity of money, that is the average frequency across all transactions with which a unit of money is spent. This reflects availability of financial institutions, economic variables, and choices made as to how fast people turn over their money.
- p_i and q_i are the price and quantity of the i-th transaction.
- p is a column vector of the p_i, and the superscript T is the transpose operator.
- q is a column vector of the q_i.

Mainstream economics accepts a simplification, the equation of exchange:

M · V_T = P_T · T

where
- P_T is the price level associated with transactions for the economy during the period.
- T is an index of the real value of aggregate transactions.

The previous equation presents the difficulty that the associated data are not available for all transactions. With the development of national income and product accounts, emphasis shifted to national-income or final-product transactions, rather than gross transactions. Economists may therefore work with the form

M · V = P · Q

where
- V is the velocity of money in final expenditures.
- Q is an index of the real value of final expenditures.
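A small numerical check may help make the identity concrete. The Python sketch below uses made-up figures; only the relationship M · V = P · Q itself comes from the text above:

M = 1_000.0   # money supply (currency units); illustrative value
V = 5.0       # velocity: each unit of money is spent 5 times per period
Q = 2_500.0   # index of real output / final expenditures
P = (M * V) / Q  # solve the equation of exchange for the price level

print(f"Price level P = {P:.2f}")           # 2.00
print(f"Nominal output P*Q = {P * Q:.0f}")  # 5000, equal to M*V: the identity holds

# The identity alone imposes no causal structure: a 10% rise in M offset by a
# fall of V to V / 1.10 leaves P*Q unchanged, as the next paragraph notes.
M2, V2 = M * 1.10, V / 1.10
assert abs(M2 * V2 - M * V) < 1e-9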
As an example, M might represent currency plus deposits in checking and savings accounts held by the public, Q real output (which equals real expenditure in macroeconomic equilibrium), P the corresponding price level, and P·Q the nominal (money) value of output. In one empirical formulation, velocity was taken to be "the ratio of net national product in current prices to the money stock". Thus far, the theory is not particularly controversial, as the equation of exchange is an identity. A theory requires that assumptions be made about the causal relationships among the four variables in this one equation. There are debates about the extent to which each of these variables is dependent upon the others. Without further restrictions, the equation does not require that a change in the money supply would change the value of any or all of V, P, or Q. For example, a 10% increase in M could be accompanied by a change of 1/(1 + 10%) in V, leaving P·Q unchanged. The quantity theory postulates that the primary causal effect is an effect of M on P. In 2008 Andrew Naganoff proposed an integral form of the equation of exchange, in which the left side of the equation appears under an integral sign and the right side is a sum from i = 1 to N, where N could in general be infinite.

Economists Alfred Marshall, A.C. Pigou, and John Maynard Keynes (before he developed his own, eponymous school of thought), associated with Cambridge University, took a slightly different approach to the quantity theory, focusing on money demand instead of money supply. They argued that a certain portion of the money supply will not be used for transactions; instead, it will be held for the convenience and security of having cash on hand. This portion of cash is commonly represented as k, a portion of nominal income (P·Y). The Cambridge economists also thought wealth would play a role, but wealth is often omitted for simplicity. The Cambridge equation is thus:

M_d = k · P · Y

Assuming that the economy is at equilibrium (M_d = M), Y is exogenous, and k is fixed in the short run, the Cambridge equation is equivalent to the equation of exchange with velocity equal to the inverse of k:

M · (1/k) = P · Y

The Cambridge version of the quantity theory led to both Keynes's attack on the quantity theory and the Monetarist revival of the theory.

As restated by Milton Friedman, the quantity theory emphasizes the following relationship of the nominal value of expenditures P·Q and the price level P to the quantity of money M:

(1) P·Q = f(M⁺)
(2) P = g(M⁺)

The plus signs indicate that a change in the money supply is hypothesized to change nominal expenditures and the price level in the same direction (for other variables held constant). Friedman described the empirical regularity of substantial changes in the quantity of money and in the level of prices as perhaps the most-evidenced economic phenomenon on record. Empirical studies have found relations consistent with the models above and with causation running from money to prices. The short-run relation of a change in the money supply in the past has been relatively more associated with a change in real output than the price level in (1), but with much variation in the precision, timing, and size of the relation. For the long run, there has been stronger support for (1) and (2) and no systematic association of Q and M.
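The long-run reading of relations (1) and (2), together with the first two hypotheses listed below, can be illustrated with a back-of-the-envelope growth-rate calculation. The growth rates in this Python sketch are invented for illustration:

# With velocity stable, the growth form of M · V = P · Q implies that inflation is
# approximately money growth minus real output growth. All figures are illustrative.
money_growth = 0.08     # 8% per year
output_growth = 0.03    # 3% per year
velocity_growth = 0.0   # assumed stable, per the hypotheses below

inflation = (1 + money_growth) * (1 + velocity_growth) / (1 + output_growth) - 1
print(f"Implied long-run inflation: {inflation:.2%}")   # about 4.85%

# The commonly quoted log-linear approximation:
approx = money_growth + velocity_growth - output_growth
print(f"Approximation: {approx:.2%}")                   # 5.00%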
The theory above is based on the following hypotheses:
- The source of inflation is fundamentally derived from the growth rate of the money supply.
- The supply of money is exogenous.
- The demand for money, as reflected in its velocity, is a stable function of nominal income, interest rates, and so forth.
- The mechanism for injecting money into the economy is not that important in the long run.
- The real interest rate is determined by non-monetary factors (productivity of capital, time preference).

Decline of money-supply targeting

An application of the quantity-theory approach aimed at removing monetary policy as a source of macroeconomic instability was to target a constant, low growth rate of the money supply. Still, practical identification of the relevant money supply, including measurement, was always somewhat controversial and difficult. As financial intermediation grew in complexity and sophistication in the 1980s and 1990s, it became more so. To mitigate this problem, some central banks, including the U.S. Federal Reserve, which had targeted the money supply, reverted to targeting interest rates. Starting in 1990 with New Zealand, more and more central banks started to communicate inflation targets as the primary guidance for the public. Reasons were that interest-rate targeting turned out to be a less effective tool in low-interest phases and that it did not cope with the public's uncertainty about future inflation rates. The communication of inflation targets helps to anchor public inflation expectations, it makes central banks more accountable for their actions, and it reduces economic uncertainty among the participants in the economy. But monetary aggregates remain a leading economic indicator, with "some evidence that the linkages between money and economic activity are robust even at relatively short-run frequencies."

John Maynard Keynes criticized the quantity theory of money in The General Theory of Employment, Interest and Money. Keynes had originally been a proponent of the theory, but he presented an alternative in the General Theory. Keynes argued that the price level was not strictly determined by the money supply. Changes in the money supply could have effects on real variables like output. Ludwig von Mises agreed that there was a core of truth in the quantity theory, but criticized its focus on the supply of money without adequately explaining the demand for money. He said the theory "fails to explain the mechanism of variations in the value of money".

- Volckart, Oliver (1997). "Early beginnings of the quantity theory of money and their context in Polish and Prussian monetary policies, c. 1520–1550". The Economic History Review. Wiley-Blackwell. 50 (3): 430–49. doi:10.1111/1468-0289.00063. ISSN 0013-0117. JSTOR 2599810.
- "Quantity theory of money". Encyclopædia Britannica. Encyclopædia Britannica, Inc.
- Minsky, Hyman P. John Maynard Keynes, McGraw-Hill, 2008, p. 2.
- Nicolaus Copernicus (1517), memorandum on monetary policy.
- "Martín de Azpilicueta" http://www.escolasticos.ufm.edu/index.php/Mart%C3%ADn_de_Azpilcueta
- Jean Bodin, Responses aux paradoxes du sieur de Malestroict (1568).
- John Stuart Mill (1848), Principles of Political Economy.
- David Hume (1748), "Of Interest" in Essays Moral and Political.
- Simon Newcomb (1885), Principles of Political Economy.
- Alfred de Foville (1907), La Monnaie.
- Irving Fisher (1911), The Purchasing Power of Money.
- von Mises, Ludwig Heinrich; Theorie des Geldes und der Umlaufsmittel [The Theory of Money and Credit].
- Hetzel, Robert L. "Henry Thornton: Seminal Monetary Theorist and Father of the Modern Central Bank." July–Aug. 1987.
- Capital Vol I, Chapter 3, B. The Currency of Money, as well as A Contribution to the Critique of Political Economy, Chapter II, 3 "Money".
- Tract on Monetary Reform, London, United Kingdom: Macmillan, 1924. Archived August 8, 2013, at the Wayback Machine.
- "Keynes' Theory of Money and His Attack on the Classical Model", L. E. Johnson, R. Ley, & T. Cate (International Advances in Economic Research, November 2001). Archived from the original (PDF) on July 17, 2013. Retrieved June 17, 2013.
- "The Counter-Revolution in Monetary Theory", Milton Friedman (IEA Occasional Paper, no. 33, Institute of Economic Affairs. First published by the Institute of Economic Affairs, London, 1970). Archived from the original (PDF) on 2014-03-22. Retrieved 2013-06-17.
- Milton Friedman (1956), "The Quantity Theory of Money: A Restatement" in Studies in the Quantity Theory of Money, edited by M. Friedman. Reprinted in M. Friedman, The Optimum Quantity of Money (2005), pp. 51–67.
- Volckart, Oliver (1997), "Early beginnings of the quantity theory of money and their context in Polish and Prussian monetary policies, c. 1520–1550", The Economic History Review, 50 (3): 430–49, doi:10.1111/1468-0289.00063
- Bieda, K. (1973), "Copernicus as an economist", Economic Record, 49: 89–103, doi:10.1111/j.1475-4932.1973.tb02270.x
- Wennerlind, Carl (2005), "David Hume's monetary theory revisited", Journal of Political Economy, 113 (1): 233–37, doi:10.1086/426037
- Galbács, Peter (2015). The Theory of New Classical Macroeconomics. A Positive Critique. Contributions to Economics. Heidelberg/New York/Dordrecht/London: Springer. doi:10.1007/978-3-319-17578-2. ISBN 978-3-319-17578-2.
- Roy Green (1987), "real bills doctrine", in The New Palgrave: A Dictionary of Economics, v. 4, pp. 101–02.
- Milton Friedman & Anna J. Schwartz (1965), The Great Contraction 1929–1933, Princeton: Princeton University Press, ISBN 978-0-691-00350-4.
- Froyen, Richard T. Macroeconomics: Theories and Policies. 3rd Edition. Macmillan Publishing Company: New York, 1990. pp. 70–71.
- Milton Friedman (1987), "quantity theory of money", The New Palgrave: A Dictionary of Economics, v. 4, p. 15.
- Summarized in Friedman (1987), "quantity theory of money", pp. 15–17.
- Friedman (1987), "quantity theory of money", p. 19.
- Jahan, Sarwat. "Inflation Targeting: Holding the Line". International Monetary Fund, Finance & Development. Retrieved 28 December 2014.
- NA (2005), "How Does the Fed Determine Interest Rates to Control the Money Supply?", Federal Reserve Bank of San Francisco, February. Archived from the original on December 8, 2008. Retrieved November 1, 2007.
- R.W. Hafer and David C. Wheelock (2001), "The Rise and Fall of a Policy Rule: Monetarism at the St. Louis Fed, 1968-1986", Federal Reserve Bank of St. Louis, Review, January/February, p. 19.
- Wicksell, Knut (1898). Interest and Prices (PDF).
- Ludwig von Mises (1912), "The Theory of Money and Credit (Chapter 8, Sec 6)".
- Fisher, Irving, The Purchasing Power of Money, 1911 (PDF, Duke University).
"quantity theory of money", The New Palgrave: A Dictionary of Economics, v. 4, pp. 3–20. Abstract. Arrow-page searchable preview at John Eatwell et al.(1989), Money: The New Palgrave, pp. 1–40. - Hume, David (1809). Essays and treatises on several subjects in two volumes: Essays, moral, political, and literacy. Volume 1. printed by James Clarke for T. Cadell. - Humphrey, Thomas M..(1974). The Quantity Theory of Money: Its Historical Evolution and Role in Policy Debates. FRB Richmond Economic Review, Vol. 60, May/June 1974, pp. 2–19. Available at [SSRN: http://ssrn.com/abstract=2117542] - Laidler, David E.W. (1991). The Golden Age of the Quantity Theory: The Development of Neoclassical Monetary Economics, 1870–1914. Princeton UP. Description and review. - Mill, John Stuart (1848). Principles of Political Economy with Some of Their Applications to Social Philosophy. Volume 1. C.C. Little & J. Brown. - Mill, John Stuart (1848). Principles of Political Economy: With Some of Their Applications to Social Philosophy. Volume 2. C.C. Little & J. Brown. - Mises, Ludwig Heinrich Edler von; Human Action: A Treatise on Economics (1949), Ch. XVII "Indirect Exchange", §4. "The Determination of the Purchasing Power of Money". - Newcomb, Simon (1885). Principles of Political Economy. Harper & Brothers. - The Quantity Theory of Money from John Stuart Mill through Irving Fisher from the New School - "Quantity theory of money" at Formularium.org – calculate M, V, P and Q with your own values to understand the equation - How to Cure Inflation (from a Quantity Theory of Money perspective) from Aplia Econ Blog
A marine layer is an air mass which develops over the surface of a large body of water such as the ocean or large lake in the presence of a temperature inversion. The inversion itself is usually initiated by the cooling effect of the water on the surface layer of an otherwise warm air mass. As it cools, the surface air becomes denser than the warmer air above it, and thus becomes trapped below it. The layer may thicken through turbulence generated within the developing marine layer itself. It may also thicken if the warmer air above it is lifted by an approaching area of low pressure. The layer will also gradually increase its humidity by evaporation of the ocean or lake surface, as well as by the effect of cooling itself. Fog will form within a marine layer where the humidity is high enough and cooling sufficient to produce condensation. Stratus and stratocumulus will also form at the top of a marine layer in the presence of the same conditions there. In the case of coastal California, the offshore marine layer is typically propelled inland by a pressure gradient which develops as a result of intense heating inland, blanketing coastal communities in cooler air which, if saturated, also contains fog. The fog lingers until the heat of the sun becomes strong enough to evaporate it, often lasting into the afternoon during the "June gloom" period. An approaching frontal system or trough can also drive the marine layer onshore. A marine layer will disperse and break up in the presence of instability, such as may be caused by the passage of a frontal system or trough, or any upper air turbulence that reaches the surface. A marine layer can also be driven away by sufficiently strong winds. It is not unusual to hear media weather reporters discuss the marine layer as if it were synonymous with the fog or stratus it may contain, but this is erroneous. In fact, a marine layer can exist with virtually no cloudiness of any kind, although it usually does contain some. The marine layer is a medium within which clouds may form under the right conditions; it is not the layers of clouds themselves.
Yandusaurus roamed across modern-day China and lived 169–163 million years ago, in the middle of the Jurassic Period. The Jurassic was the second of the three segments that made up the Mesozoic Era, also known as the age of the dinosaurs. Reptiles were the dominant animals of the age, and the period included some of the largest dinosaurs ever to have existed. Yandusaurus was 4 meters (13 ft) long, weighed about 140 kg, and was a herbivore. When Yandusaurus was discovered, the fossil was almost destroyed by a mechanical composter; by the time workers realised what had happened, its remains had already been heavily damaged, and some parts had been completely destroyed by the machine. It was quite large for an ornithopod (a group of dinosaurs often likened to cattle in their grazing behaviour) and would have relied on its speed and agility to evade predators. It may have had feathers like its Russian cousin, Kulindadromeus; however, no skin impressions of this dinosaur have been recovered yet, so we still don't know whether Yandusaurus was feathered or not. Its name means 'Zigong lizard', after the city of Zigong in China.
Here are some common terms and organizations families and patients might encounter during their journey. Adverse Event: An undesired change in the health of a participant that happens during a clinical study or within a certain time period after the study has ended. This change may or may not be caused by the intervention being studied. Arthritis: Inflammation of one or more joints. The two most common types of arthritis are osteoarthritis (wear and tear related) and rheumatoid arthritis (see rheumatoid arthritis). Arthritis Foundation: Founded in 1948, the Arthritis Foundation is focused on finding a cure and championing the fight against arthritis with life-changing information, advocacy, science and community. They are boldly leading the fight against juvenile arthritis and childhood rheumatic diseases. The Arthritis Foundation is strongly committed to making sure you have easy access to the resources, community and care you need – and making each day another stride towards a cure. www.arthritis.org Autoinflammatory Syndrome: A set of disorders characterized by recurrent episodes of systemic and organ-specific inflammation. Blind/Double Blind Study: Blind means study participants do not know which treatment they are getting. Double blind means doctors and study participants do not know which treatment is being used. Treatment could also include a placebo (see placebo). CARRA: The Childhood Arthritis and Rheumatology Research Alliance was created to ease research aimed at finding the cause and cure for childhood rheumatic diseases. Nearly every pediatric rheumatologist in North America is a member of CARRA. CARRA’s vision is to have every patient participate in research in some way. www.carragroup.org CARRA Registry: A database of information collected from children and young adults with pediatric-onset rheumatic diseases such as juvenile idiopathic arthritis (JIA) and lupus. The CARRA Registry allows patients from all over North America to share information including clinical outcomes, patient reported outcomes, and blood sample collection. That research is then shared with researchers to better understand pediatric rheumatic diseases. Clinical Research: A type of research designed to inform treatment decisions by providing evidence that can be compared on the effectiveness, benefits, and harms of two or more treatment options. Clinical Study/Trial: A research study that uses human subjects to evaluate biological or health-related outcomes, or results. Examples include interventional trials and observational trials (see interventional and observational). Comparative Effectiveness Research (CER): A type of clinical research designed to inform treatment decisions by providing evidence that can be compared on the effectiveness, benefits, and harms of two or more treatment options. Research that compares treatment choices. For example: which medication will work better for my disease, drug X or drug Y? Consensus Treatment Plan (CTP): Commonly used treatment approaches for a condition. Developed by CARRA so treatments can be compared to each other. Controlled Study/Trial: A type of clinical trial in which the results of the group that received the intervention being studied (see intervention) are compared to those of the control group which received either the standard of care or no treatment. Cure JM Foundation®: Founded in 2003, Cure JM Foundation is the only organization solely dedicated to supporting Juvenile Myositis (JM) research and improving the lives of families affected by JM. 
Cure JM Foundation's mission is to increase awareness, provide support to the families battling this disease, and fund research into better treatments and an eventual cure for JM. www.curejm.org Electronic Health/Medical Record (EHR/EMR): An electronic version of a patient's medical history that is maintained by the provider over time, such as past medical history and medications. Health Insurance Portability and Accountability Act (HIPAA): Passed in 1996, HIPAA is a law that protects privacy and patient medical records. HIPAA allows patients to control how their health information is used and shared. Inclusion/Exclusion Criteria: Factors that determine whether someone can (inclusion) or cannot (exclusion) participate in a study. Informed Consent Form (ICF): A document that is signed by a research participant before joining a research study that documents their willingness to participate and their understanding of the study, its activities, and its risks. Institutional Review Board (IRB): A group that reviews and approves research on people. The purpose of the IRB is to make sure that all human research is conducted in accordance with all federal, institutional, and ethical rules to protect patients. Intervention: Something taken or done to try to improve health outcomes. Interventions may be drugs, medical devices, procedures, vaccines, and other products that are either in a research/development phase or already approved by the Food and Drug Administration and available for use. Interventions can also include nonmedical approaches, such as educational programs. Juvenile Idiopathic Arthritis (JIA): The most common form of arthritis in children and adolescents. ("Juvenile" in this context refers to an onset before age 16, "idiopathic" refers to a condition with no defined cause, and "arthritis" is the inflammation of the joint lining.) Learning Healthcare System (LHS): Using technology to collect and share healthcare data from patients, clinicians, and the general population to drive better care with faster results. Localized Scleroderma: A form of scleroderma (see scleroderma) that involves isolated patches of hardened skin on the face, hands, and feet, or anywhere else on the body, with no internal organ involvement. Lupus: An autoimmune disease in which the body's immune system mistakenly attacks healthy tissue in many parts of the body. Common symptoms include painful and swollen joints, fever, chest pain, hair loss, mouth ulcers, swollen lymph nodes, feeling tired, and a red rash which is most commonly on the face. Often there are periods of illness, called flares, and periods of remission during which there are few symptoms. Lupus Foundation of America: Founded in 1977, the Lupus Foundation of America, Inc. is devoted to solving the mystery of lupus, one of the world's cruelest, most unpredictable, and devastating diseases, while giving caring support to those who suffer from its brutal impact. They strive to improve the quality of life for all people affected by lupus through programs of research, education, support, and advocacy. www.lupus.org Observational Study: A type of clinical study in which the researcher studies a defined group but does not apply an intervention to or influence any aspect of the group's care or activities (in contrast to an interventional study; see interventional). The researcher collects information/data and makes observations or draws inferences, often about cause and effect. Researchers and/or participants decide their treatment as opposed to being assigned a treatment. 
Outcome Measures: Study results that are evaluated according to a predefined measure. An outcome measure provides a tool or method for objective assessment of the effect of an intervention (see intervention). The outcome of interest is identified and the ways in which it will be measured are determined before a study begins. A way to check if an intervention has helped improve a patient’s health. PARTNERS: Patients, Advocates, and Rheumatology Teams Network for Research and Service. Building the framework to conduct research in pediatric rheumatology in collaboration with families of children with rheumatic disease. Through PARTNERS, parents of children with rheumatic diseases have the opportunity to share their opinions on what they feel is important in caring for their children. Patients and their families are directly involved in making decisions about research agendas and designing research studies because every family is an expert in caring for their child. PARTNERS’ research partners have a voice in every stage of research—from asking the research questions to ranking them in order of importance, from designing the study to enrollment strategies, and finally to sharing the results with all patients. Patient Reported Outcomes (PRO): Any report or information about a patient that comes directly from the patient. This is often a questionnaire filled out by the patient or the patient’s parent or caregiver. PCORI: Formed in 2010 through the Affordable Care Act, PCORI (Patient-Centered Outcomes Research Institute) is an independent, non-profit health research organization that funds and shares research comparing the effectiveness of the various choices patients have when choosing their care. PCORI’s vision is to provide the patient with the information they need to make decisions that reflect their desired health outcomes. PCORI funds comparative effectiveness research (CER) and PCORnet. www.pcori.org PCORnet: National Patient-Centered Clinical Research Network is a national patient data network funded by PCORI. It collects data from large multi-centered health care systems like hospitals and data from individual caregivers and people with disease. The large centers are called Clinical Data Research Networks (CDRNs) and the disease- specific programs are called Patient-Powered Research Networks (PPRNs). The data is owned and protected by the individual CDRNs and PPRNs. The goal of PCORnet is to improve the nation’s capacity to conduct comparative effectiveness research (CER). PCORnet brings together patients, care providers and health systems to improve healthcare and advance medical knowledge. With patients and researchers working side by side, PCORnet will be able to explore the questions that matter most to patients and their families. http://www.pcornet.org/ PR-COIN: The Pediatric Rheumatology Care and Outcomes Improvement Network is a network of rheumatologists, nurses, therapists, social workers and support staff at rheumatology centers who in partnership with families are all working together to transform how care is delivered to children with JIA. The aim of PR-COIN is to develop and evaluate specific disease management strategies to improve the care of children with JIA and to determine how best to incorporate these strategies into clinical practice. PR-COIN is creating a sustainable network that uses a registry database to measure performance, learn more about the health status of JIA patients as well as to inform future improvement projects. 
https://pr-coin.org/ Placebo: A substance that does not contain active ingredients and is made to be physically indistinguishable (that is, it looks and tastes identical) from the actual drug being studied. A harmless treatment that has no active effect on the condition being studied (see blind/double blind study). Primary Purpose: The main reason for conducting a clinical trial. Studies may also have a secondary purpose. Principal Investigator (PI): The lead researcher or person who is responsible for the management and execution of the study project according to the approved protocol (see protocol), which may include multiple study sites. PROMIS Measures: Surveys developed to evaluate well-being. These surveys are commonly used for Patient Reported Outcomes (PROs). Protocol: The written document that describes a clinical study and the plan for its execution. The details include the study's purpose (see primary purpose), objectives, design, and methodology. It may also include relevant scientific background and statistical information. A written set of instructions to perform a study developed to protect patients and answer a research question. Protected Health Information (PHI): Defined by the Health Insurance Portability and Accountability Act of 1996 as individually identifiable health information. PHI can also stand for personal health information. Randomization: The process where study participants are assigned to a treatment by chance. Refractory: When an illness does not respond to treatments. Research/Clinical Research: Research can be defined as the search for an answer to a question. Clinical research is medical research that involves people. Rheumatology: A branch of medicine devoted to the diagnosis and therapy of rheumatic diseases. Physicians who have undergone formal training in rheumatology are called rheumatologists. Rheumatologists deal mainly with immune-mediated disorders of the musculoskeletal system, soft tissues, autoimmune diseases, vasculitides, and heritable connective tissue disorders. Rheumatoid Arthritis: When the body's immune system attacks the lining of the joint capsule (a tough membrane that encloses all the joint parts). This lining, known as the synovial membrane, becomes inflamed and swollen. Scleroderma: A group of autoimmune diseases that may result in changes to the skin, blood vessels, muscles, and internal organs. The disease can be localized to the skin (see localized scleroderma) or involve other organs in addition to the skin (see systemic scleroderma). Self-advocacy: A patient’s ability to speak up for themselves and the things that are important to them. This includes the ability to ask for what they need and want and tell others about their thoughts and feelings. This also refers to knowing one’s rights and responsibilities, speaking up for those rights, and the ability to make choices and decisions that affect one’s life. The ultimate goal is for the patient to decide what they want, then develop and carry out a plan to help achieve it. Sjögren Syndrome: A chronic long-term autoimmune disease that affects the body's moisture-producing glands. This can include degeneration of the salivary and lacrimal glands, resulting in dryness of the mouth and eyes. Standard of Care: Treatment that is accepted by medical experts as a proper way to care for a condition. Uveitis: Inflammation of the uvea, the pigmented layer that lies between the inner retina and the outer fibrous layer of the eye composed of the sclera and cornea. 
The uvea is the pigmented middle of the three concentric layers that make up an eye, and includes the iris, ciliary body, and choroid. Vasculitis: The inflammation of blood vessels. It can affect arteries, veins, and capillaries.
Introduction: In this tutorial you will learn about another kind of encryption technique, one that applies a deterministic algorithm together with a symmetric key to encrypt blocks of plain text. So let's dig into it.
What is a block cipher? A block cipher is an encryption technique that takes a block of plain-text bits as input and processes it to produce a block of cipher-text bits. The block size remains the same (fixed) for any specific scheme, and the block size does not by itself determine the strength of the encryption; rather, the strength and quality of the encryption depend mainly on the key length.
Block Size: It is possible to use almost any block size in an encryption scheme, but there are some points to keep in mind when selecting one:
- Avoid very small block sizes. If you choose a block of n bits, the number of possible plain-text blocks is 2^n, so a larger block size makes dictionary attacks harder to mount.
- An extremely large block size is not a good idea either, because the cipher becomes inefficient to operate.
- It is advisable to keep the block size a multiple of 8 bits. This makes implementation easier, since most processors handle information in multiples of 8 bits (16, 32, 64 bits).
A block cipher takes blocks of a fixed size (say, 64 bits), but the plain-text length will not always be a multiple of the block size you have chosen. The process of adding bits to the last block to bring it up to a full block is called padding (a short padding sketch in Python follows at the end of this section).
Various Block Ciphers: There are many popular block ciphers used in various applications and systems, and many of their algorithms are public. The most prominent are listed below:
- 3DES or Triple DES
- Twofish: a 128-bit block cipher with a variable key length. It was designed as a successor to Blowfish, which has a 64-bit block size.
- AES (we will learn about it in a separate chapter)
- IDEA: a strong cipher with a 64-bit block size and a 128-bit key. Earlier versions of the popular PGP (Pretty Good Privacy) protocol, used for secure mail and other applications, used this cipher. Because it was patented, its use in applications has been limited.
- Serpent: another block cipher with a 128-bit block size and key lengths of 128, 192, or 256 bits. It is slower than the others but is considered one of the most secure encryption algorithms designed.
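Below is a minimal sketch of the padding step described above, written in Python and assuming a PKCS#7-style scheme with an 8-byte (64-bit) block; the message, function names, and choice of scheme are ours for illustration, and real applications should rely on a vetted cryptographic library rather than hand-rolled padding.

```python
# A minimal PKCS#7-style padding sketch, assuming an 8-byte (64-bit) block size.
# Illustrative only: use a vetted cryptographic library in real applications.

BLOCK_SIZE = 8  # bytes (64 bits), the block size used by ciphers such as DES or IDEA


def pad(plaintext: bytes, block_size: int = BLOCK_SIZE) -> bytes:
    """Append n bytes, each of value n, so the length becomes a multiple of block_size."""
    n = block_size - (len(plaintext) % block_size)
    return plaintext + bytes([n] * n)


def unpad(padded: bytes) -> bytes:
    """Remove the padding added by pad()."""
    n = padded[-1]
    if n < 1 or n > len(padded) or padded[-n:] != bytes([n] * n):
        raise ValueError("invalid padding")
    return padded[:-n]


if __name__ == "__main__":
    message = b"HELLO BLOCK CIPHER"      # 18 bytes, not a multiple of 8
    padded = pad(message)                # 24 bytes: 6 padding bytes of value 6
    assert len(padded) % BLOCK_SIZE == 0
    assert unpad(padded) == message
    print(len(message), len(padded))     # 18 24
```

Note that when the plain text is already an exact multiple of the block size, a full extra block of padding is added; this keeps the unpadding step unambiguous.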
LONDON — DEEP-SEA drilling has confirmed that monsoons - the summer onslaught of rains around the Indian Ocean - have been a feature of the region's climate for millions of years. The extreme height and persistent snow cover of the Himalayas cause marked seasonal differences in heating between the mountains and the Indian Ocean. In summer, strong southwesterlies blow as warm air rises above the Indian subcontinent. In winter, as the land mass cools but the ocean remains warm, a reversal in wind direction influences the circulation of ocean waters. This, in turn, affects plankton productivity, which depends on the upwelling of nutrient-rich water. Researchers at Cambridge University in England and Edinburgh University in Scotland who are working with the international Ocean Drilling Program say they have identified evidence in sediment core from under the Indian Ocean that shows changes in plankton productivity and wind velocity. The scientists found that the strength of the monsoon varies in phase with changes in the Earth's orbit around the sun. These changes affect the intensity of incoming solar radiation. This process has been continuing ever since the first monsoon began with the upthrust of the Himalayas, 10 million years ago.
Common Core State Standards are a set of standards, not a curriculum. The Common Core State Standards are a common set of educational standards defining the mathematics and English language arts knowledge and skills all United States students in grades K-12 need to be successfully prepared for college and the workforce in the 21st century. The standards are not a step-by-step guide for classroom instruction; rather, they are an outline of the goals to be reached and skills to be mastered at every grade level and upon graduation. It is important to understand what educators mean when they use the word standards. Standards are the end goals for students. They are a list of skills and facts students need to acquire throughout the course of the school year. The Common Core provides an expected destination, and schools and teachers are free to chart their own course to that destination. This is the important distinction between standards and curriculum. If you imagine standards as a destination, the curriculum is the map to get there. A curriculum outlines the sequence of topics that teachers will cover on their way to the final goal of the standards, building from simpler tasks to more difficult and complex ones. For example, one of the Common Core math standards for eighth grade expects that at the end of that grade, a student should be able to: "Construct a function to model a linear relationship between two quantities. Determine the rate of change and initial value of the function from a description of a relationship or from two (x, y) values, including reading these from a table or from a graph. Interpret the rate of change and initial value of a linear function in terms of the situation it models, and in terms of its graph or a table of values." In this example, teachers could have students model the relationship between any two variables (rainfall and wheat growth, age and height, inflation and GDP) and students would simply need to fit an equation to the data (a short worked example of this calculation follows at the end of this passage). Teachers could similarly build toward this end goal in any way they like, perhaps starting with the Cartesian coordinate plane and moving to reading tables and graphs, or vice versa, or even by an entirely different approach. In short, the Common Core provides a destination, and schools and teachers are free to chart their own course there.
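As a concrete illustration of the standard quoted above (the example data and code are ours, not part of the Common Core text), here is a short Python sketch showing how the rate of change and initial value of a linear function can be determined from two (x, y) values read from a table:

```python
# A small worked example of the eighth-grade expectation quoted above:
# given two (x, y) values, determine the rate of change (slope) and the
# initial value (y-intercept) of the linear function y = m*x + b.

def linear_model(p1, p2):
    """Return (rate_of_change, initial_value) for the line through p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)   # rate of change: change in y per unit change in x
    b = y1 - m * x1             # initial value: the value of y when x = 0
    return m, b


# Example: two rows read from a (hypothetical) table of rainfall (cm) vs. wheat growth (cm)
m, b = linear_model((2, 5), (6, 13))
print(f"growth = {m} * rainfall + {b}")   # growth = 2.0 * rainfall + 1.0
```

For the points (2, 5) and (6, 13), the rate of change is 2 and the initial value is 1, so the fitted model is y = 2x + 1.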
A B cell is a type of white blood cell or lymphocyte that is part of the vertebrate adaptive immune system. B cells secrete antibodies in response to invasion by foreign pathogens. B cells originate in fetal liver and bone marrow in mammals, developing from hematopoietic stem cells (HSCs). HSCs become multipotent progenitor (MPP) cells, then common lymphoid progenitor (CLP) cells, and then go through several more stages of development to become a mature B cell. Multipotent hematopoietic progenitors of B cells migrate into the fetal liver and bone marrow, and take up residence in new environments to commence development along the B lineage pathway. Initially, rearrangements of genes generate precursors that express IgH antibodies. When IgH is able to pair with an IgL light chain, a pre-B cell receptor is formed. In the next stage, autoreactivity of the IgH chain is interrogated. Then IgM is displayed as a B cell receptor on the immature B cells, and these cells are presented with autoantigens. Those with high-affinity autoreactive B cell receptors are deleted, and those with low-affinity autoreactive B cell receptors leave the bone marrow and enter peripheral blood pools, particularly in the lung and gut lymphoid tissues. More than 85% of new, immature B cells die in the bone marrow due to autoantigen recognition. This process generates central tolerance to autoantigens. More detailed characterization of autoantigen presentation occurs at later stages of development. In the spleen, B cells are monitored in transition from immature to mature cells. Mature B cells in the peripheral blood are tested for their ability to recognize foreign antigens. B cells that respond are propagated and undergo a process called hypermutation, where their genes are rearranged to create a B cell receptor that better fits the foreign antigen. B cells that become autoreactive through the hypermutation process and are not destroyed can lead to autoimmune disease.
Microenvironment of the bone marrow
The bone marrow microenvironment crucially shapes the development of B cells. Bone marrow stromal cells create distinct niches that support B cell development. Factors required for B cell development are thought to be produced by cells of the bone marrow, and are delivered to the precursor B cells at an appropriate stage of development. Reticular cells, surrounding bone marrow sinuses, have been shown to be associated with B cells, and may play a role in forming niche environments for B cell development. Other cells that may provide cellular niches include osteoblasts and IL-7 expressing cells.
B cell activation
B cells are activated in the secondary lymphoid organs (SLOs), which are the lymph nodes, tonsils, spleen, Peyer's patches and mucosa associated lymphoid tissue (MALT). Upon maturation, B cells migrate to the SLO. The SLO receives a steady supply of antigens through circulating lymph. B cells are activated when they bind to an antigen through a B cell receptor. The surface receptor CD21 can enhance B cell activation in complex with CD19 and CD81. In that process, the B cell receptor binds an antigen tagged with a fragment of C3 complement protein, and CD21 binds the fragment. It then co-ligates with the bound B cell receptor and a signal transduction cascade is initiated that lowers the activation threshold of the cell. B cells can also undergo T cell dependent activation, which occurs when a B cell receptor binds a T cell-dependent antigen. 
The antigen is then taken up into the B cell and presented to T cells on the cell membrane. A T helper cell activated with the same antigen recognizes and binds the antigen complex, triggering a cascade of signals. Once the B cells receive the signals, they are activated, and undergo a two-step differentiation resulting in short-lived plasmablasts for immediate response and long-lived plasma cells and memory B cells for long term response.
Her enigmatic expression has been the topic of artistic debate for hundreds of years. But the reason the Mona Lisa’s mouth — part smile, part pursed lip — is so confounding has to do with the eyes, according to one Harvard scientist. More specifically, Leonardo da Vinci’s 16th-century masterpiece beguiles observers because of the way their gaze jumps around the picture — from the Mona Lisa’s mouth, to her eyes, to her forehead. Where a person focuses his or her eyes determines the extent of the subject’s smile, said Margaret Livingstone during a recent talk. While many art historians argue that the puzzling effect appears because the Mona Lisa’s “smile is blurry,” Livingstone contends it’s because of a fundamental difference between a person’s central and peripheral vision. “If you look at her eyes and then look at her mouth, doesn’t she seem cheerier when you are looking at her eyes?” Livingstone asked the crowd in an Emerson Hall lecture room on Wednesday. “That’s not your imagination, that’s something really low level, and it has to do with the fact that your acuity falls off from the center of gaze quite a lot.” In other words, a person’s peripheral vision is terrible at discerning fine detail. When a viewer looks at the Mona Lisa’s eyes instead of her mouth, their peripheral vision notices the shadows around her cheeks that seem to expand her smile and make her appear cheerier. “If you take home any message from my talk today, I hope you will take home the idea that vision is information processing, not image transmission,” said Livingstone, a Harvard Medical School professor of neurobiology who specializes in how the brain receives and processes visual information. During the Mind/Brain/Behavior 2012-2013 Distinguished Harvard Lecture, Livingstone delved into several masterworks to explore what happens when we look at a work of art. Livingstone, author of “Vision and Art: The Biology of Seeing,” noted how sections of certain paintings, particularly impressionist works, almost appear to move. The effect, she said, relates to the relative luminance the eyes are able to detect between the painting’s colors. The closer two colors are in luminance, said Livingstone, the more they seem to shimmer. “The impressionists achieved this sense of motion in some of their paintings by using … equal luminant paints,” said Livingstone. One of the fathers of impressionist painting, Claude Monet, captured the sense of moving, rippling water in many of his works because he filled them with “equal luminant patches of color.” Primates have an extraordinary ability to process faces, Livingstone said. She proved her point by rapidly flashing pictures of familiar faces on a screen at the front of the room. “You are seeing these images in a tenth of a second, and yet you are not only detecting faces, you are recognizing particular individuals.” Many in the crowd nodded in agreement. “We can do it better than any computer program,” she said. Humans are so adept at recognizing faces because particular regions of our temporal lobes are strictly dedicated to the task. In addition, “face cells code how any given face differs from the average face,” she said, and they can do so almost instantly. Livingstone said that artists “were way ahead of us on this.” They understood that a straightforward, vertical line drawing of a face is harder to recognize than a caricature. 
That’s because a caricature, explained Livingstone, is when “you take somebody, you compare them to the average, and then you exaggerate the differences.”
In his original paper, Kanner (1943) commented on the intelligent appearance of children with autism and observed that they did well on some parts of tests of intelligence. This view led to the impression that children with autism did not suffer from cognitive delay. Observed difficulties in assessment and low test scores were attributed to “negativism” or “untestability” (Brown and Pace, 1969; Clark and Rutter, 1977). As time went on, it became apparent that, although some areas of intellectual development were often relatively strong, many other areas were significantly delayed or deviant in their development and that probably a majority of children with autism functioned in the mentally retarded range. Various investigators (e.g., Rutter, 1983) began to emphasize the centrality of cognitive-communicative dysfunction. As noted by Sigman and colleagues (1997), studies of normal cognitive development have generally focused either on the process of acquisition of knowledge (emphasizing theories of learning and information processing) or on symbolic development, concept acquisition, and skill acquisition (a combined line of work often based on the theories of Piaget), as well as questions concerning the nature of intelligence. Various authors have summarized the large and growing literature on these topics in autism (e.g. DeMyer et al., 1981; Fein et al., 1984; Prior and Ozonoff, 1998). The interpretation of this literature is complicated by the association of autism with mental retardation in many individuals, by developmental changes in the expression of autism, and by the strong interdependence of various lines of development. For example, deficits in aspects of symbolic functioning may be manifest in problems with play at one time, and in language at a later time. In addition, individuals with autism may attempt compensatory strategies, either spontaneously or through instruction, so profiles of ability may also change over time. Children with autistic spectrum disorders have unique patterns of development, both as a group and as individuals. Many children with autistic spectrum disorders have relative strengths that can be used to buttress their learning in areas that they find difficult. For example, a child with strong visual-spatial skills may learn to read words to cue social behavior. A child with strong nonverbal problem-solving skills may be motivated easily by tasks that have a clear endpoint or that require thinking about how to move from one point to another. A child with good auditory memory may develop a repertoire of socially appropriate phrases from which to select for specific situations. Autistic spectrum disorders are disorders that affect many aspects of thinking and learning. Cognitive deficits, including mental retardation, are interwoven with social and communication difficulties, and many of the theoretical accounts of autistic spectrum disorders emphasize concepts, such as joint attention and theory of mind, that involve components of cognition, communication, and social understanding. Thus, educational interventions cannot assume a typical sequence of learning; they must be individualized, with attention paid to the contribution of each of the component factors to the goals most relevant for an individual child. COGNITIVE ABILITIES IN INFANTS AND VERY YOUNG CHILDREN Early studies on development in autism focused on basic capacities of perception and sensory abilities. 
Although children with autistic spectrum disorders appear to be able to perceive sensory stimuli, their responses to such stimuli may be abnormal (Prior and Ozonoff, 1998). For example, brainstem auditory evoked response hearing testing may demonstrate that the peripheral hearing pathway is intact, although the child’s behavioral response to auditory stimuli is abnormal. In infants and very young children, the use of infant developmental scales is somewhat limited, since such tests have, in general, relatively less predictive value for subsequent intelligence. Indeed, the nature of “intelligence” in this period may be qualitatively different than in later years (Piaget, 1952). Several studies have investigated sensorimotor intelligence in children with autism. The ability to learn material by rote may be less impaired than that involved in the manipulation of more symbolic materials (Klin and Shepard, 1994; Losche, 1990). Attempts made to employ traditional Piagetian notions of sensorimotor development have revealed generally normal development of object permanence, although the capacities to imitate gesture or vocalization may be deficient (Sigman and Ungerer, 1984b; Sigman et al., 1997). The difficulties in imitation begin early (Prior et al., 1975) and are persistent (Sigman and Ungerer, 1984b; Otah, 1987). The specificity of these difficulties has been the topic of some debate (Smith and Bryson, 1994), although it is clear that children with autism usually have major difficulties in combining and integrating different kinds of information and their responses (Rogers, 1998). Although sensorimotor skills may not appear to be highly deviant in some younger children with autism, aspects of symbolic play and imagination, which typically develop during the preoperational period, are clearly impaired (Wing et al., 1977; Riquet et al., 1981). Children with autism are less likely to explore objects in unstructured situations (Kasari et al., 1993; Sigman et al., 1986). Younger children with autism do exhibit a range of various play activities, but the play is less symbolic, less developmentally sophisticated, and less varied than that of other children (Sigman and Ungerer, 1984a). These problems may be the earliest manifestations of what later will be seen to be difficulties in organization and planning (“executive functioning”) (Rogers and Pennington, 1991). Thus, younger children with autism exhibit specific areas of deficiency that primarily involve representational knowledge. These problems are often most dramatically apparent in the areas of play and social imitation. As Leslie has noted (e.g., Leslie, 1987), the capacity to engage in more representational play, especially shared symbolic play, involves some ability for metarepresentation. Shared symbolic play also involves capacities for social attention, orientation, and knowledge, which are areas of difficulty for children with autism. STABILITY AND USES OF TESTS OF INTELLIGENCE IQ scores have been important in the study of autism and autistic spectrum disorders. To date, scores on intelligence tests, particularly verbal IQ, have been the most consistent predictors of adult independence and functioning (Howlin, 1997). IQ scores have generally been as stable for children with autism as for children with other disabilities or with typical development (Venter et al., 1992). 
Though fluctuations of 10– 20 points within tests (and even more between tests) are common, within a broad range, nonverbal IQ scores are relatively stable, especially after children with autism enter school. Thus, nonverbal intelligence serves, along with the presence of communicative language, as an important prognostic factor. Epidemiological studies typically estimate that about 70 percent of children with autism score within the range of mental retardation, although there is some suggestion in several recent studies that this proportion has decreased (Fombonne, 1997). This change may be a function of more complete identification of children with autism who are not mentally retarded, a broader definition of autism that includes less impaired individuals, and greater educational opportunities for children with autism in the past two decades in many countries. It will be important to consider the effects of these possible shifts on interventions. In school-age children, traditional measures of intelligence are more readily applicable than in younger (and lower functioning) individuals. Such tests have generally shown that children with autism exhibit problems both in aspects of information processing and in acquired knowledge, with major difficulties in more verbally mediated skills (Gillies, 1965; McDonald et al., 1989; Lockyer and Rutter, 1970; Wolf et al., 1972; Tymchuk et al., 1977). In general, abilities that are less verbally mediated are more preserved, so that such tasks as block design may be areas of relative strength. Tasks that involve spatial understanding, perceptual organization, and short-term memory are often less impaired (Hermelin and O’Connor, 1970; Maltz, 1981) unless they involve more symbolic tasks (Minshew et al. 1992). There may be limitations in abilities to sequence information cross modally, particularly in auditory-visual processing (Frith, 1970, 1972; Hermelin and Frith, 1971). There is also some suggestion that in other autistic spectrum disorders (e.g., Asperger’s syndrome) different patterns may be noted (Klin et al., 1995). In addition, the ability to generalize and broadly apply concepts may be much more limited in children with autism than other children (Tager-Flusberg, 1981; Schreibman and Lovaas, 1973). As for other aspects of development, programs have been implemented to maximize generalization of learning (Koegel et al., 1999), but this process cannot be assumed to occur naturally. In autism research, IQ scores are generally required by the highest quality journals in descriptions of participants. These scores are important in characterizing samples and allowing independent investigators to replicate specific findings, given the wide variability of intelligence within the autism spectrum. IQ is associated with a number of other factors, including a child’s sex, the incidence of seizures, and the presence of other medical disorders, such as tuberous sclerosis. Several diagnostic measures for autism, including the Autism Diagnostic Interview-Revised, are less valid with children whose IQ scores are less than 35 than with children with higher IQs (Lord et al., 1994). Diagnostic instruments often involve quantifying behaviors that are not developing normally. 
This means that it is difficult to know if the frequency of autism is truly high in severely to profoundly mentally retarded individuals, or if the high scores on diagnostic instruments occur as the result of “floor” effects due to the general absence of more mature, organized behaviors (Nordin and Gillberg, 1996; Wing and Gould, 1979). IQ scores have been used as outcome measures in several studies of treatment of young children with autism (Lovaas, 1993; Sheinkopf and Siegel, 1998; Smith et al., 2000). IQ is an important variable, particularly for approaches that claim “recovery,” because “recovery” implies intellectual functioning within the average range. However, these results are difficult to interpret for a number of reasons. First, variability among children and variability within an individual child over time make it nearly impossible to assess a large group of children with autism using the same test on numerous occasions. Within a representative sample of children with autism, some children will not have the requisite skills to take the test at all, and some will make such large gains that the test is no longer sufficient to measure their skills. This is a difficulty inherent in studying such a heterogeneous population as children with autistic spectrum disorders. The challenge to find appropriate measures and to use them wisely has direct consequences in measuring response to treatment. For example, there is predictable variation in how children perform on different tests (Lord and Schopler, 1989a). Children with autism tend to have the greatest difficulty on tests in which both social and language components are heavily weighted and least difficulty with nonverbal tests that have minimum demands for speed and motor skills (e.g., the Raven’s Coloured Progressive Matrices [Raven, 1989]). Comparing the same child’s performance on two tests, given at different times—particularly a test that combines social, language, and nonverbal skills, or a completely nonverbal test—does not provide a meaningful measure of improvement. Even within a single test that spans infant to school-age abilities, there is still variation in tasks across age that may differentially affect children with autism; this variation is exemplified in many standard instruments such as the Stanford-Binet Intelligence Scales (Thorndike et al., 1986) or Mullen Scales of Early Development (Mullen, 1995). Generally, IQ scores are less stable for children first tested in early preschool years (ages 2 and 3) than for those tested later, particularly when different tests are used at different times. In one study (Lord and Schopler, 1989a), mean differences between test scores at 3 years or younger and 8 years and older were greater than 23 points. These findings have been replicated in other populations (Sigman et al., 1997). Thus, even without special treatment, children first assessed in early preschool years are likely to show marked increases in IQ score by school age (Lord and Schopler, 1989b), also presumably reflecting difficulties in assessing the children and limitations of assessment instruments for younger children. Studies with normally developing children have indicated that there can be practice effects with developmental and IQ tests, particularly if the administration is witnessed by parents who may then, not surprisingly, subsequently teach their children some of the test items (Bagley and McGeein, 1989). 
Examiners can also increase scores by varying breaks, motivation, and order of assessment (Koegel et al., 1997). There are difficulties analyzing age equivalents across different tests because of lack of equality in intervals (Mervis and Robinson, 1999). Deviation IQ scores may not extend low enough for some children with autism, and low normative scores may be generated from inferences based on very few subjects. In the most extreme case, a young child tested with the Bayley Scales at 2 years and a Leiter Scale at 7 years might show an IQ score gain of over 30 points. This change might be accounted for by the change in test (i.e., its emphasis and structure), the skill of the examiner, familiarity with the testing situation, and practice on test measures—all important aspects of the measurement before response to an intervention can be interpreted. Because researchers are generally expected to collect IQ scores as descriptive data for their samples, the shift to reporting IQ scores as outcome measures is a subtle one. For researchers to claim full “recovery,” measurement of a posttreatment IQ within the average range is crucial and easier to measure than the absence of autism-related deficits in social behavior or play. IQ scores, at least very broadly, can predict school success and academic achievement, though they are not intended to be used in isolation. Indeed, adaptive behavior may be a more robust predictor of some aspects of later outcomes (Lord and Schopler, 1989b; Sparrow, 1997). Furthermore, an IQ score is a composite measure that is not always easily dissected into consistent components. Because of the many sources for their variability and the lack of specific relationship between IQ scores and intervention methods, IQ scores on their own provide important information but are not sufficient measures of progress in response to treatment and certainly should not be used as the sole outcome measure. Similar to findings with typically developing children, tests of intellectual ability yield more stable scores as children with autistic spectrum disorders become older and more varied areas of intellectual development can be evaluated. Although the process of assessment can be difficult (Sparrow, 1997), various studies have reported on the reliability and validity of appropriately obtained intelligence test scores (Lord and Schopler, 1989a). Clinicians should be aware that the larger the sampling of intellectual skills (i.e., comprehensiveness of the test or combination of tests), the higher will be the validity and accuracy of the estimate of intellectual functioning (Sparrow, 1997). GENERAL ISSUES IN COGNITIVE ASSESSMENT There are several important problems commonly encountered in the assessment of children with autism and related conditions. First, it is common to observe significant scatter, so that, in autism, verbal abilities may be much lower than nonverbal ones, particularly in preschool and school-age children. As a result, overall indices of intellectual functioning may be misleading (Ozonoff and Miller, 1995). Second, correlations reported in test manuals between various assessment batteries may not readily apply, although scores often become more stable and predictive over time (Lord and Schopler, 1989a; Sparrow, 1997). Third, for some older children with autism standard scores may fall over time, reflecting the fact that while gains are made, they tend to be at a slower rate than expected given the increase in chronological age. 
This drop may be particularly obvious in tests of intelligence that emphasize aspects of reasoning, conceptualization, and generalization. Approximately 10 percent of children with autism show unusual islets of ability or splinter skills. These abilities are unusual either in relation to those expected, given the child’s overall developmental level, or, more strikingly, in relation to normally developing children. The kinds of talents observed include drawing, block design tasks, musical skill, and other abilities, such as calendar calculation (Treffert, 1989; Shah and Frith, 1993; Prior and Ozonoff, 1998). Hermelin and colleagues (e.g., Hermelin and Frith, 1991) noted that these unusual abilities may be related to particular preoccupations or obsessions. Such abilities do not seem to be based just on memory skills; they may reflect other aspects of information processing (Pring et al., 1995). In summary, general measures of intellectual functioning, such as IQ scores, are as stable and predictive in children with autistic spectrum disorders as in children with other developmental disorders, but this does not mean that these measures do not show individual and systematic variation over time. Because IQ scores provide limited information and there are complex implications of test selection across ages and developmental levels, IQ scores should not be considered a primary measure of outcome, though they may be one informative measure of the development of the children who participate in an intervention program. Specific cognitive goals, often including social, communicative, and adaptive domains, are necessary to evaluate progress effectively. Direct evaluations of academic skills are also important if children are learning to read or are participating in other academic activities. THEORETICAL MODELS OF COGNITIVE DYSFUNCTION IN AUTISM Various theoretical notions have been advanced to account for the cognitive difficulties encountered in autism. The “theory of mind” hypothesis proposes that individuals with autism are not able to perceive or understand the thoughts, feelings, or intentions of others; i.e., they lack a theory of mind and suffer from “mind blindness” (Leslie and Frith, 1987; Leslie, 1992; Frith et al., 1994). Various experimental tasks and procedures used to investigate this capacity generally indicate that many somewhat more able (e.g., verbal) children with autism do indeed lack the capacity to infer mental states. This capacity is viewed as one aspect of a more general difficulty in “metarepresentation” (Leslie, 1987) that is presumed to be expressed in younger children by difficulties with understanding communicative gesture and joint attention (Baron-Cohen, 1991). While not all children with autistic spectrum disorders entirely lack a theory of mind (Klin et al., 1992), they may be impaired to some degree (Happe, 1994). There appear to be strong relationships between verbal ability and theory of mind capacities in autism (e.g., Ozonoff et al., 1991), though many language-impaired non-autistic children can normally acquire these skills (Frith et al., 1991). The theory of mind hypothesis has been a highly productive one in terms of generation of research, and in focusing increased attention on the social aspects of autism, including deficits in joint attention, communication, and pretense play (see Happe, 1995, for a summary). 
However, specific behaviors that evidence a deficit in theory of mind are not by themselves sufficient to yield a diagnosis of autism, which can be associated with other cognitive deficits. In addition, research in which theory of mind concepts were taught to individuals with autism did not result in general changes in social behavior, suggesting that links between theory of mind and sociability are not simple (Hadwin et al., 1997). A second body of work has focused on deficits in executive functioning, that is, in forward planning and cognitive flexibility. Such deficits are reflected in difficulties with perseveration and lack of use of strategies (see Prior and Ozonoff, 1998). Tests such as the Wisconsin Card Sort (Heaton, 1981) and the Tower of Hanoi (Simon, 1975) have been used to document these difficulties. In preschool children, the data on executive functioning deficits are more limited. McEvoy and colleagues (1993) used tasks that required flexibility and response set shifting, and noted that younger children with autism tended to exhibit more errors in perseveration than either mentally or chronologically age-matched control children. More recently, others did not find that the executive functioning in preschoolers with autistic spectrum disorders differed from that in other children (Griffith et al., 1999; Green et al., 1995). A third area of theoretical interest has centered on central coherence theory, in which the core difficulties in autism are viewed as arising from a basic impairment in observing meaning in whole arrays or contexts (Frith, 1996; Jarrold et al., 2000). As Frith (1996) has noted, it is likely that a number of separate cognitive deficits will be ultimately identified and related to the basic neurobiological abnormalities in autism. Neuropsychological assessments are sometimes of help in documenting sensory-perceptual, psychomotor, memory, and other skills. The utility of more traditional neuropsychological assessment batteries in children, especially in young children, is more limited than for adults. Extensive neuropsychological assessments may not provide enough useful information to be cost-effective. However, selected instruments may be helpful in answering specific questions, particularly in more able children. Exploring a child’s visual-motor skills or motor functioning can be of value for some children whose learning and adaptation appear to be hindered by deficits in these skills. (Motor and visual motor skills are discussed in detail in Chapter 8.) ACADEMIC INSTRUCTION AND OUTCOMES In addition to interventions that have been designed to improve intellectual performance (e.g., scores on IQ tests), there is a small literature on instructional strategies designed to promote the academic performance of young children with autism. Academic performance, for this discussion, refers to tasks related to traditional reading and mathematics skills. This literature consists primarily of single-subject design, quasi-experimental design, and descriptive observational research, rather than randomized clinical trials. The studies have usually included children with autism at the top of the age range covered in this report (i.e., ages 5–8), and the participant samples often include older children with autistic spectrum disorders as well. Notwithstanding these caveats, there is evidence that some young children with autistic spectrum disorders can acquire reading skills as a result of participation in instructional activities. 
There is very limited research on instructional approaches to promoting mathematics skills. A range of instructional strategies have involved children with autistic spectrum disorders. In early research, Koegel and Rincover (1974) and Rincover and Koegel (1977) demonstrated that young children with autism could engage in academic tasks and respond to academic instruction as well in small-group instructional settings as they did in one-to-one instruction with an adult. Kamps and colleagues replicated and extended these findings on small-group instruction of academic tasks to a wider range of children within the autism spectrum and other developmental disabilities (Kamps et al., 1990; Kamps et al., 1992). In another study, Kamps and colleagues (1991) first performed descriptive observational assessment of children with autism in a range of classroom settings. They used these data to identify the following commonly used instructional approaches: Incorporate naturally occurring procedures into intervention groups across classrooms. Include three to five students per group. Use individual sets of materials for each student. Use combination of verbal interaction (discussion format) and media. Use five-minute rotations of media/concept presentation. Use a minimum of three sets of materials to teach each concept. Use frequent group (choral) responding. Use fast-paced random responding. Use serial responding—three to five quick responses per student. Use frequent student-to-student interactions. They then conducted a series of single-subject designs that demonstrated experimentally (with treatment fidelity measures documenting implementation) the relationship between the instructional measures and the children’s performance on criterion-referenced assessments of academic tasks. This combination of instructional strategies (choral responding, student-to-student responding, rotation of materials, random student responding) was also found to be effective in teaching language concepts to elementary-aged children with autism in a later study (Kamps et al., 1994a). In their subsequent research, Kamps and colleagues (1994b) have examined the use of classwide peer tutoring (i.e., classmates provide instruction and practice to other classmates) with young children with autistic spectrum disorders. In a single-subject design study, these researchers found increased reading fluency and comprehension for children who received peer tutoring, as compared with those who received traditional reading instruction. Other strategies have also appeared in the literature. Using an incidental teaching technique, McGee and colleagues (1986) embedded sight-word recognition tasks in toy play activities and found that two children with autism acquired sight-word recognition skills and generalized those skills to other settings. Cooperative learning groups are another instructional approach. Provided tutoring by peers, a group of children with autistic spectrum disorders practiced reading comprehension and planned an academic game; the children increased their academic engagement in reading (Kamps et al., 1995). There is also some evidence that children with autism might benefit from computer-assisted instruction (CAI) in reading. Using a single-subject design, Chen and Bernard-Optiz (1993) compared delivery of academic tasks by an instructor or through a computer monitor and found higher performance and more interest from children in the CAI than the adult-delivered intervention. 
In a study conducted in Sweden, Heimann and colleagues (1995) used a CAI program and a traditional instructional approach to present lessons to students. Children with autism made significant gains in the CAI program (compared with traditional instruction), while typically developing children progressed similarly in both settings. These two studies suggest that a CAI format for presenting instruction to young children with autism may be useful, but the results are far from conclusive and require further study. FROM RESEARCH TO PRACTICE There is need for research on the development of more specific measures of important areas of outcome in cognition, including the acquisition and generalization of problem-solving and other cognitive skills in natural contexts (e.g., the classroom and the home) and the effects of these skills on families and other aspects of children’s lives. There is also a need for research to define appropriate sequences of skills that should be taught through educational programs for young children with autistic spectrum disorders, as well as methods for selecting those sequences, while developing programs for individual children.
Green living refers to activity that contributes to minimizing or eliminating toxins from the environment and improving personal health and energy. There are many activities that can have a positive impact on the environment, such as eating organically grown food, choosing paper bags instead of plastic bags, recycling beer cans, installing an environmentally friendly floor in your home or driving a fuel-efficient car. A recent emerging trend is the recycling of automobiles to achieve a greener environment. When cars have outlived their usefulness, most are taken to scrap yards where they are shredded and the remaining material (consisting primarily of iron and steel) is then recycled back into automobiles, appliances and other products. Automobiles are among the most recycled products in the world, as three out of four tonnes of new steel is made from recycled steel. Other car parts such as brake pads, shoes, oil filters, rubberized seals, polyurethane seat foam, seat covers, floor mats, rims, windshield glass and side window glass can also be recycled for use in new automobiles. Automobile recycling confers countless benefits on the environment. It helps preserve natural resources and protects the environment from contamination by recycling usable components and parts. It helps reduce water and air pollution and saves landfill space. As automobile recycling minimizes the need for processing virgin materials, it helps reduce harmful emissions, particularly sulfur dioxide, which has detrimental effects on the environment and human health. Automobile recycling uses far less energy than is needed for car manufacturing. This, in turn, reduces the amounts of carbon dioxide, carbon monoxide and other carbon compounds released into the atmosphere and helps conserve valuable reserves of gas, coal and oil. Recycling vehicles and their spare parts can save as much as 80 million barrels of oil and 40,000 tonnes of coal a year. Automobile recycling helps conserve energy, as recycling one kilogram of steel saves enough energy to power a 60-watt light bulb for 85 hours. Recycling automotive glass (windshields and sunroofs) helps reduce water pollution by 45 percent and air pollution by 25 percent. Recycling metal saves up to 70 percent in energy and 30 percent in water consumption. Recycling one ton of oil filters not only saves 9 cubic yards of landfill space, but it also yields 1,700 pounds of steel. Recycling oil filters helps prevent petroleum hydrocarbons from contaminating water, air and soil. It helps reduce water pollution by about 75 percent and air pollution by 80 percent. Recycling of brake pads and shoes produces a combination of synthetic materials and copper. Auto recyclers should control the recycling operations at the scrap yards to reduce the risk of releasing harmful petroleum compounds and toxic fluids into the environment. Heavy metals such as cadmium, lead, arsenic, mercury, aluminum and chromium should not be allowed to leach into the ground. Similarly, acids from solvents, batteries and degreasers should be properly disposed of, as they can interfere with the chemistry of soil and create health hazards for marine life as well as humans.
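The energy figures quoted above can be sanity-checked with simple arithmetic. Below is a minimal sketch that converts the article's own bulb figure (a 60-watt bulb running for 85 hours per kilogram of recycled steel) into kilowatt-hours; the per-kilogram saving itself is taken from the text as an assumption, not independently verified.

```python
# Rough check of the energy saving quoted for recycled steel:
# "recycling one kilogram of steel saves enough energy to power
#  a 60-watt light bulb for 85 hours."

BULB_POWER_W = 60   # watts, from the article
HOURS_PER_KG = 85   # hours per kilogram of recycled steel, from the article

# Energy = power x time; 1 kWh corresponds to 1000 W running for 1 hour
energy_per_kg_kwh = BULB_POWER_W * HOURS_PER_KG / 1000
print(f"Implied saving: {energy_per_kg_kwh:.1f} kWh per kg of steel")

# Scale up to one tonne (1000 kg) of recycled steel
print(f"Per tonne: {energy_per_kg_kwh * 1000:.0f} kWh")
```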
1. Magnets Can Find and Attract Iron Ore
Bring sand inside the classroom and place it in a shallow tray. Have children drag magnets through the sand. The magnets will attract the iron particles in the sand and pick them up. Examine the iron filings under a magnifying glass.
2. Gravity Pulls Everything Back to Earth
Introduce the concept of gravity and prepare the children to be scientists. Let them know they are going to do a scientific experiment. Take several balls and bean bags of varying sizes outside. Let children take turns throwing the balls and bean bags into the air. Can anyone make their ball or bean bag stay up in space? Why not?
3. All Things Feel the Pull of Gravity, Even Very Light Things
Give each child a feather and see how long they can keep the feather afloat before gravity brings it back to earth. Have them blow on the feather or use a fan. Do the same activity with air-filled balloons.
4. Magnets Work on Certain Metals
Provide children with a collection of interesting small things, like paper clips, nails and safety pins, that respond to magnets. Using a magnet, have children sort items into things that the magnet can move and things magnets cannot move.
5. Gravity Works on Everything in the World
Roll a ball on the ground and see how far the ball travels before it stops. Take a ball and roll it down the playground slide. The ball also rolls until it stops. Now take a ball and roll it up the slide. Watch as it stops and then rolls back down. It is gravity that is pulling the ball back to earth.
6. Magnets Can Pull and Push
Use two bar magnets. (Note: All bar magnets have a north pole and a south pole; most are marked with the initials "N" and "S" or are color coded.) "Unlike" poles (north-south) always attract each other. "Alike" poles always repel each other. Have children experiment with the two magnets. They should try pushing "alike" poles together. Have them pull "unlike" poles apart. (Note: Do not drop magnets, because they will lose their magnetic strength.)
7. Some Magnets Are Strong Enough to Work Under Water
Make a fishing pole by using a piece of wood dowel. Tie a strong magnet to a string and attach the string to the pole. Cut small fish out of paper or plastic. Attach a large paper clip to each fish. Use a clear plastic shoe box filled with water as the lake. Add the fish. Let's go fishing! Magnets work through water.
Reprinted with the permission of PBS. © PBS 2003 - 2008, all rights reserved.
Students learn that a scatter plot is a graph in which the data is plotted as points on a coordinate grid, and note that a "best-fit line" can be drawn to determine the trend in the data. If the x-values increase as the y-values increase, the scatter plot represents a positive correlation. If the x-values increase as the y-values decrease, the scatter plot represents a negative correlation. If the data is spread out so that it is not possible to draw a "best-fit line", there is no correlation. Students are then asked to create scatter plots using given data, and answer questions based on given scatter plots.
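For readers who want to see these ideas concretely, here is a minimal sketch in Python (assuming the commonly used numpy and matplotlib libraries are available); the hours-studied and test-score data are invented purely for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative data: hours studied (x) vs. test score (y)
x = np.array([1, 2, 3, 4, 5, 6, 7, 8])
y = np.array([52, 55, 61, 60, 68, 74, 79, 83])

# A best-fit line (degree-1 polynomial) summarises the trend in the data
slope, intercept = np.polyfit(x, y, 1)

# The correlation coefficient: positive here, so y rises as x rises;
# a value near 0 would mean no clear correlation
r = np.corrcoef(x, y)[0, 1]
print(f"best-fit line: y = {slope:.2f}x + {intercept:.2f}, correlation r = {r:.2f}")

plt.scatter(x, y)                   # the scatter plot itself
plt.plot(x, slope * x + intercept)  # the best-fit line drawn on top
plt.xlabel("Hours studied")
plt.ylabel("Test score")
plt.show()
```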
Day 7 of the 12 Days of Christmas Freebies on Kindle from Knowledge Box Central is now available! Merry Christmas and Happy New Year! The Natural Inquirer is a middle school science education journal! Scientists report their research in journals, which enable scientists to share information with one another. This journal, The Natural Inquirer, was created so that scientists can share their research with middle school students. Each article tells you about scientific research conducted by scientists in the USDA Forest Service. All of the research in this journal is concerned with nature, trees, wildlife, insects, outdoor activities and water. First, students will "meet the scientists" who conduct the research. Then students read special information about science, and then about the environment. Students will also read about a specific research project, written in the way that scientists write when publishing their research in journals. Students become scientists when they do the Discovery FACTivity, learning vocabulary words that help in understanding the articles. At the end of each section of Natural Inquirer articles, students will find a few questions to help them think about the research. These questions are not a test! They are intended to help students think more about research and can be used for class discussions. Would you like to win a FREE lapbook based on the movie FROZEN? Disney’s FROZEN has taken the world by storm. Take advantage of the opportunity by using this lapbook as a teaching tool. FROZEN is filled with life lessons and moral values that can be taught while children have fun working through and putting together this amazing hands-on lapbook! Designed for grades K-8. You must be a registered member of Teacher's Notebook to enter to win. Cartoon from Facebook, by Todd Wilson, The Family Man
The vertebrate record for most offspring has been set by the oceanic sunfish, which can produce a staggering 300 million eggs at a time. These fish can reach 3.2 metres (10.5 feet) in height and live for 10 years. They spend most of their time basking in the warm sun that penetrates the top few metres of the sea, but dive deep to hunt jellyfish up to 40 times per day. The deepest recorded dive for this enormous fish is just under 650 metres (2,100 feet), but sunfish need to surface quickly to warm up. Sunfish produce an enormous volume of eggs and release them into the open ocean to be fertilised by sperm released by a male. When the eggs hatch, the spiny sunfish larvae are less than three millimetres (0.1 inches) long and are unable to propel themselves. They feed on smaller plankton and resemble miniature puffer fish for half their lives before going through a phase of rapid transformation into their giant adult form.
The Siamese crocodile (Crocodylus siamensis) is a freshwater crocodile native to Indonesia (Borneo and possibly Java), Brunei, East Malaysia, Laos, Cambodia, Burma, Thailand, and Vietnam. The species is critically endangered and already extirpated from many regions. Its other common names include Siamese freshwater crocodile, Singapore small-grain, cocodrilo de Siam, crocodile du Siam, buaja, buaya kodok, jara kaenumchued, and soft-belly. The Siamese crocodile is a small, freshwater crocodilian, with a relatively broad, smooth snout and an elevated, bony crest behind each eye. Overall, it is an olive-green colour, with some variation to dark green. Young specimens measure 1.2–1.5 m (3.9–4.9 ft) and weigh 6–12 kg (13–26 lb), growing up to 2.1 m (6.9 ft) and a weight of 40–70 kg (88–154 lb) as adults. The largest female specimens can measure 3.2 m (10 ft) and weigh 150 kg (330 lb). Large male specimens can reach 4 m (13 ft) and 350 kg (770 lb) in weight. Most adults do not exceed 3 m (10 ft) in length, although hybrids in captivity can grow much larger.
Distribution and habitat
The historic range of the Siamese crocodile included most of Southeast Asia. This species is now extinct or nearly extinct in the wild in most countries except Cambodia. Formerly it was found in Cambodia, Indonesia (Borneo and possibly Java), Laos, Malaysia, Thailand, Vietnam, Brunei, and Burma.
Biology and behaviour
Despite conservation concerns, many aspects of C. siamensis life history in the wild remain unknown, particularly regarding its reproductive biology. Very little is known about the natural history of this species in the wild, but females do appear to build mound-nests constructed from scraped-up plant debris mixed with mud. In captivity, these crocodiles breed during the wet season (April to May), laying between 20 and 50 eggs, which are then guarded until they hatch. After incubation, the female will assist her young as they break out of their eggs and then carry the hatchlings to the water in her jaws. Pure, unhybridised examples of this species are generally unaggressive towards humans, and unprovoked attacks are unknown.
Conservation status and threats
It is one of the most endangered crocodiles in the wild, although it is extensively bred in captivity. Siamese crocodiles are under threat from human disturbance and habitat occupation, which is forcing remaining populations to the edges of their former range. Extinct from 99% of its original range, the Siamese crocodile is considered one of the least studied and most critically endangered crocodilians in the world. Although few wild populations remain, more than 700,000 C. siamensis are held on commercial crocodile farms in Southeast Asia. In 1992, it was believed to be extremely close to or fully extinct in the wild until, in 2000, National Geographic's resident herpetologist Dr. Brady Barr caught one while filming in Cambodia. Since then, a number of surveys have confirmed the presence of a tiny population in Thailand (possibly numbering as few as two individuals, discounting recent reintroductions), a small population in Vietnam (possibly fewer than 100 individuals), and more sizeable populations in Burma, Laos and Cambodia. In March 2005, conservationists found a nest containing juvenile Siamese crocodiles in the southern Lao province of Savannakhet. There are no recent records from Malaysia or Brunei. A significant population of the crocodiles is known to be living in East Kalimantan, Indonesia.
Factors causing loss of habitat include conversion of wetlands for agriculture, use of chemical fertilisers, use of pesticides in rice production, and an increase in the population of cattle. The effects of warfare stemming from the conflicts in Vietnam, Laos, and Cambodia during the Vietnam War (from land mines to aerial bombardment) have also been cited as contributing factors. Many river systems, including those in protected areas, have hydroelectric power dams approved or proposed, which are likely to cause the loss of about half of the remaining breeding colonies within the next ten years. One cause of habitat degradation through hydrological change is the construction of dams on the upper Mekong River and its major tributaries. Potential impacts of dam construction include wetland loss and an altered flooding cycle, with a dry-season flow 50% greater than under natural conditions.
Exploitation and fragmentation
Illegal capture of wild crocodiles to supply farms is an ongoing threat, as is incidental capture and drowning in fishing nets and traps. C. siamensis currently has extremely low and fragmented remaining populations, with little proven reproduction in the wild. Siamese crocodiles have historically been captured for skins and to stock commercial crocodile farms. In 1945, skin hunting for commercial farms was banned by the French colonial administration of Cambodia. In the late 1940s, wild populations spurred the development of farms and the harvesting of wild crocodiles to stock them. Protection was abolished by the Khmer Rouge (1975–79) but later reinstated under Article 18 of the Fishery Law of 1987, which "forbids the catching, selling, and transportation of...[wild] crocodiles..." Crocodile farming now has a huge economic impact in the provinces surrounding Tonle Sap, where 396 farms held over 20,000 crocodiles in 1998. In addition, many crocodiles have been exported from Cambodia since the mid-1980s to stock commercial farms in Thailand, Vietnam, and China. Despite legal protection, a profitable market for the capture and sale of crocodiles to farms has existed since the early 1980s. This chronic overharvesting has led to the decline of the wild Siamese crocodile.
Conservation and management
The current situation of C. siamensis represents a significant improvement over the status reported in the 1992 Action Plan (effectively extinct in the wild), but poses major new challenges for quantitative survey and effective conservation action if the species is to survive. While the species remains critically endangered, there is a sufficient residual wild population, dispersed among many areas and countries, to provide a basis for recovery. If the pressures which have caused the virtual disappearance of this species in Thailand, Malaysia and Indonesia can be controlled or reversed, then the species is likely to survive. The Siamese crocodile is relatively unthreatening to people (compared to C. porosus), and the coexistence of people and crocodiles in natural settings seems possible. The powerful economic force of the commercial industry based on C. siamensis also needs to be mobilised and channelled for conservation advantage. Considerable effort and action are still required, but the species has a reasonable chance of survival if the necessary actions can be implemented. Yayasan Ulin (The Ironwood Foundation) is running a small project to conserve an important wetland habitat in the area of East Kalimantan which is known to contain the crocodiles.
Most of them, though, live in Cambodia, where isolated, small groups are present in several remote areas of the Cardamom Mountains, in the southwest of the country, and also in the Vireakchey National Park, in the northeast of the country. Fauna and Flora International is running a programme in the district of Thmo Bang, Koh Kong province, where villagers are financially encouraged to safeguard known crocodile nests. The Araeng River is considered to have the healthiest population of Siamese crocodiles in the world, although this may soon change after the completion of a massive dam on the river. Fauna and Flora International, in collaboration with several Cambodian government departments, is planning to capture as many crocodiles as possible from this river and reintroduce them into another, ecologically suitable area before the completion of the dam and the subsequent flooding of the whole area effectively renders their current habitat unsuitable. During the heavy monsoon period of June–November, Siamese crocodiles take advantage of the increase in water levels to move out of the river and into large lakes and other local bodies of water, returning to their original habitat once water levels start receding to their usual levels. A smaller population is also thought to exist in the Ta Tay River and in the district of Thmo Bang. Poaching is a severe threat to the remaining wild population in the area, with the value of small specimens reaching hundreds of dollars on the black market, where they are normally taken into crocodile farms and mixed with other, larger species. The total wild population is unknown, since most groups are in isolated areas where access is extremely complicated. A number of captively held individuals are the result of hybridization with the saltwater crocodile, but several thousand "pure" individuals do exist in captivity and are regularly bred at crocodile farms, especially in Thailand. Bang Sida National Park in Thailand, near Cambodia, has a project to reintroduce the Siamese crocodile into the wild. A number of young crocodiles have been released into a small and remote river in the park, not accessible to visitors. The Phnom Tamao Wildlife Rescue Centre in Cambodia conducted DNA analysis of 69 crocodiles in 2009 and found that 35 of them were purebred C. siamensis. Conservationists from Fauna and Flora International and Wildlife Alliance plan to use these to launch a conservation breeding program in partnership with the Cambodian Forestry Administration. The Wildlife Conservation Society (WCS) is working with the government of Lao PDR on a new programme to save this critically endangered crocodile and its wetland habitat. In August 2011, a press release announced the successful hatching of a clutch of 20 Siamese crocodiles. These eggs were then incubated at the Laos Zoo. This project represents a new effort by WCS to conserve the biodiversity and habitat of Laos’ Savannakhet Province, promotes conservation of biodiversity across the whole landscape, and relies on community involvement from local residents. High priority projects include:
- Status surveys and development of crocodile management and conservation programmes in Cambodia and Lao PDR: These two countries appear to be the remaining stronghold of the species. Identifying key areas and populations, and obtaining quantitative estimates of population size as a precursor to initiating conservation programs, is needed.
- Implementation of protection of habitat and restocking in Thailand: Thailand has the best-organized protected-areas system, the largest source of farm-raised crocodiles for restocking, and the most developed crocodile management programme in the region. Although the species has virtually disappeared from the wild, re-establishment of viable populations in protected areas is feasible.
- Protection of crocodile populations in Vietnam: A combination of habitat protection and captive breeding could prevent the complete loss of the species in Vietnam. Surveys, identification of suitable localities and the implementation of a conservation programme coordinated with the captive breeding efforts of Vietnamese institutions are needed.
- Investigation of the taxonomy of the freshwater crocodiles in Southeast Asia and the Indo-Malaysian Archipelago: The relationships among the freshwater crocodiles in the Indo-Malaysian Archipelago are poorly understood. Clarification of these relationships is of scientific interest and has important implications for conservation.
Other projects include:
- Coordination of captive breeding, trade and conservation in the Southeast Asian region: Several countries in the region are already deeply involved in captive breeding programs for commercial use. Integration of this activity with necessary conservation actions for the wild populations (including funding surveys and conservation) could be a powerful force for conservation. A long-term aim could be the re-establishment of viable wild populations and their sustainable use by ranching.
- Maintain a stock of pure C. siamensis in crocodile farms: The bulk of the captives worldwide are maintained in several farms in Thailand where extensive interbreeding with C. porosus has taken place. Hybrids are preferred for their superior commercial qualities, but the hybridisation threatens the genetic integrity of one of the most threatened species of crocodilians. Farms should be encouraged to segregate genetically pure Siamese crocodiles for conservation, in addition to the hybrids they are promoting for hide production.
- Survey and protection of Siamese crocodiles in Indonesia: Verification of the presence of C. siamensis in Kalimantan and Java is a first step towards developing protection for the species within the context of the developing crocodile management strategy in Indonesia.
In Pop Culture
A Siamese crocodile stars as the titular monster in the 1978 Thai film Crocodile.
Lemur (Latin lemures, "nocturnal spirits") is the common name for more than 40 species of primitive primates. Lemurs make up five closely related families within the primate order: the typical or true lemurs; the sportive lemurs; the dwarf lemurs and mouse lemurs; the indri, sifaka, and avahi; and the aye-aye. Lemurs are confined to the island of Madagascar off the east coast of Africa, and true lemurs were introduced to the nearby Comoros islands within the past few hundred years. Ongoing habitat loss endangers nearly all lemur species. Lemurs resemble advanced primates chiefly in the structure of their hands and feet. The thumb of the hand and the great toe of the foot are well developed and opposable, which means that the tips of the thumb and great toe can touch the tips of the other fingers and toes in a way that is particularly useful for grasping tree branches and small objects. The fingers and toes of lemurs end in nails, although the second toe of the hind foot of many species has a long claw, which the lemur uses to comb and clean its soft, luxurious fur. Most lemurs have sharp, pointed muzzles and large eyes. The eyes face forward and give lemurs stereoscopic vision, or depth perception. Except for the indri, all lemurs have long tails, but the tail is never prehensile—it cannot be wrapped around branches and used as an extra grip. Lemurs are prosimians, a suborder of primates that also includes the galago, loris, and potto. Like all prosimians, they have a dental comb—their lower front teeth are fused and tilted forward, making a tool that is used to groom their fur. Lemurs range in size from the pygmy mouse lemur, only 30 g (1 oz) in weight and 20 cm (8 in) in length (including the tail), to the indri, which is about 75 cm (about 2.5 ft) tall and about 7 to 10 kg (15 to 22 lb) in weight. Generally, the dwarf and mouse lemurs are about the same size as squirrels, with a tail about as long as the body. True lemurs and sportive lemurs are larger, about the size of cats. The aye-aye is slightly larger than the true lemur and has a long tail that is about one and a half times as long as its body. All lemurs feed primarily on flowers, leaves, and fruit. The true lemurs and the indri, sifaka, and avahi are strictly vegetarian. The dwarf and mouse lemurs supplement their diet with insects, small frogs, birds, lizards, and bird eggs, and the aye-aye eats insect larvae. Some species eat tree sap or gum, and some eat insect secretions. Some species, such as the gentle lemurs and the sportive lemurs, are able to eat leaves or bamboo, both of which are hard to digest. The sportive lemurs pass food through their stomachs several times, eating their feces in order to extract the available nutrients. Lemurs are forest creatures that are arboreal (live in trees), except for the ring-tailed lemur, which travels extensively on the ground, and the crowned lemur, which may spend part of its time on the ground. Most small lemurs are nocturnal, or active at night, while large lemurs are diurnal, or active during the day. Like the rest of the primates, lemurs display a wide variety of social structures. Most lemurs are social, living in small groups, although the aye-aye lives alone. Some lemurs live in family groups of a mated pair and their young, but other species live in matriarchal groups in which the females dominate the males.
Females stay in the group they were born into, while males transfer from group to group. In some species that live in groups, the group members disperse each day to feed alone, while in other species, the group stays together while feeding. Most lemurs mate between April and June (the southern hemisphere’s fall season), and females give birth to one or two young in October or November (the southern hemisphere’s spring). However, mouse lemurs may give birth twice in a year, especially if the first litter dies. Dwarf lemurs have litters with up to four young. Lemurs probably first arrived on the island of Madagascar about fifty million years ago, when the island was closer to Africa. Madagascar was an ideal home for these early lemurs because it lacked many of the large predators that hunted lemur-like animals in Africa. With fewer predators and less competition, lemurs quickly spread throughout the island and diversified into a rich array of species. The first humans arrived between 1500 and 2000 years ago and quickly hunted at least 14 of the larger species of lemurs to extinction. Currently, the forests of Madagascar are rapidly being cut down, and as a result 12 species of lemur are endangered and another 20 species are listed as being vulnerable to extinction by the World Conservation Monitoring Centre. Scientific classification: The true lemurs make up the family Lemuridae and include the ring-tailed lemur, classified as Lemur catta, and the crowned lemur as Lemur coronatus. Gentle lemurs make up the genus Hapalemur within the Lemuridae family. Sportive lemurs make up the family Megaladapidae. The family Cheirogaleidae includes the dwarf lemurs and the mouse lemurs. The pygmy mouse lemur is classified Microcebus myoxinus. The family Indriidae includes the indri, which is classified as Indri indri; the avahi, which is classified as Avahi laniger; and the two sifaka species, classified as Propithecus verreauxi and Propithecus diadema. The aye-aye makes up the family Daubentoniidae and is classified as Daubentonia madagascariensis. Text from Microsoft Encarta
Microsatellites, or Simple Sequence Repeats (SSRs), are polymorphic loci present in nuclear DNA that consist of repeating units of 1-4 base pairs in length. They are typically neutral and co-dominant and are used as molecular markers, with wide-ranging applications in the field of genetics, including kinship and population studies. Microsatellites can also be used to study gene dosage (looking for duplications or deletions of a particular genetic region). One common example of a microsatellite is a (CA)n repeat, where n is variable between alleles. These markers often present high levels of inter- and intra-specific polymorphism, particularly when tandem repeats number ten or greater. The repeated sequence is often simple, consisting of two, three or four nucleotides (di-, tri-, and tetranucleotide repeats respectively), and can be repeated 10 to 100 times. CA nucleotide repeats are very frequent in human and other genomes, and are present every few thousand base pairs. As there are often many alleles present at a microsatellite locus, genotypes within pedigrees are often fully informative, in that the progenitor of a particular allele can often be identified. In this way, microsatellites are ideal for determining paternity, population genetic studies and recombination mapping. It is also the only molecular marker to provide clues about which alleles are more closely related. Microsatellites owe their variability to an increased rate of mutation compared to other neutral regions of DNA. These high rates of mutation can be explained most frequently by slipped strand mispairing (slippage) during DNA replication on a single DNA double helix. Mutation may also occur during recombination during meiosis. Some errors in slippage are rectified by proofreading mechanisms within the nucleus, but some mutations can escape repair. The size of the repeat unit, the number of repeats and the presence of variant repeats are all factors, as is the frequency of transcription in the area of the DNA repeat. Interruption of microsatellites, perhaps due to mutation, can result in reduced polymorphism. However, this same mechanism can occasionally lead to incorrect amplification of microsatellites; if slippage occurs early on during PCR, microsatellites of incorrect lengths can be amplified.
Amplification of microsatellites
Microsatellites can be amplified for identification using the polymerase chain reaction (PCR), with primers designed against the flanking regions. DNA is denatured at a high temperature, separating the double strand and allowing annealing of primers and the extension of nucleotide sequences along opposite strands at lower temperatures. This process results in production of enough DNA to be visible on agarose or acrylamide gels; only small amounts of DNA are needed for amplification, as thermocycling in this manner creates an exponential increase in the replicated segment. With the abundance of PCR technology, primers that flank microsatellite loci are simple and quick to use, but the development of correctly functioning primers is often a tedious and costly process.
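As a small illustration of what a microsatellite such as a (CA)n repeat looks like computationally, the sketch below scans a DNA string for runs of the CA dinucleotide; the sequence and the repeat-count threshold are invented for the example and are not drawn from any real locus.

```python
import re

# Toy DNA sequence containing a (CA)n microsatellite (illustrative only)
sequence = "GGAT" + "CA" * 12 + "TTGGC"

# Find runs of the CA dinucleotide repeated at least 10 times, roughly the
# level at which such loci tend to show useful polymorphism
for match in re.finditer(r"(?:CA){10,}", sequence):
    repeat = match.group()
    print(f"(CA)n repeat at position {match.start()}: "
          f"n = {len(repeat) // 2}, length = {len(repeat)} bp")
```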
Development of microsatellite primers
If searching for microsatellite markers in specific regions of a genome, for example within a particular exon of a gene, primers can be designed manually. This involves searching the genomic DNA sequence for microsatellite repeats, which can be done by eye or by using automated tools such as RepeatMasker. Once the potentially useful microsatellites are determined (removing non-useful ones, such as those with random inserts within the repeat region), the flanking sequences can be used to design oligonucleotide primers which will amplify the specific microsatellite repeat in a PCR reaction. Random microsatellite primers can be developed by cloning random segments of DNA from the focal species. These are inserted into a plasmid or phage vector, which is in turn implanted into Escherichia coli bacteria. Colonies are then developed and screened with fluorescently labelled oligonucleotide sequences that will hybridise to a microsatellite repeat, if present on the DNA segment. If positive clones can be obtained from this procedure, the DNA is sequenced and PCR primers are chosen from sequences flanking such regions to determine a specific locus. This process involves significant trial and error on the part of researchers, as microsatellite repeat sequences must be predicted and primers that are randomly isolated may not display significant polymorphism. Microsatellite loci are widely distributed throughout the genome and can be isolated from semi-degraded DNA of older specimens, as all that is needed is a suitable substrate for amplification through PCR.
Limitations of microsatellites
Microsatellites have proved to be versatile molecular markers, particularly for population analysis, but they are not without limitations. Microsatellites developed for particular species can often be applied to closely related species, but the percentage of loci that successfully amplify may decrease with increasing genetic distance. Point mutation in the primer annealing sites in such species may lead to the occurrence of ‘null alleles’, where microsatellites fail to amplify in PCR assays. Null alleles can be attributed to several phenomena. Sequence divergence in flanking regions can lead to poor primer annealing, especially at the 3’ section, where extension commences. Preferential amplification of particular size alleles due to the competitive nature of PCR can lead to heterozygous individuals being scored as homozygous (partial null). PCR failure may result when particular loci fail to amplify, whereas others amplify more efficiently and may appear homozygous on a gel assay when they are in reality heterozygous in the genome. Null alleles complicate the interpretation of microsatellite allele frequencies and thus make estimates of relatedness faulty. Furthermore, stochastic effects of sampling that occur during mating may change allele frequencies in a way that is very similar to the effect of null alleles; an excessive frequency of homozygotes causes deviations from Hardy-Weinberg equilibrium expectations. Since null alleles are a technical problem and sampling effects that occur during mating are a real biological property of a population, it is often very important to distinguish between them if excess homozygotes are observed.
When using microsatellites to compare species, homologous loci may be easily amplified in related species, but the number of loci that amplify successfully during PCR may decrease with increased genetic distance between the species in question. Mutation in microsatellite alleles is biased in the sense that larger alleles contain more bases and are therefore more likely to be miscopied during DNA replication. Smaller alleles also tend to increase in size, whereas larger alleles tend to decrease in size, as they may be subject to an upper size limit; this constraint has been identified, but its possible values have not yet been specified. If there is a large size difference between individual alleles, there may be increased instability during recombination at meiosis. In tumour cells, where controls on replication may be damaged, microsatellites may be gained or lost at an especially high frequency during each round of mitosis. Hence a tumour cell line might show a different genetic fingerprint from that of the host tissue. See also:
- genetic marker
- mobile element
- short interspersed repetitive element
- long interspersed repetitive element
- junk DNA
- variable number tandem repeats
- short tandem repeats
- Trinucleotide repeat disorders
- microsatellite instability
The solar power tower, also known as a 'central tower' power plant, a 'heliostat' power plant or simply a power tower, is a type of solar furnace that uses a tower to receive focused sunlight. It uses an array of flat, movable mirrors (called heliostats) to focus the sun's rays upon a collector tower (the target). Concentrated solar thermal power is seen as one viable solution for renewable, pollution-free energy. Early designs used the focused rays to heat water and used the resulting steam to power a turbine. Newer designs using liquid sodium have been demonstrated, and systems using molten salts (40% potassium nitrate, 60% sodium nitrate) as the working fluid are now in operation. These working fluids have high heat capacity, which allows the energy to be stored before it is used to boil water to drive turbines. These designs also allow power to be generated when the sun is not shining.
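To give a feel for why the high heat capacity of molten salt matters for storage, here is a minimal back-of-the-envelope sketch; the salt mass, specific heat and temperature range are assumed ballpark figures for a nitrate 'solar salt', not data from any particular plant.

```python
# Rough estimate of thermal energy stored in a molten-salt tank.
# All numbers are illustrative assumptions, not plant specifications.

salt_mass_kg = 1.0e6              # assumed: 1,000 tonnes of molten salt
specific_heat_kj_per_kg_k = 1.5   # assumed: typical for nitrate "solar salt"
t_cold_c, t_hot_c = 290.0, 565.0  # assumed: charge/discharge temperatures

# Sensible heat storage: Q = m * c * delta_T
delta_t = t_hot_c - t_cold_c
q_kj = salt_mass_kg * specific_heat_kj_per_kg_k * delta_t
q_mwh_thermal = q_kj / 3.6e6      # 1 MWh = 3.6e6 kJ

print(f"Stored thermal energy: roughly {q_mwh_thermal:.0f} MWh (thermal)")
```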
11 April 2006
Plants' Capacity To Soak Up Carbon Limited
by Kate Melville
Plants do much less than previously thought to soak up carbon dioxide, say Bruce Hungate of Northern Arizona University and Kees-Jan van Groenigen of the University of California, Davis. Their paper, appearing in the Proceedings of the National Academy of Sciences, suggests that plants are limited in their capacity to clean up excess carbon dioxide in the atmosphere. And unfortunately, even their current abilities may be diminishing. According to the paper, the limitation on carbon take-up stems from a dependence on nitrogen and other trace elements that are essential for photosynthesis, the process that removes carbon dioxide from the air and transfers it back into the ground. "Our paper shows that in order for soils to lock away more carbon as carbon dioxide rises, there has to be quite a bit of extra nitrogen available - far more than what is normally available in most ecosystems," explained Hungate. It was previously thought that rising carbon dioxide levels would also speed up the process of nitrogen fixation, by which plants "pump" nitrogen back into the soil. But this process can only increase if higher levels of other essential nutrients, such as potassium, phosphorus and molybdenum, are available. "The discovery implies that future carbon storage by land ecosystems may be smaller than previously thought, and therefore not a very large part of a solution to global warming," Hungate lamented. While plants may not save the planet, they still play an important role in reducing carbon dioxide levels. "We do know that CO2 in the atmosphere would be increasing faster were it not for current carbon storage in the oceans and on land," said Hungate. "But land ecosystems appear to have a limited and diminishing capacity to clean up excess carbon dioxide in the atmosphere. Reducing our reliance on fossil fuels is likely to be far more effective than expecting natural ecosystems to mop up the extra CO2 in the atmosphere." Source: Northern Arizona University
Smallpox, caused by the Variola virus (pictured), was responsible for more than 300 million deaths in the 20th Century. In 1980, the World Health Organisation declared smallpox eradicated, following a global vaccination campaign. Currently, only two high-security labs, in Russia and the USA, have live Variola in storage. Since eradication of the disease, the stored virus has been studied extensively, but the World Health Assembly will be determining the fate of these last remaining stocks in the coming few days. While some scientists think that sufficient research has been carried out, and the stocks should be destroyed, others, like the authors of this PLOS paper, believe that we still have more to learn and continuing research is vital. Arguments for keeping the stocks include being able to develop better antivirals to combat the threat of a possible appearance of a vaccine-resistant smallpox virus and understanding why Variola uniquely targets humans. Written by Katie Panteli
What is soap? In chemistry, a soap is a salt of a fatty acid. Household uses for soaps include washing, bathing, and other types of housekeeping, where soaps act as surfactants, emulsifying oils to enable them to be carried away by water. In industry they are also used in textile spinning and are important components of some lubricants. Metal soaps are also included in modern artists’ oil paint formulations as a rheology modifier. Soaps for cleaning are obtained by treating vegetable or animal oils and fats with a strong base, such as sodium hydroxide or potassium hydroxide, in an aqueous solution. Fats and oils are composed of triglycerides; three molecules of fatty acids attach to a single molecule of glycerol. The alkaline solution, which is often called lye (although the term “lye soap” refers almost exclusively to soaps made with sodium hydroxide), induces saponification. In this reaction, the triglyceride fats first hydrolyze into free fatty acids, and then the latter combine with the alkali to form crude soap: an amalgam of various soap salts, excess fat or alkali, water, and liberated glycerol (glycerin). The glycerin, a useful byproduct, can remain in the soap product as a softening agent, or be isolated for other uses. Soaps are key components of most lubricating greases, which are usually emulsions of calcium soap or lithium soap and mineral oil. Many other metallic soaps are also useful, including those of aluminum, sodium, and mixtures of them. Such soaps are also used as thickeners to increase the viscosity of oils. In ancient times, lubricating greases were made by the addition of lime to olive oil. How is soap made? The industrial production of soap involves continuous processes, such as continuous addition of fat and removal of product. Smaller-scale production involves traditional batch processes. The three variations are the cold process, wherein the reaction takes place substantially at room temperature; the semi-boiled or “hot process,” wherein the reaction takes place near the boiling point; and the fully boiled process, wherein the reactants are boiled at least once and the glycerol is recovered. There are several types of semi-boiled hot process methods, the most common being DBHP (Double Boiler Hot Process) and CPHP (Crock Pot Hot Process). Most soapmakers, however, continue to prefer the cold process method. The cold process and hot process (semi-boiled) are the simplest and are typically used by small artisans and hobbyists producing handmade decorative soaps. In the cold process, the glycerol remains in the soap and the reaction continues for many days after the soap is poured into molds. The glycerol is also left in during the hot-process method, but at the high temperature employed, the reaction is practically completed in the kettle before the soap is poured into molds. This simple and quick process is employed in small factories all over the world.
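As a rough illustration of the saponification arithmetic behind a small cold-process batch, the sketch below estimates the sodium hydroxide required for a hypothetical oil blend; the saponification values and the 5% 'superfat' discount are approximate, commonly quoted hobbyist figures used here only as assumptions, so any real recipe should be checked against a dedicated lye calculator.

```python
# Illustrative cold-process lye estimate. The saponification (SAP) values
# below are approximate, commonly cited figures (grams of NaOH needed to
# saponify one gram of oil) and are assumptions for this sketch only.
SAP_NAOH = {
    "olive oil": 0.134,
    "coconut oil": 0.183,
}

recipe_g = {"olive oil": 700, "coconut oil": 300}   # hypothetical 1 kg oil blend
superfat = 0.05   # leave about 5% of the oils unsaponified for a milder soap

naoh_needed = sum(SAP_NAOH[oil] * grams for oil, grams in recipe_g.items())
naoh_with_superfat = naoh_needed * (1 - superfat)

print(f"NaOH for full saponification: {naoh_needed:.1f} g")
print(f"NaOH with 5% superfat:        {naoh_with_superfat:.1f} g")
```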
Who Makes Up the Labor Force?
A labor force is characterized by the variety and levels of skill of its members, as well as by their age, their sex, and, in countries where it is appropriate to track this, its ethnic composition. The labor force can increase or decrease with the size of the total population and with changes in its age distribution. Many advanced economies (notably Japan and some European countries) are now experiencing severe population aging that is reducing the proportion of the population that is working. These countries are also experiencing an increasing age dependency ratio (as defined by the United Nations), which is the sum of the number of people under age 15 and over 64 divided by the number of working-age people, or those ages 15 to 64. This is important for predicting the economic futures of countries that may not be able to fill the jobs needed by the economy. In Japan, the age dependency ratio is close to 64%; for every three workers, there are two dependents. In China, because of the former one-child policy, this problem will become particularly acute in the decades to come. China today has a large population, but its population has not continued to grow, so there may be future difficulty in finding people to fill jobs. The labor force also grows or shrinks as the number of working-age people who make themselves available for work increases or decreases. During World War II and the long economic boom that followed it, many women who had not previously been in paid employment joined the labor force.
The Labor Force Participation Rate
Policymakers use the labor force participation rate to estimate the potential output of an economy, or the total quantity of goods created, as well as to formulate employment policies, determine training needs, and calculate the potential cost of social security and retirement pensions. The labor force participation rate of specific categories of worker can help identify problems that such groups face. For example, in many countries the labor force participation of women is extremely low and may vary with age and social status, as well as with fertility rates and educational levels. In Indonesia the labor force participation rate of educated women in 2000 was 94.4%. In Canada, for the same category, the rate was 58.9%. Thus, many educated women in Canada do not join the workforce, perhaps because higher education for women is more widespread in Canada than in Indonesia. Labor force participation rates for people ages 15-24 tend to be affected by the availability of educational opportunities. For older workers the rates have been found to be linked to attitudes toward retirement and the existence of retirement pensions. In the 21st century, the labor force participation rate in developed countries like the United States has been falling, especially since the 2007-2009 economic crisis.
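Both measures discussed above are simple ratios, as the short sketch below shows; the population counts are invented solely to illustrate the formulas and are not statistics for any real country.

```python
# Hypothetical population counts (in millions), for illustration only
under_15 = 18.0
working_age = 65.0      # people aged 15-64
over_64 = 24.0
in_labor_force = 52.0   # working or actively seeking work

# Age dependency ratio: dependents (young + old) per working-age person
dependency_ratio = (under_15 + over_64) / working_age

# Labor force participation rate (one common definition): share of the
# working-age population that is employed or looking for work
participation_rate = in_labor_force / working_age

print(f"Age dependency ratio:           {dependency_ratio:.0%}")
print(f"Labor force participation rate: {participation_rate:.0%}")
```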
A balloon rises in the air because it is filled with a gas which is lighter than the air around it. But an airplane, being heavier than the air surrounding it, cannot float as does a balloon; consequently, it must get its lift in a different manner. In effect, it must do something to the air surrounding it to make the air support its greater weight. To act on the air, the airplane must be placed in motion. The majority of general aviation type airplanes employ a reciprocating engine to turn a propeller that "bites" into the air, forcing the air backward while pulling the airplane forward. This leads our discussion to several basic laws of physics that help in explaining how an airplane flies. The English philosopher and mathematician Sir Isaac Newton is credited with having observed in 1687 that "for every action there is an equal and opposite reaction" (Fig. 3-1). This principle applies whenever two things act upon each other, such as the air and the propeller, or the air and the wing of an airplane. In short, the statement about "action and reaction" tells us how lift and propulsion of airplanes are produced. The only way air can exert a force on a solid body, such as an airplane's wing, is through pressure. In the 1700s, Daniel Bernoulli (a Swiss mathematician) discovered that if the velocity of a fluid (air) is increased at a particular point, the pressure of the fluid (air) at that point is decreased. The airplane's wing is designed to increase the velocity of the air flowing over the top of the wing as it moves through the air. To do this, the top of the wing is curved, while the bottom is relatively flat. The air flowing over the top travels a little farther (since it is curving) than the air flowing along the flat bottom. This means the air on top must go faster. Hence, the pressure decreases, resulting in a lower pressure (as Bernoulli stated) on top of the wing and a higher pressure below. The higher pressure then pushes (lifts) the wing up toward the lower pressure area (Fig. 3-2). At the same time, the air flowing along the underside of the wing is deflected downward. As stated in Newton's law, "for every action there is an equal and opposite reaction." Thus, the downward deflection of air reacts by pushing (lifting) the wing upward (Fig. 3-2). This is like the planing effect of a speedboat or water skier skimming over the water. To increase the lift, the wing is tilted upward in relation to the oncoming air (relative wind) to increase the deflection of air. Relative wind during flight is not the natural wind, but is the direction of the airflow in relation to the wing as it moves through the air. The angle at which the wing meets the relative wind is called the angle of attack. These two natural forces, pressure and deflection, produce lift. The faster the wing moves through the air and the greater the two forces become, the more lift is developed. Another of Newton's laws can be used to explain just how much air deflection is needed to lift an airplane. This law of motion says "the force produced will be equal to the mass of air deflected multiplied by the acceleration given to it." From this we can see the way lift, speed, and angle of attack are related. The amount of lift needed can be produced by moving a large mass of air through the process of making a small change in velocity, or by moving a small mass of air through the process of making a large change in velocity.
For example, at a high speed where the wing affects a large amount of air, only a small change in velocity (and therefore a small angle of attack) is needed to produce the required lift. If the airplane's speed is too slow, the angle of attack required will be so large that the air can no longer follow the upper curvature of the wing. This results in a swirling, turbulent flow of air over the wing and "spoils" the lift. Consequently, the wing stalls. On most airplanes this critical angle of attack is about 15 to 20 degrees (Fig. 3-3). Stalls will be thoroughly explained in the chapter on Proficiency Flight Maneuvers. As noted earlier, when the propeller rotates, it "bites" into the air, thereby providing the force to pull (or push) the airplane forward. This forward motion causes the airplane to act on the air to produce lift. The propeller blades, just like a wing, are curved on one side and straight on the other side. Hence, as the propeller is rotated by the engine, forces similar to those of the wing create "lift" in a forward direction. This is called thrust. Up to this point the discussion has related only to the "lifting" force. Before an understanding of how an airplane flies is complete, other forces must be discussed.
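Newton's relation quoted earlier, that the force produced equals the mass of air deflected multiplied by the acceleration given to it, can be turned into a quick numerical sketch; the airflow and velocity figures below are invented round numbers chosen only to show how the same lift can come from a large airflow with a small velocity change or from a small airflow with a large one.

```python
# Lift from momentum change: force = (mass of air per second) * (change in velocity)
def lift_newtons(air_mass_flow_kg_per_s, delta_v_m_per_s):
    return air_mass_flow_kg_per_s * delta_v_m_per_s

# Fast flight: the wing works on a lot of air, so only a small downward
# velocity change (a small angle of attack) is needed.
fast = lift_newtons(air_mass_flow_kg_per_s=1000, delta_v_m_per_s=10)

# Slow flight: much less air is affected, so a larger velocity change
# (a larger angle of attack) is needed for the same lift.
slow = lift_newtons(air_mass_flow_kg_per_s=200, delta_v_m_per_s=50)

print(f"Fast flight lift: {fast:.0f} N")  # both cases give 10,000 N of lift
print(f"Slow flight lift: {slow:.0f} N")
```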
Is mathematics a consistent theory? Or, rather, is there a danger of finding a correct mathematical proof for a false statement like “0 = 1”? These questions became quite relevant at the end of the nineteenth century, when some mathematical truths dating back many centuries were shattered and mathematicians started to feel the need for completely rigorous and solid foundations for their discipline. Gödel’s incompleteness theorem is a famous result in mathematics that shows the limitations of mathematics itself. At the end of the nineteenth century and the beginning of the twentieth, mathematicians tried to find a complete and consistent set of axioms for mathematics. This goal is often referred to as Hilbert’s program, after the mathematician David Hilbert, who posed it as the second problem in his famous list of open problems in mathematics. In 1931 Kurt Gödel proved that this goal is impossible to achieve. Gödel proved that for any consistent system of axioms for mathematics rich enough to describe arithmetic, there are true statements that cannot be proved within the system! This is referred to as Gödel’s first incompleteness theorem. One startling consequence is that such a system cannot prove its own consistency, so the consistency of mathematics cannot be proved within mathematics itself. This is the content of Gödel’s second incompleteness theorem. Gödel’s theorem is one of the few results of mathematics that capture the imagination of people well beyond mathematics. The well-known book Gödel, Escher, Bach by Douglas Hofstadter discusses common themes in the works of mathematician Gödel, artist M. C. Escher, and composer Johann Sebastian Bach. Gödel’s theorem is the climax (and, paradoxically, the end) of the “foundational crisis” in mathematics. Gottlob Frege made an important attempt to reduce all mathematics to a logical formulation. However, Bertrand Russell found a simple paradox that demonstrated a flaw in Frege’s approach. The Dutch mathematician Luitzen E. J. Brouwer proposed an approach to mathematics, called intuitionism, which does not accept the law of excluded middle. This approach does not accept “Reductio ad absurdum,” or, in other words, mathematical proofs “by contradiction.” Most works in mathematics, including Brouwer’s own famous earlier work, do not live up to the intuitionistic standards of mathematical proofs. Brouwer’s ideas were regarded as revolutionary and, while on his lecture tours, he was received with an enthusiasm not usually associated with mathematics. Hilbert and Brouwer were the main players in a famous controversy on the editorial board of Mathematische Annalen, the most famous mathematical journal of the time. Hilbert, the editor-in-chief, eventually fired Brouwer from the editorial board. There are different accounts regarding the nature of the disagreement. Some scholars have claimed that Brouwer wanted to impose his intuitionistic proof standards. Other scholars strongly reject this story and claim that Hilbert wanted to remove Brouwer in an inappropriate way simply because he felt that Brouwer was becoming too powerful. Remark: My colleague Ehud (Udi) Hrushovski has a different view on the end of the foundational crisis. According to Udi, one reason was the success of the Zermelo-Fraenkel axiomatization (ZFC) for set theory, which replaced Frege’s failed axioms and seemed immune to difficulties of the kind discovered by Russell. Another reason was the various positive results, some by Gödel himself, which brought the foundations of mathematics quite close to Hilbert’s dream.
Hrushovski's view is that Hilbert's main interest was in completeness rather than in a mathematical proof of the consistency of mathematics. Update: For a technical discussion of Gödel's completeness and compactness theorems, see this post by Terry Tao.
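For readers who want something sharper than "for any system of axioms there are true results that cannot be proved," here is one standard modern formulation of the incompleteness theorems. This wording is the common textbook form, not a quotation from the post above.

```latex
% A standard textbook formulation (not taken from the post above).
\textbf{First incompleteness theorem.}
Let $T$ be a consistent, effectively axiomatizable theory that interprets
elementary arithmetic. Then there is a sentence $G_T$ in the language of $T$
such that
\[
  T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T .
\]
\textbf{Second incompleteness theorem.}
Under the same hypotheses, $T \nvdash \mathrm{Con}(T)$, where $\mathrm{Con}(T)$
is the arithmetical sentence expressing the consistency of $T$.
```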
Wolves are carnivorous mammals that belong to the dog family. They are the largest members of this family and can weigh up to 175 pounds when fully grown. There are two types of wolves: gray wolves, which are the most common and are found throughout North America, and red wolves, which are seen only in a small area on the North Carolina coast. Gray wolves are mottled gray and brown in color. They have long fur, bushy tails, long snouts and pointed ears on the tops of their heads. Subspecies of gray wolves that live in the Arctic have white coats. Red wolves are similar in appearance, but their coats are a reddish brown, and they are slightly smaller than gray wolves. Both types of wolves live in packs of between six and 10 animals, and a strict dominance hierarchy is established within each pack. A dominant male typically leads the group. Wolves communicate with one another by howling. Although many people are afraid of wolves, they almost never attack humans; they have, however, been known to attack domestic animals. Wolves are often trapped and shot because of this fear, which has contributed to the endangerment of their species. Habitat destruction is the other main factor that has contributed to declines in wolf populations.
diapir, (from Greek diapeirein, “to pierce”), geological structure consisting of mobile material that was forced into more brittle surrounding rocks, usually by the upward flow of material from a parent stratum. The flow may be produced by gravitational forces (heavy rocks causing underlying lighter rocks to rise), tectonic forces (mobile rocks being squeezed through less mobile rocks by lateral stress), or a combination of both. Diapirs may take the shape of domes, waves, mushrooms, teardrops, or dikes. Because salt flows quite readily, diapirs are often associated with salt domes or salt anticlines; in some cases the diapiric process is thought to be the mode of origin for a salt dome itself.
NASA revealed on Thursday last week that it has made a new discovery on our neighboring planet: organic molecules. The discovery reaffirms the hope of finding traces of life on the planet. NASA's unmanned, car-sized rover Curiosity, which has among other things drilled holes in the Martian surface, has been instrumental in making the new discoveries, the US space agency told journalists at a press conference last week. The new findings consist of organic molecules extracted from three-billion-year-old rock. Curiosity has also measured variations in the levels of methane in the Martian atmosphere, something that has never been seen before. The measurements show that the methane content varies with the seasons, which may have biological causes. The discovery does not provide definitive proof of life itself, but it is a good indication that future expeditions to the planet's surface can provide a deeper understanding of life that may have existed on Mars, and may still exist deep underground. Organic molecules contain carbon and hydrogen, and may also include oxygen, nitrogen and other elements. These are usually associated with life, but can also be created through non-biological processes. So far, the source of the molecules is unknown, but the finding is of great importance, according to NASA. “Whether it holds a record of ancient life, was food for life, or has existed in the absence of life, organic matter in Martian materials holds chemical clues to planetary conditions and processes.” – Jen Eigenbrode of NASA’s Goddard Space Flight Center “With these new findings, Mars is telling us to stay the course and keep searching for evidence of life,” “I’m confident that our ongoing and planned missions will unlock even more breathtaking discoveries on the Red Planet.” – Thomas Zurbuchen, associate administrator for the Science Mission Directorate at NASA Headquarters, in Washington. Another clue adding to the incentive to search for life comes from a separate study in last week's Science, in which Curiosity scientists report that traces of methane in the Martian atmosphere rise and fall with the seasons. Non-biological processes could explain the signal, but so could seasonally varying microbes. Fortunately, NASA's next rover, Mars 2020, is set to collect some 30 rock cores for return to Earth in subsequent missions. Jennifer L. Eigenbrode et al. Organic matter preserved in 3-billion-year-old mudstones at Gale crater, Mars. DOI: 10.1126/science.aas9185 Christopher R. Webster et al. Background levels of methane in Mars’ atmosphere show strong seasonal variations. DOI: 10.1126/science.aaq0131
Maybe you’ve heard the term “multisensory processing” before but do you know what it means and understand how it can help improve learning outcomes? Do you know how to incorporate multisensory into your child’s learning? These are the topics that we’re going to dive into in this article! “Multisensory processing refers to the interaction of signals arriving nearly simultaneously from different sensory modalities.” Multisensory refers to more than one sense. We are constantly bombarded with sensory information from our environment. This sensory information is not directed to just one single sense at a time. For example, when you eat breakfast you are not only experiencing the taste (gustatory), but also the smell (olfactory), the touch (tactile), and the visual (sight). Another example is when you are driving. You experience visual input as well as auditory (hearing traffic), vestibular (movement of not only the car but also when you turn your head to look at your surroundings), touch (touching the steering wheel and the seat), and more! Whoa! That’s a lot of sensory information - a lot of multisensory processing! Our children are also constantly experiencing multisensory input throughout their day. In the morning when they wake up and get ready for the day, at school with their friends and while learning, during sports or other extracurricular activities, at home while eating, dressing, bathing, etc., and while they get ready for bed at night. Our bodies receive the sensory input from our environment and our brains interpret that input which then produces a reaction. One of two things typically happens - our brain interprets the sensory input appropriately and our body is able to tolerate the input and react accordingly; or our brain interprets the sensory input inappropriately and our body is unable to tolerate the input and creates a large over-reaction. This second option can happen with multiple senses or just one at a time. This is where sensory processing challenges occur. But let’s stick with multisensory processing for now! In order for a child to successfully move through their day and their daily tasks, they must be able to process all different types of input simultaneously. Additionally, from a learning perspective, when we provide learning opportunities that incorporate multiple senses (multisensory!), the child stands a greater chance of success with memory, auditory processing, and more! How do you know if your child has challenges with multi-sensory processing? It’s going to look very similar or even identical to overall sensory processing challenges! But you may be able to pinpoint some differences. For example, a child with vestibular processing challenges may dislike and avoid movement based activities. However, a child with vestibular and visual processing challenges may struggle significantly with hand-eye coordination tasks. Another example is a child with auditory processing challenges may struggle to follow 2-3 step auditory instructions, while a child with auditory and visual processing challenges may struggle to follow both auditory and visual instructions equally. Because all 8 of our senses are connected to each other in one way or another, if a child struggles to process input from one system they are likely to also struggle with processing input from another system. 
For example, one research study found that, “Although the perception of speech is based on the processing of sound, what we actually hear can be influenced by visual cues provided by lip movements.” Therefore, if a child struggles with auditory processing and visual processing, their ability to understand and follow speech may be very challenging. Let’s briefly talk about why activities designed with multi-sensory components are so beneficial for all children. One research study reported that working memory and attention can be directly related to successful multi-sensory processing. Another study found that many cognitive abilities and processes are dependent on successful multi-sensory processing. If we break down the components of working memory, we take into account that working memory requires the use of the visual system, the auditory system, and oftentimes the tactile, vestibular, and proprioceptive systems (touch and movement!) - we recall what we see and what we hear, what we touch and what we do. If we break down attention, we can look at the fact that sustained attention to a task requires successful processing of the environment’s visual and auditory input, potentially olfactory input (any distracting smells), and tactile input - we can’t let any of those distract us from what we are attending to. Additionally, any time we are processing sensory information, especially more than one at a time, we are creating new neural pathways in the brain! This can potentially help us improve the above mentioned cognitive processes as well as processing speed and coordinating body movements. Plus, the more we do it, the better we get! How can you incorporate more mulitsensory activities for your child? Let’s go through some ideas and break them down by age! At birth, you will keep the sensory experiences fairly simple. A little bit of visual stimulation with high contrast books and slow visual tracking activities as babe’s eyes begin to work together (around 4 months). A little bit of vestibular input from walking with you, learning to roll, and riding in the car. Some great proprioceptive input during tummy time and tactile input from clothing, blankets, and snuggles. Throw in some auditory input with classical music and nursery rhymes as well as small amounts of olfactory input from diluted essential oils and being out in nature. Once babe is up and moving - crawling and then walking - they are naturally getting lots of vestibular, proprioceptive, tactile, and visual input all at once, just from moving their body! Incorporate other sensory components by providing a variety of simple cause-and-effect toys, different texture blankets and surfaces to move on, a wide variety of food items with different smells and textures, and continuing to provide simple auditory activities with music. This is where you can get really creative with your multi-sensory activities! Once your child is learning how to throw and catch playground balls, you can include that with movement activities - try throwing to a target while on a moving swing! Additionally, set up a simple obstacle course that your child can maneuver through on their scooter or tricycle (or try a strider bike) - set up cones that they have to steer through (lots of vestibular, proprioceptive, and visual input!). Obstacle courses are a great way to complete multi-sensory activities. 
Include crawling and jumping components, a sensory bin for some great tactile and visual input, music or practicing counting and the alphabet for some auditory input, and don’t forget about the olfactory and gustatory systems! Try some scratch ‘n sniff stickers; try stringing cheerios onto a necklace; get creative and see how many sensory systems you can include simultaneously! This is a great age to introduce your child to a metronome! You can download a free app or find a metronome beat on YouTube. Start by simply clapping to the beat set at 60 beats per minute (BPM). Once your child has mastered that, you can try different clapping and patting patterns - be sure to include not just the arms but also the legs! These activities are perfect for incorporating vestibular, proprioceptive, visual, tactile, and auditory pieces. Once your child is beginning to read and write, you can use the metronome to practice spelling and writing. Simply turn the metronome to 60 BPM and practice spelling words on the beat. You can even try writing them on the beat. This rhythm can help improve that working memory we talked about earlier! Incorporating full body movements into metronome activities can be beneficial as well! Try crawling on the beat at 60 BPM. Get the visual system going and play catch on the beat. The possibilities are endless! Keep using the metronome as your child gets older! Practice more complex spelling words, math problems, and even complete different movement sequences to the beat. Try changing the beat to 120 BPM and doing the task on every other beat. Combine all of it - a spelling word plus jumping jacks simultaneously. Talk about some seriously great multi-sensory processing! We have some great activities in our multisensory processing digital course. Check it out here! Don’t forget to try these activities yourself! Adults can also benefit from multi-sensory activities, which can be used as great brain/sensory breaks throughout the day to help wake up your body and your mind! Once you start intentionally incorporating multi-sensory activities into your child’s day, you’ll not only be able to see the benefits but you’ll also get really good at doing it!
What is absolute force? n. 1 a unit of measurement forming part of the electromagnetic cgs system, such as an abampere or abcoulomb. 2 a unit of measurement forming part of a system of units that includes a unit of force defined so that it is independent of the acceleration of free fall.
How do you define an absolute? adjective. free from imperfection; complete; perfect: absolute liberty. not mixed or adulterated; pure: absolute alcohol. complete; outright: an absolute lie; an absolute denial. free from restriction or limitation; not limited in any way: absolute command; absolute freedom.
What is the definition of absolute in science? In physics, “absolute” means independent of arbitrary standards or of particular properties of substances or systems. It refers to a system of units, such as the centimeter-gram-second system, based on some primary units of length, mass, and time. It also pertains to a measurement based on an absolute zero or unit.
What is force, with an example? Force is defined as an external cause that changes or tends to change the state of a body once applied: a body in motion may be brought to rest, and a body at rest may be set in motion. Example: pushing or pulling a door by applying force.
What is relative force? Relative force is the ratio of one force against another on the same object in the same environment.
What is the product of mass and acceleration called? The product of mass times gravitational acceleration, mg, is known as weight, which is just another kind of force.
What are examples of absolute? Absolute is defined as something that is 100 percent complete, with no exceptions. An example of absolute silence would be total silence with no noise at all.
Why is force not relative? Energy and work are frame-relative quantities, not invariants. [The magnitude of] force, however, is invariant in classical physics. It does not matter what inertial frame you adopt: acceleration and force will not vary in magnitude from one reference frame to the next.
Which is the highest level of absolute force? Absolute force, also known as absolute strength, is the greatest force one can produce with no limit on the amount of time required to produce it. The highest levels of absolute force can only be reached during an isometric or eccentric contraction.
Which is the correct definition of absolute pressure? When pressure is measured relative to a perfect vacuum, it is called absolute pressure (psia). In engineering, it is important to distinguish between absolute pressure and gauge pressure.
How is the absolute force of a muscle measured? Absolute force can be measured by using a one-rep max. In such a case, the limiting factor in completing the movement will be the concentric force of the movement, not a single muscle group.
Which is an example of a force in science? In science, force is the push or pull on an object with mass that causes it to change velocity (to accelerate). Force is represented as a vector, which means it has both magnitude and direction. In equations and diagrams, a force is usually denoted by the symbol F. An example is the equation from Newton’s second law, F = ma.
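To make the definitions above concrete, here is a small sketch applying Newton's second law, F = ma, and the weight relation W = mg mentioned in the text; the numerical values are illustrative assumptions.

```python
# Minimal sketch of the relations discussed above: F = m * a and weight W = m * g.
# The numbers are illustrative assumptions, not taken from the text.

m = 70.0   # mass in kilograms
a = 2.0    # acceleration in metres per second squared
g = 9.81   # gravitational acceleration near Earth's surface, m/s^2

force = m * a    # net force needed to accelerate the mass, in newtons
weight = m * g   # weight: the product of mass and gravitational acceleration

print(f"Force  F = m*a = {force:.1f} N")
print(f"Weight W = m*g = {weight:.1f} N")
```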
A company called EnChroma has built a pair of glasses that claims to restore color vision for the colorblind. Predictably, the internet has erupted with excitement. But it’s not the first instance in which a piece of technology has made this bold assertion, and the science behind color perception isn’t straightforward. We decided it was time to figure out what’s really going on. For some colorblind people, donning EnChroma lenses is nothing short of life-changing. For others, the experience is lackluster. To understand why, let’s take a deep dive into the science of color vision, some of the different forms of colorblindness, and what these glasses are actually doing. When people with normal color vision look at a rainbow, they see the whole swath of colors–from red to violet–within the part of the spectrum we call ‘visible light.’ But although every shade represents a specific wavelength of light, our eyes don’t contain unique detectors to pick out each and every wavelength. Electromagnetic spectrum. Image Credit: Wikimedia Instead, our retinas make do with only three types of color sensitive cells. We call them cone cells. They’re specialized neurons that fire off electrical signals in response to light, but they’re not actually very precise: a cone cell is sensitive to a wide range of colored light. But when the brain collects and aggregates the information collected by all three types of cone cell in the eye, it’s able to make fine discriminations between different shades of the same color. Here’s how that works. Cone cells contain a light-sensitive pigment that reacts to wavelengths of light from one segment of the spectrum. The photopigment is slightly different in each type of cone cell, making them sensitive to light from different parts of the spectrum: we may call them red, green, and blue cones, but it’s actually more accurate to say that each type detects either long (L), medium (M), or short (S) wavelengths of light. Typical light response curves for cones in a human eye. Image Credit: BenRG / Wikimedia The graph above, which shows how strongly each kind of cone cell responds to different wavelengths of light, makes that idea easier to visualize. You can see that each type of cone cell has a strong response–a peak–for only a narrow range of wavelengths. The ‘red’ L cones respond most strongly to yellow light, the ‘green’ M cones to green light, and the ‘blue’ S cones to blue-violet light. Cones are also triggered by a wide range of wavelengths on either side of their peaks, but they respond more weakly to those colors. That means there’s a lot of overlap between cone cells: L, M, and S cones actually respond to many of the same wavelengths. The main difference between the cone types is how strongly they respond to each wavelength. These features are absolutely critical to the way our eye perceives color. Image Credit: EnChroma Imagine you have a single cone cell. Make it an M cone if you like. If you shine a green light on the cell, it’s perfectly capable of sensing that light. It’ll even send an electrical signal to the brain. But it has no way to tell what color the light is. That’s because it can send out the same electrical signal when it picks up a weak light at a wavelength that makes it react strongly as when it detects a strong light at a wavelength that makes it react more weakly. To see a color, your brain has to combine information from L, M, and S cone cells, and compare the strength of the signal coming from each type of cone. 
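The idea that a color is read off from the relative strengths of three overlapping detectors can be illustrated with a toy model. The Gaussian sensitivity curves, peak wavelengths, and widths below are rough placeholders chosen for illustration, not the measured human cone spectra.

```python
import math

# Toy model of the "three-bit code": each cone type is modeled as a Gaussian
# sensitivity curve. Peak wavelengths and widths are rough placeholders, not
# the real cone spectra described in the article.
CONES = {"S": 445.0, "M": 535.0, "L": 565.0}  # assumed peak wavelengths, nm
WIDTH = 45.0                                   # assumed curve width, nm

def response(peak_nm: float, wavelength_nm: float) -> float:
    """Relative response of a cone with the given peak to monochromatic light."""
    return math.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * WIDTH ** 2))

def signal_pattern(wavelength_nm: float) -> dict:
    return {name: round(response(peak, wavelength_nm), 2) for name, peak in CONES.items()}

# Two nearby greens produce different *patterns* of S/M/L activity, which is
# what lets the brain tell them apart even though no single cone can.
print(signal_pattern(520))
print(signal_pattern(550))
```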
Find the color of a beautiful cloudless blue sky on the graph, a wavelength around 475 nm. The S cones have the strongest reaction to that wavelength, but the red and green cones are weighing in with some signal action, too. It’s the relative strength of the signals from all three cone types that lets the brain say “it’s blue”! Each wavelength of light corresponds to a particular combination of signal-strengths from two or more cones: a three-bit code that lets the brain discriminate between millions of different shades. The three-bit code is sensitive, but a ton of things can mess it up. The gene for one of the three photopigments might go AWOL. A mutation could shift the sensitivity of a photopigment so it responds to a slightly different range of wavelengths. (Damage to the retina can cause problems, too.) In a colorblind person, the cone cells simply don’t work the way they’re supposed to; the term covers a huge range of possible perceptual differences. Cone cell responses in two forms of red-green color blindness. Image Credit: Jim Cooke The most common forms of inherited color blindness are red-green perceptual defects. One version is an inability to make L photoreceptors, another stems from a lack of M photoreceptors. People with these genetic defects are dichromats: they have only two working photoreceptors instead of the normal three. Their problem is actually pretty straightforward. Remember that the brain compares how strongly each type of cone responds to a given wavelength of light? Now disappear either the L or M curve in that photoreceptor response graph in your mind, and you can see how the brain loses a ton of comparative information. The problem is more subtle for people who have a version of the L or M photoreceptor that detect a slightly different range of wavelengths than normal. These people are anomalous trichromats: like someone with normal vision, their brains get information from three photoreceptors, but the responses of one type of photoreceptor are shifted out of true. Depending on how far that photoreceptor’s response curve has shifted, an anomalous trichromat may perceive reds and greens slightly differently than a person with normal vision, or be as bad at discriminating between the two as a dichromat. Fall colors, seen six different ways. Top left: Normal color vision. Bottom left: Deuteranomaly (Green weak). Top middle: Protanomaly (Red weak). Bottom middle: Tritanomaly (Blue weak). Top right: Deuteranopia (Green blind). Bottom Right: Tritanopia (Blue blind). But a child born with one of these color perception deficiencies has no way to tell the difference. Learning he sees the world differently from the people around him can be an enormous surprise. That was true for media consultant Carlos Barrionuevo, who first discovered he was colorblind when he was 17. “I didn’t really notice it when I was a kid.” he told Gizmodo. “And my parents didn’t pick up on it. I honestly did not know until I applied for the Navy. I went in for my physical, and they start flipping through this book and say ‘Just tell us what number you see.’ And I said, ‘What number? There’s a number?’” The book Barrionuevo mentions contained some version of the Ishihara test: circles made up of colored dots in a variety of sizes and shades that serve as a quick-and-dirty way to screen for colorblindness. The circle can contain a symbol or a number that is difficult if not impossible for someone with one form of color blindness to see. 
It can also be designed so the symbol is visible to the colorblind, but invisible to everyone else. The test below looks like a 74 to people with normal vision, but appears to be a 21 to people with red/green colorblindness. Ishihara color test plate. People with normal color perception can see the number 74. People with red/green colorblindness see a 21. Image Credit: Wikimedia Barrionuevo stresses that it’s really not a simple matter of not seeing red or green. “I can usually tell what’s green and what’s red, but different shades of red or green all look the same to me. I get very confused on certain colors. If I go in a paint store, a lot of those paint chips just look similar, and I can’t make distinctions between them.” If color perception is basically an intensity game, that raises an obvious question: Could we restore normal color vision, simply by tweaking the proportions of light a colorblind person’s eyes are exposed to? Andy Schmeder, COO of EnChroma, believes that we can. A mathematician and computer scientist by training, Schmeder began exploring color vision correction a decade ago, along with his colleague Don McPherson. In 2002, McPherson, a glass scientist, discovered that a lens he’d created for laser surgery eye protection caused the world to appear more vivid and saturated. For some colorblind people, it felt like a cure. Image Credit: Frameri / EnChroma With a grant from the National Institutes of Health, McPherson and Schmeder set about to determine whether the unusual properties of this lens could be translated into an assistive device for the colorblind. “I created a mathematical model that allows us to simulate the vision of a person with some kind of colorblindness,” Schmeder told Gizmodo. “Essentially, we were asking, if your eyes are exposed to this spectral information and your eye is constructed in this particular way, what does that do to your overall sense of color?” Using their model results, Schmeder and McPherson developed a lens that filters out certain slices of the electromagnetic spectrum; regions that correspond with high spectral sensitivity across the eye’s M, L, and S cones. “Essentially, we’re removing particular wavelengths of light that correspond to the region of most overlap,” Schmeder said. “By doing so, we’re effectively creating more separation between those two channels of information.” Spectral response of red, green, and blue cones, with gray regions indicating regions of “notch filtering” by the EnChroma glasses. Image Credit: EnChroma EnChroma doesn’t claim its lenses will help dichromats, those people who lack an M or L cone. It also isn’t claiming to have developed a cure. Rather, the company likes to call its product an “assistive device,” one that can help anomalous trichromats—those people with M or L cones that have shifted their wavelength sensitivities—discriminate colors in the red-green dimension. Many users report dramatic changes to their color vision while wearing EnChroma glasses. “Any color with red or green appears more intense,” one anonymous user reported in a product validation study. “In fact, almost everything I see looks more intense. 
The world is simply more interesting looking.” Another user writes: “I never imagined I would be so incredibly affected by the ability to see distinct vivid colors, once confusing and hard to differentiate.” If you’re curious about the experience, you can check out any one of EnChroma’s many promotional videos, in which a colorblind person dons the glasses and is immediately overwhelmed by the vibrancy of the world. But some wearers are underwhelmed. “It’s not like they were worse than regular sunglasses — there was a way in which certain things popped out — but not in the way that it felt like it was advertised,” journalist Oliver Morrison told Gizmodo. Morrison’s account of his experience with the glasses, which appeared in The Atlantic earlier this year, highlights the challenge of objectively evaluating whether a device of this nature works. Here’s an excerpt: I met Tony Dykes, the CEO of EnChroma, in Times Square on a gray, rainy day, our eyes hidden behind his glasses’ 100 reflective coatings... I described to Dykes what I saw through the glasses: deeper oranges, crisper brake lights on cars, and fluorescent yellows that popped. I asked him if that is what a normal person sees. Dykes, a former lawyer and an able salesman, answered quickly. “It’s not something where it’s immediate,” he said. “You’re just getting the information for the first time.” Maybe the glasses were working. Maybe exchanging the colors I was accustomed to for real colors just wasn’t as great an experience as I’d been expecting. Dykes asked if I could tell the difference between the gray shoelaces and the pink “N” on the side of my sneakers. “The ‘N’ is shiny,” I said. “So I don’t know if I can tell they’re different by the colors or because of the iridescence.” Although I’d never confused my shoelace with my shoe before, I realized then that, until he had told me, I didn’t know the “N” was pink. Jay Nietz, a color vision expert at the University of Washington, believes EnChroma is capitalizing on this lack of objectivity. “Since red-green colorblind people have never experienced the red and green colors a normal person sees, they are easily fooled,” Nietz told Gizmodo in an email. “If the glasses could add light, maybe it’d be different. But all they can do is block out light. It’s hard to give people color vision by taking things away.” Neitz, for his part, believes the only way to cure colorblindness is through gene therapy — by inserting and expressing the gene for normal M or L cones in the retinas of colorblind patients. He and his wife have spent the last decade using genetic manipulation to restore normal vision to colorblind monkeys, and they hope to move on to human trials soon. A monkey named Dalton, post gene therapy, performing a colorblindness test. Dalton used to be red-green colorblind. But if the glasses aren’t enabling people to see more colors, what could account for the positive testimonials? Nietz suspects the lenses are altering the brightness balance of reds and greens. “If somebody was totally colorblind, all the wavelengths of light in a rainbow would look exactly the same,” Nietz said. “If they went out in the real world and saw a green and red tomato, they’d be completely indistinguishable because they’re the same brightness to our eyes. Then, if that person put on glasses with a filter that blocked out green light, all of a sudden, the green tomato looks darker. 
Two things that always looked identical now look totally different.” “I wouldn’t claim that the EnChroma lens has no effect on brightness,” Schmeder said in response to Gizmodo’s queries. “Pretty much anything that’s strongly colored will suddenly seem brighter. It’s a side effect of the way the lens works.” But according to Schmeder, the lens’s neutral gray color maintains the balance of brightness between reds and greens. That is, all red things aren’t going to suddenly become brighter than all green things, he says. In the end, the best way to sort out whether the glasses are working as advertised is through objective testing. EnChroma has relied primarily qualitative user responses to evaluate the efficacy of its product. The company has also performed some clinical trials using the D15 colorblindness test, wherein subjects are asked to arrange 15 colored circles chromatically (in the order of the rainbow). In the 100 hue test, subjects arrange the colors within each row to represent a continuous spectrum of shade from one end to the other. Colors at the end of each row serve as anchors. Image Credit: Jordanwesthoff / Wikimedia In test results shared with Gizmodo, nine subjects all received higher D15 scores — that is, they placed fewer chips out of sequence — while wearing EnChroma glasses. “What is apparent from the study is that not everyone exhibits the same degree of improvement, nor does the extent of improvement correlate to the degree of [colorblindness] severity,” EnChroma writes. “However, everyone does improve, some to that of mild/normal from severe.” But there’s still the concern that wearing a colored filter while taking the D15 test will alter the relative brightness of the chips, providing a context cue that can help subjects score higher. For a more objective test, Nietz recommends the anomaloscope, in which an observer is asked to match one half of a circular field, illuminated with yellow light, to the other half of the field, which is a mixture of red and green. The brightness of the yellow portion can be varied, while the other half can vary continuously from fully red to fully green. Screenshot from an online color matching test that mimics the anomaloscope. Via colorblindness.com. “This is considered to be the gold standard for testing red-green color vision,” Nietz said. “The anomaloscope is designed in such a way that adjustments can be made so that colorblind people can’t use brightness as a cue so the brightness differences produced by the glasses would not help colorblind people cheat.” Whether EnChroma’s glasses are expanding the red-green color dimension, or simply creating a more saturated, contrast-filled world, there’s no doubt that the technology has had positive effects for some colorblind people. “The biggest point for me wearing this glasses is that I’m more inspired,” Cincinnati-based guitarist and EnChroma user Lance Martin told Gizmodo. Image Credit: Shutterstock Martin, who has been “wearing these things nonstop” for the last several months, says that ordinary experiences, like looking at highway signs or foliage while driving, now fill him with insight and awe. “I always interpreted interstate road signs as a really dark evergreen, but they’re actually a color green i’d never been able to see before,” he said. “I’ve been walking more, just to see the flowers. Inspiration fuels my career, and for me to be inspired by the mundane, everyday — that is mind-blowing.” The world of color is inherently subjective. 
Even amongst those who see “normally,” there’s no telling whether our brains interpret colored light the same way. We assume that colors are a shared experience, because we can distinguish different ones and agree on their names. If a pair of glasses can help the colorblind do the same — whether or not the technology causes them to see “normally”— that’s one less reason to treat this condition as a disadvantage. “People are looking for access to jobs where they’re being excluded because of colorblindness,” Schmeder said. “My belief is that if we really analyze this problem closely, we can come up with a reasonable accommodation that works for some situations. Even if we can’t help everyone, if we can elevate the level of discussion around this and help some people, that’d be amazing.” Top image: Frameri / En Chroma
The Zika virus has captured headlines for its horrific effects on fetuses. Everyone is well aware that the virus spreads through mosquito bites. However, less well known is that Zika can also spread through sexual contact and blood transfusions. Since transfused blood is a frequently utilized component of modern medicine, keeping the blood supply free from Zika is critical to prevent its spread. It is with this goal that the FDA has recently updated its recommendations stating “all donated units of blood should be screened for Zika Virus” (August 26, 2016). This prudent recommendation is just the latest attempt to keep the blood supply safe from nasty infections. Any donated blood is already being screened for diseases such as HIV, hepatitis B, hepatitis C, and West Nile virus among others. Since the basis of transfusion-transmitted disease is infected donors, it begs the question: Can we create synthetic blood and eliminate donor reliance? Every two seconds a blood transfusion is performed in the US. This astonishing demand is met by over 40 thousand volunteer donors rolling up their sleeves and baring their veins every day. Once drawn from a donor, blood begins a complicated and expensive journey before it can be transfused into a needing patient. Assuring a safe blood product is not cheap, the cost of producing a single, safe red blood cell unit is roughly $250, which equates to nearly $3 billion/year in production costs alone. The complexity and expense of obtaining a safe, constant supply of this life-saving product are due to three major attributes of blood itself. 1) Blood is not universally compatible among all people, and therefor transfusions require time- consuming tests to assure compatibility. Transfusion of the wrong blood type is often fatal. 2) Blood requires meticulous and extensive safety screening as unsafe blood can transmit many infectious diseases, including HIV 3) Blood degrades when out of the human body; limiting its shelf life. These characteristics limit our use of blood in the field (ambulances/battlefield), make blood transfusions inaccessible in developing countries, and result in occasional transfusion- transmitted diseases. These limitations could be overcome if a synthetic blood substitute were available. Fortunately, scientists and entrepreneurs are attempting to create such a product. To better understand the future of blood transfusions we need to delve into their past. Prior to the mid-17th century healers attempted to replace lost blood with animal blood, urine, milk, pitch, beer, and wine; and as expected patient survival was measured in minutes. In 1667 the first recorded successful human/human transfusion was made. However, at the time blood types were not understood and predictably the vast majority of transfusions resulted in a quick death. In 1907 the Czech physician Jan Jansky first described the four blood groups, leading to type-specific blood transfusion and the common use of this life saving procedure. During World War II the nation and military ran into massive shortages of available blood. This prompted the US armed forces to begin funding the development of synthetic blood substitutes. The early substitutes were abandoned after they proved damaging to healthy patients. The public became interested in blood substitutes again in the 1980s when the HIV epidemic hit and blood transfusions were shown to transmit the disease. Some 14,000 people acquired HIV through contaminated blood in the early 80s before testing became available. 
Faced with the possible end of donated blood, interest in blood substitutes spiked again. The research in creating synthetic blood resulted in two major synthetic products. The first is known as hemoglobin-based oxygen carriers (HBOCs). Red blood cells, the oxygen-carrying cells present in blood, are able to transport oxygen because they are full of the protein hemoglobin, which helps bind oxygen and allows 70x more effective oxygen transport than water. All the HBOCs use hemoglobin, freed from red blood cells, to carry oxygen. Intuitively, this makes perfect sense: if the human body is using hemoglobin packed in red cells, and hemoglobin can be made synthetically then a solution of hemoglobin should be able to replace blood. Were it that simple, we would already have synthetic blood. There are several problems with free hemoglobin in the blood. The first is instability. Free hemoglobin breaks into small protein sub-units that are filtered from the blood by the kidneys. Quick clearance from the body means only a short duration of oxygen delivery (compared to 120 days for donated blood). Secondly, hemoglobin interacts with another molecule, nitric oxide (NO), a molecule responsible to dilate small blood vessels. Normally, hemoglobin in red blood cells releases bound NO at the proper time to dilate blood vessels and improve blood flow. However, free hemoglobin diffuses into the blood vessel walls and strongly binds NO. This causes small blood vessels to severely constrict resulting in a massive spike in blood pressure (think about constricting end of a hose causing water to spray more forcefully). When this constriction happens in the kidneys it can irreversibly damage them. Furthermore, constriction of the small blood vessels in the heart leads to heart attacks. For these reasons, free hemoglobin is not a viable blood substitute. What if you could modify hemoglobin to keep its oxygen-carrying capacity but eliminate its aforementioned short comings? This is the research that surged in the 80s and has continued until today. The HBOCs accomplish this through different chemical modifications of free hemoglobin. These modifications include cross-linking the subunits of hemoglobin together (to stop them from splitting up), polymerizing multiple hemoglobin molecules together (increasing circulation time), and there are even chemical modifications to prevent hemoglobin from binding NO (reducing the blood pressure spike). These basic modifications were performed by several companies which showed their product was able to carry and deliver oxygen equivalently to blood in animal studies and in the late 90′s and the early 2000s they began a series of clinical trials. The first set of trials administered HBOCs to seemingly healthy patients intraoperatively to avoid transfusion with blood. The justification being that these healthy people need only a small increase in oxygen carrying capacity so why expose them to any possible infectious risk if all you need to do is bridge them for a few days. By all accounts, these trials were successful: Giving an HBOC to a patient undergoing an elective surgery reduced the need for blood transfusions without significant side effects. When these limited clinical trials were successful, the manufacturing companies went after the multi-billion-dollar prize… using a blood substitute where real blood was impossible to deliver. 
That is, delivery of blood to individuals in traumatic situations who could not receive cross-match compatible blood (i.e., in the ambulance, or in the battlefield). These trials opened up a whole can of worms. For ethical reasons, patients are required to give consent before beginning treatment with any unestablished method. However, when a patient is incapacitated from a severe trauma they cannot provide consent. Does that mean trauma clinical trials are impossible? Nope. The FDA created an “Exception from Informed Consent” for this very reason and the HBOC trials were the first treatments to undergo such “consent free” trials. These trials were extremely controversial before they started. Instead of consenting an individual, companies/researchers held public hearings in a region and put up billboards instructing individuals how to opt out by obtaining a wrist band. The trials took place in mostly poor and lower educated areas where residents were unlikely to fully understand the risks. The controversy only grew after the trials because the second set of trials were failures. Several HBOC products showed that they were significantly more lethal than standard treatment (just giving saline) in traumatic blood loss cases. The trial failures along with the hundreds of millions of lost development costs meant R&D funding in the space all but evaporated. In 2008, all previous HBOC trials were reviewed and it came to light that HBOCs showed increased mortality in traumatic settings across the board. A recommendation was made to the FDA to prohibit further phase III clinical trials of HBOCs until the science advanced. The science has been advancing over the past decade. The most recent generation of HBOCs now perform a chemical modification on hemoglobin known as PEGylation. This is the process of adding a polyethylene glycol (think non-reactive plastic) directly to a molecule to make it less reactive in the body. This has resulted in several new trials looking at PEGylated HBOCs to help reduce the need for transfusions and for use in patients where transfusions are not an option. These trials are more limited in scope than the large trials of the early 2000s. However, if Zika does infiltrate the blood supply in a meaningful way public interest and funding will flow into this space again. Of note in South Africa, Hemopure, a HBOC, cleared clinical trials and is available to prevent blood transfusions in elective surgery cases. One of the major reasons for passing clinical trials there is the frighteningly high rate of HIV in the South African adult population (18.8%!), which means obtaining safe blood is extremely difficult. Furthermore, your dog may get HBOC after trauma as Oxyglobin is approved for veterinary medicine in the United States. The other major synthetic blood product, known as perfluorocarbons (PFC), is not based on hemoglobin but instead completely synthetic. PFCs are simple in concept…they are a non-reactive liquid that has the ability to dissolve oxygen at concentration similar to air! This science fiction-like substance can dissolve oxygen so well that it is possible to be submerged in PFC and be able to breath normally. This concept, known as liquid breathing, is demonstrated here on a sedated mouse submerged in PFC. Treatment with PFCs does not include submerging a human in PFC liquid but rather injecting PFC mixture into their blood that allows them to take up additional oxygen by breathing 100% oxygen. 
In addition to being completely synthetic, PFCs are also very small, 1/40th the size of a red blood cell. This allows them to theoretically deliver oxygen at a site where blood cannot flow such as brain tissue beyond the site of a stroke. Sadly, the news is not all good with PFCs. During a phase III trial, the product Oxygent was shown to increase stroke and was subsequently not approved for sale by the FDA. Another product, Oxycyte, was not cleared to even undergo phase III trials after results from a phase II dose escalation trial in Switzerland. If, however you find yourself in Russia or Mexico with a traumatic bleed you could potentially receive a PFC as one is approved in those countries. In conclusion, making a synthetic blood product is tremendously difficult. The blood we carry in our veins is the result of millions of years of evolution and not quickly supplanted by a laboratory creation. However, there is promise in the two current strategies to reduce our reliance on donated blood. With a massive benefit to society and gigantic financial prize at stake, research should continue until synthetic blood is a reality. Although less studied, I believe a PFC solution is more likely to serve as a future blood substitute given the ease of manufacturing and biological inertness. But until we solve our limitations I strongly urge you to help out your community, roll up your sleeves, and donate blood today! http://www.americasblood.org/donate-blood.aspx
Some sentences assert but one fact; others, more. Some assert an independent, or principal, proposition; others, a secondary, or qualifying, proposition, and sentences are distinguished accordingly. The office of a word in a sentence determines its position in the diagram, according to the following principles:
- The principal parts of a sentence are placed uppermost, and on the same horizontal line; as 1, 2, 3.
- The Subject of a sentence takes the first place; as 1.
- The Predicate is placed to the right of the subject—attached; as 2—7—11—26.
- The Object is placed to the right of the predicate; as 3. The object of a phrase is placed to the right of the word which introduces the phrase; as 22 to the right of 21.
- A word, phrase, or sentence is placed beneath the word which it qualifies; as 4 and 5 qualify 1; (25, 26, x) qualify 22.
- A word used to introduce a phrase is placed beneath the word which the phrase qualifies—having its object to the right and connecting both; as 15 connecting 12 and 16; 21 connecting 3 and 22.
- A word used only to connect is placed between the two words connected; as 10 between 7 and 11; and a word used to introduce a sentence is placed above the predicate of the sentence, and attached to it by a line; as 0 above 2.
- A word relating back to another word is attached to the antecedent by a line; as 6 attached to 1, and x to 22.
This system comes from S. W. Clark's grammar of the English language, in which words, phrases, and sentences are classified according to their offices and their relation to each other, illustrated by a complete system of diagrams (“Speech is the body of thought.”).
In the United States there are currently two major varieties of diagrams in use to represent sentence structure: traditional diagrams, used more or less exclusively in junior high school and high school classrooms, and tree diagrams, the most common method used by professional linguists.
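Since the passage contrasts traditional diagrams with the tree diagrams used by linguists, here is a small sketch of how a constituency tree for a simple sentence can be written and displayed with the NLTK library. The particular bracketed analysis is an illustrative assumption, not something prescribed by Clark's system.

```python
# Requires: pip install nltk
# Sketch of a linguist-style tree diagram (constituency tree) for a simple
# sentence. The bracketing below is an illustrative analysis, not taken from
# the text above.
from nltk import Tree

sentence = "(S (NP (Det The) (N dog)) (VP (V chased) (NP (Det the) (N cat))))"
tree = Tree.fromstring(sentence)

tree.pretty_print()   # draws the tree with ASCII art in the terminal
print(tree.label())   # 'S' -- the root node of the diagram
print(tree.leaves())  # ['The', 'dog', 'chased', 'the', 'cat']
```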
Compare Pairs examines related samples and makes inferences about the differences between them. Related samples occur when observations are made on the same set of items or subjects at different times, or when another form of matching has occurred. If knowing the values in one sample could tell you something about the values in the other sample, then the samples are related. There are a few different study designs that produce related data. A paired study design takes individual observations from a pair of related subjects. A repeated measures study design takes multiple observations on the same subject. A matched pair study design takes individual observations on multiple subjects that are matched on other covariates. The purpose of matching similar subjects is often to reduce or eliminate the effects of a confounding factor.
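A common way to make inferences about paired differences is a paired t-test. The sketch below uses SciPy's ttest_rel on made-up before/after measurements; the data and the choice of test are illustrative assumptions, not part of the description above.

```python
# Requires: pip install scipy
# Sketch of a related-samples comparison using a paired t-test.
# The before/after values are invented for illustration.
from scipy import stats

before = [72, 68, 75, 80, 66, 71, 78, 69]   # e.g. measurements at time 1
after  = [70, 65, 74, 76, 66, 69, 75, 67]   # same subjects at time 2

# ttest_rel tests whether the mean of the paired differences is zero.
t_stat, p_value = stats.ttest_rel(before, after)
diffs = [b - a for b, a in zip(before, after)]

print(f"mean difference = {sum(diffs) / len(diffs):.2f}")
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```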
Students can read through each session, making summary notes as they go. Within the Supporting Documents section of the Teachers’ platform, there are worksheets named ‘Handout’ which work really well as a basis for creating mind maps. These sheets could be printed / emailed or displayed at the front of the classroom for Students to create their own mind maps. Throughout the sessions, there are Think First tasks – questions designed to get students thinking about the topic. The answers do not save in place, so students can test themselves time and again until they feel confident they understand. The Think First tasks can be used to spark a whole class discussion or, where appropriate, a class competition to give an answer. You could split the class into two teams and see which group can remember the most information. There are many interactive features throughout the sessions, such as clickable images, which expand to show more specific information. Students can test their knowledge, and each other’s, by working in pairs to discuss the definitions then clicking on the pictures to see if they’re right. End of session quiz The end of session quizzes can be found on the last page of each session, making them easy to locate to revisit each topic. Incorrect answers will bounce away from the blank spaces, meaning it can be great as a self-marking assessment. These quizzes could be projected to the front of the class in a teacher led approach, teachers could then either complete it themselves with students raising their hands to answer, or for a more interactive element, students could take it in turns to complete an answer. For classroom and individual learning, students are able to complete the quiz when logged into their learner account. If students are struggling to complete the session quiz, they can revisit the topics in the session and consolidate their learning. Practice papers can be found to the right of the Course Materials in the teaching account. There are also ‘Quiz Questions’ work sheets which can be found in the Supporting Documents of the teaching account and have exam-style questions for students to do even more practice.
Waterlogging occurs when the soil profile is completely saturated with water and there is insufficient oxygen in the pore spaces for plant roots to respire adequately. As a result, the level of carbon dioxide in the soil increases, which affects plant growth. No universal threshold of waterlogging has been identified, because the oxygen demand of plant roots varies with growth stage. Waterlogged conditions reduce root growth and can ultimately lead to root-rot disease, so the plant continues to suffer later on. It is a common observation that a plant which has experienced waterlogging becomes deficient in phosphorus and nitrogen and especially sensitive to high temperatures; if these symptoms are not observed in time, yield losses can occur. Waterlogging may be temporary or permanent: temporary waterlogging mostly occurs due to heavy rainfall, whereas permanent waterlogging occurs due to a rise in the water table. Waterlogging has many causes, which fall into two main groups: (1) natural and (2) human-induced. Natural causes include the physiography of a watershed, geology, weather, soil type, seepage inflows and irrigation. Human-induced waterlogging includes over-irrigation, seepage from canals, inadequate drainage, poor irrigation management, obstruction of natural drainage, land-locked patches with no outlet, excessive rainfall, flat topography, occasional spills from floods, and closed/contour areas. Waterlogging and salinity are major problems in our soils: once salinity occurs, waterlogging follows it. About 20% of the cultivable land area is affected, and 8% is severely affected. According to WAPDA's soil salinity survey, the problem will become more severe, increasing to as much as 30%, if it continues unchecked. SCARP programmes have also been launched, and about two-thirds of the 40 stations have been completed. Waterlogging affects crops in many ways. It delays normal cultivation operations, which are adversely affected by the presence of excess water in the soil. Aquatic plants grow in the soil and compete with the major crops; sometimes, under extreme waterlogging, only wild growth remains. It also causes disease and decay of the roots; external symptoms on the foliage and fruit are common, and the yield of many crops is ultimately reduced. Cash crops cannot be cultivated, and the land is restricted to a few crops. Waterlogging also reduces soil oxygen because the soil pores are filled with water, so the soil CO2 level increases and root growth is affected. Lack of aeration also leads to manganese precipitation, which is toxic to plants. Waterlogging affects soils in many ways as well. Waterlogged soil warms up slowly, which restricts seedling growth, seed germination and root development; biotic activity is also reduced, and ultimately nitrogen fixation and other soil functions are disturbed. Waterlogging builds up soil salinity when the water table becomes high: water evaporates at the surface and leaves the salts behind. High salinity causes the deposition of sodium and other salts in the upper layer, which ultimately destroys the soil structure and may become toxic to plants and other soil biota. Soil microorganisms involved in nitrogen fixation are also affected, which reduces plant growth.
Control measures of Water-Logging
- Installation of tube-wells for irrigation and vertical drainage
- Construction of surface-drains and tile-drains
- Lining of water-channels
- Planting of eucalyptus in water-logged areas
- Planning and designing of future canals on proper lines
- Providing artificial drainage systems
- Lowering of water levels in canals
- Fixing the cropping pattern
- Controlling the intensity of water application
- Optimum use of water
Athar Mahmood, Muhammad Umer Chattha, M. Umair, M. Adnan and M. Mahran Aslam
Viral vs. Antibody Test
If you are considering getting a test for COVID-19, make sure you know the difference between a viral test and an antibody test.
Viral tests, also known as polymerase chain reaction (PCR) or molecular tests, tell you if you have a current infection and are usually done with a swab in your nasal passage.
Antibody tests, also known as serology tests, might tell you if you had a past infection. These tests are done by taking a blood sample. An antibody test should not be used to see if you are currently infected with COVID-19. It can take a few weeks after infection for your body to make antibodies. Scientists are still researching whether having antibodies to the virus that causes COVID-19 provides protection from getting infected with the virus again. We do not yet know how much protection the antibodies might provide or how long this protection might last.
Cryptographic hash function
A cryptographic hash function (CHF) is a mathematical algorithm that maps data of an arbitrary size (often called the "message") to a bit array of a fixed size (the "hash value", "hash", or "message digest"). It is a one-way function, that is, a function for which it is practically infeasible to invert or reverse the computation. Ideally, the only way to find a message that produces a given hash is to attempt a brute-force search of possible inputs to see if they produce a match, or use a rainbow table of matched hashes. Cryptographic hash functions are a basic tool of modern cryptography. A cryptographic hash function must be deterministic, meaning that the same message always results in the same hash. Ideally it should also have the following properties:
- it is quick to compute the hash value for any given message
- it is infeasible to generate a message that yields a given hash value (i.e. to reverse the process that generated the given hash value)
- it is infeasible to find two different messages with the same hash value
- a small change to a message should change the hash value so extensively that a new hash value appears uncorrelated with the old hash value (avalanche effect)
Cryptographic hash functions have many information-security applications, notably in digital signatures, message authentication codes (MACs), and other forms of authentication. They can also be used as ordinary hash functions, to index data in hash tables, for fingerprinting, to detect duplicate data or uniquely identify files, and as checksums to detect accidental data corruption. Indeed, in information-security contexts, cryptographic hash values are sometimes called (digital) fingerprints, checksums, or just hash values, even though all these terms stand for more general functions with rather different properties and purposes. Most cryptographic hash functions are designed to take a string of any length as input and produce a fixed-length hash value. A cryptographic hash function must be able to withstand all known types of cryptanalytic attack. In theoretical cryptography, the security level of a cryptographic hash function has been defined using the following properties:
- Pre-image resistance - Given a hash value h, it should be difficult to find any message m such that h = hash(m). This concept is related to that of a one-way function. Functions that lack this property are vulnerable to preimage attacks.
- Second pre-image resistance - Given an input m1, it should be difficult to find a different input m2 such that hash(m1) = hash(m2). This property is sometimes referred to as weak collision resistance. Functions that lack this property are vulnerable to second-preimage attacks.
- Collision resistance - It should be difficult to find two different messages m1 and m2 such that hash(m1) = hash(m2). Such a pair is called a cryptographic hash collision. This property is sometimes referred to as strong collision resistance. It requires a hash value at least twice as long as that required for pre-image resistance; otherwise collisions may be found by a birthday attack. Collision resistance implies second pre-image resistance but does not imply pre-image resistance.
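As a quick, hedged illustration of determinism, fixed output size, and the avalanche effect, the sketch below uses Python's standard hashlib module with SHA-256; the input strings are arbitrary examples.

import hashlib

# Determinism: the same message always yields the same digest.
assert hashlib.sha256(b"message").hexdigest() == hashlib.sha256(b"message").hexdigest()
# Fixed output size: inputs of any length map to a 256-bit (32-byte) digest.
assert len(hashlib.sha256(b"x" * 100000).digest()) == 32
# Avalanche effect: a one-character change gives an apparently unrelated digest.
print(hashlib.sha256(b"message").hexdigest())
print(hashlib.sha256(b"messagE").hexdigest())

Pre-image, second pre-image, and collision resistance concern the reverse direction: how hard it is to go from digests back to messages, which is what the definitions above formalize.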
The weaker assumption is always preferred in theoretical cryptography, but in practice, a hash-function which is only second pre-image resistant is considered insecure and is therefore not recommended for real applications. Informally, these properties mean that a malicious adversary cannot replace or modify the input data without changing its digest. Thus, if two strings have the same digest, one can be very confident that they are identical. Second pre-image resistance prevents an attacker from crafting a document with the same hash as a document the attacker cannot control. Collision resistance prevents an attacker from creating two distinct documents with the same hash. A function meeting these criteria may still have undesirable properties. Currently, popular cryptographic hash functions are vulnerable to length-extension attacks: given hash(m) and len(m) but not m, by choosing a suitable m′ an attacker can calculate hash(m ∥ m′), where ∥ denotes concatenation. This property can be used to break naive authentication schemes based on hash functions. The HMAC construction works around these problems. In practice, collision resistance is insufficient for many practical uses. In addition to collision resistance, it should be impossible for an adversary to find two messages with substantially similar digests; or to infer any useful information about the data, given only its digest. In particular, a hash function should behave as much as possible like a random function (often called a random oracle in proofs of security) while still being deterministic and efficiently computable. This rules out functions like the SWIFFT function, which can be rigorously proven to be collision-resistant assuming that certain problems on ideal lattices are computationally difficult, but, as a linear function, does not satisfy these additional properties. Checksum algorithms, such as CRC32 and other cyclic redundancy checks, are designed to meet much weaker requirements and are generally unsuitable as cryptographic hash functions. For example, a CRC was used for message integrity in the WEP encryption standard, but an attack was readily discovered, which exploited the linearity of the checksum. Degree of difficulty In cryptographic practice, "difficult" generally means "almost certainly beyond the reach of any adversary who must be prevented from breaking the system for as long as the security of the system is deemed important". The meaning of the term is therefore somewhat dependent on the application since the effort that a malicious agent may put into the task is usually proportional to their expected gain. However, since the needed effort usually multiplies with the digest length, even a thousand-fold advantage in processing power can be neutralized by adding a few dozen bits to the latter. For messages selected from a limited set of messages, for example passwords or other short messages, it can be feasible to invert a hash by trying all possible messages in the set. Because cryptographic hash functions are typically designed to be computed quickly, special key derivation functions that require greater computing resources have been developed that make such brute-force attacks more difficult. In some theoretical analyses "difficult" has a specific mathematical meaning, such as "not solvable in asymptotic polynomial time". Such interpretations of difficulty are important in the study of provably secure cryptographic hash functions but do not usually have a strong connection to practical security. 
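To illustrate the length-extension issue and the HMAC workaround mentioned earlier in this section, here is a minimal sketch using Python's standard hashlib and hmac modules; the key and message are placeholders, and the naive construction appears only to contrast it with HMAC.

import hashlib, hmac

key = b"shared-secret-key"                   # placeholder; a real key should be random and high-entropy
message = b"amount=100&to=alice"             # placeholder message

# Naive MAC: hash(key || message). With Merkle-Damgard hashes such as SHA-256,
# an attacker who knows this tag and len(message) can extend the message.
naive_tag = hashlib.sha256(key + message).hexdigest()

# HMAC wraps the hash in a keyed construction that does not suffer from length extension.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()
# Verification should use a constant-time comparison.
print(hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest()))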
As noted above, theoretical notions of difficulty do not always match practical security: an exponential-time algorithm can sometimes still be fast enough to make a feasible attack, while a polynomial-time algorithm (e.g., one that requires n^20 steps for n-digit keys) may be too slow for any practical use.
An illustration of the potential use of a cryptographic hash is as follows: Alice poses a tough math problem to Bob and claims that she has solved it. Bob would like to try it himself, but would yet like to be sure that Alice is not bluffing. Therefore, Alice writes down her solution, computes its hash, and tells Bob the hash value (whilst keeping the solution secret). Then, when Bob comes up with the solution himself a few days later, Alice can prove that she had the solution earlier by revealing it and having Bob hash it and check that it matches the hash value given to him before. (This is an example of a simple commitment scheme; in actual practice, Alice and Bob will often be computer programs, and the secret would be something less easily spoofed than a claimed puzzle solution.)
Verifying the integrity of messages and files
An important application of secure hashes is the verification of message integrity. Comparing message digests (hash digests over the message) calculated before and after transmission can determine whether any changes have been made to the message or file. MD5, SHA-1, or SHA-2 hash digests are sometimes published on websites or forums to allow verification of integrity for downloaded files, including files retrieved using file sharing such as mirroring. This practice establishes a chain of trust as long as the hashes are posted on a trusted site – usually the originating site – authenticated by HTTPS. Using a cryptographic hash and a chain of trust detects malicious changes to the file. Non-cryptographic error-detecting codes such as cyclic redundancy checks only protect against non-malicious alterations of the file, since an intentional spoof can readily be crafted to have the colliding code value.
Signature generation and verification
Almost all digital signature schemes require a cryptographic hash to be calculated over the message. This allows the signature calculation to be performed on the relatively small, statically sized hash digest. The message is considered authentic if the signature verification succeeds given the signature and recalculated hash digest over the message. So the message integrity property of the cryptographic hash is used to create secure and efficient digital signature schemes.
Password verification commonly relies on cryptographic hashes. Storing all user passwords as cleartext can result in a massive security breach if the password file is compromised. One way to reduce this danger is to only store the hash digest of each password. To authenticate a user, the password presented by the user is hashed and compared with the stored hash. A password reset method is required when password hashing is performed; original passwords cannot be recalculated from the stored hash value. Standard cryptographic hash functions are designed to be computed quickly, and, as a result, it is possible to try guessed passwords at high rates. Common graphics processing units can try billions of possible passwords each second. Password hash functions that perform key stretching – such as PBKDF2, scrypt or Argon2 – commonly use repeated invocations of a cryptographic hash to increase the time (and in some cases computer memory) required to perform brute-force attacks on stored password hash digests.
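The sketch below shows salted password hashing with key stretching, using Python's standard hashlib.pbkdf2_hmac. The function names, the 16-byte salt length, and the iteration count are illustrative choices, not values taken from the text.

import hashlib, hmac, os

ITERATIONS = 210_000                         # illustrative; higher counts slow guessing further

def hash_password(password):
    salt = os.urandom(16)                    # random, non-secret, stored alongside the digest
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("123456", salt, digest)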
A password hash requires the use of a large random, non-secret salt value which can be stored with the password hash. The salt randomizes the output of the password hash, making it impossible for an adversary to store tables of passwords and precomputed hash values to which the password hash digest can be compared. The output of a password hash function can also be used as a cryptographic key. Password hashes are therefore also known as password-based key derivation functions (PBKDFs).
A proof-of-work system (or protocol, or function) is an economic measure to deter denial-of-service attacks and other service abuses such as spam on a network by requiring some work from the service requester, usually meaning processing time by a computer. A key feature of these schemes is their asymmetry: the work must be moderately hard (but feasible) on the requester side but easy to check for the service provider. One popular system – used in Bitcoin mining and Hashcash – uses partial hash inversions to prove that work was done, to unlock a mining reward in Bitcoin, and as a good-will token to send an e-mail in Hashcash. The sender is required to find a message whose hash value begins with a number of zero bits. The average work that the sender needs to perform in order to find a valid message is exponential in the number of zero bits required in the hash value, while the recipient can verify the validity of the message by executing a single hash function. For instance, in Hashcash, a sender is asked to generate a header whose 160-bit SHA-1 hash value has the first 20 bits as zeros. The sender will, on average, have to try 2^19 times to find a valid header.
File or data identifier
A message digest can also serve as a means of reliably identifying a file; several source code management systems, including Git, Mercurial and Monotone, use the sha1sum of various types of content (file content, directory trees, ancestry information, etc.) to uniquely identify them. Hashes are used to identify files on peer-to-peer filesharing networks. For example, in an ed2k link, an MD4-variant hash is combined with the file size, providing sufficient information for locating file sources, downloading the file, and verifying its contents. Magnet links are another example. Such file hashes are often the top hash of a hash list or a hash tree which allows for additional benefits.
One of the main applications of a hash function is to allow the fast look-up of data in a hash table. Being hash functions of a particular kind, cryptographic hash functions lend themselves well to this application too. However, compared with standard hash functions, cryptographic hash functions tend to be much more expensive computationally. For this reason, they tend to be used in contexts where it is necessary for users to protect themselves against the possibility of forgery (the creation of data with the same digest as the expected data) by potentially malicious participants.
Hash functions based on block ciphers
The methods resemble the block cipher modes of operation usually used for encryption. Many well-known hash functions, including MD4, MD5, SHA-1 and SHA-2, are built from block-cipher-like components designed for the purpose, with feedback to ensure that the resulting function is not invertible. SHA-3 finalists included functions with block-cipher-like components (e.g., Skein, BLAKE) though the function finally selected, Keccak, was built on a cryptographic sponge instead.
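The following is a rough sketch of the partial hash inversion idea behind the Hashcash-style proof of work described above. It uses Python's standard hashlib with SHA-256 rather than Hashcash's SHA-1, a made-up header string, and a small difficulty so it runs quickly; all of those are illustrative assumptions.

import hashlib
from itertools import count

def leading_zero_bits(digest):
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def mine(header, difficulty=16):
    # Try counters until hash(header + counter) starts with `difficulty` zero bits.
    for nonce in count():
        digest = hashlib.sha256(header + str(nonce).encode()).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce

header = b"resource=someone@example.com;date=20240101;"
nonce = mine(header)
# Verification is a single hash, which is what makes the scheme asymmetric.
assert leading_zero_bits(hashlib.sha256(header + str(nonce).encode()).digest()) >= 16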
A standard block cipher such as AES can be used in place of the purpose-built components described above; that might be useful when an embedded system needs to implement both encryption and hashing with minimal code size or hardware area. However, that approach can have costs in efficiency and security. The ciphers in hash functions are built for hashing: they use large keys and blocks, can efficiently change keys every block, and have been designed and vetted for resistance to related-key attacks. General-purpose ciphers tend to have different design goals. In particular, AES has key and block sizes that make it nontrivial to use to generate long hash values; AES encryption becomes less efficient when the key changes each block; and related-key attacks make it potentially less secure for use in a hash function than for encryption.
Hash function design
A hash function must be able to process an arbitrary-length message into a fixed-length output. This can be achieved by breaking the input up into a series of equally sized blocks, and operating on them in sequence using a one-way compression function. The compression function can either be specially designed for hashing or be built from a block cipher. A hash function built with the Merkle–Damgård construction is as resistant to collisions as is its compression function; any collision for the full hash function can be traced back to a collision in the compression function. The last block processed should also be unambiguously length padded; this is crucial to the security of this construction. This construction is called the Merkle–Damgård construction. Most common classical hash functions, including SHA-1 and MD5, take this form.
Wide pipe versus narrow pipe
A straightforward application of the Merkle–Damgård construction, where the size of the hash output is equal to the internal state size (between each compression step), results in a narrow-pipe hash design. This design has many inherent flaws, including length-extension, multicollisions, long message attacks, and generate-and-paste attacks, and it also cannot be parallelized. As a result, modern hash functions are built on wide-pipe constructions that have a larger internal state size, which range from tweaks of the Merkle–Damgård construction to new constructions such as the sponge construction and the HAIFA construction. None of the entrants in the NIST hash function competition use a classical Merkle–Damgård construction. Meanwhile, truncating the output of a longer hash, such as is done in SHA-512/256, also defeats many of these attacks.
Use in building other cryptographic primitives
Hash functions can be used to build other cryptographic primitives. For these other primitives to be cryptographically secure, care must be taken to build them correctly. Just as block ciphers can be used to build hash functions, hash functions can be used to build block ciphers. Luby–Rackoff constructions using hash functions can be provably secure if the underlying hash function is secure. Also, many hash functions (including SHA-1 and SHA-2) are built by using a special-purpose block cipher in a Davies–Meyer or other construction. That cipher can also be used in a conventional mode of operation, without the same security guarantees; examples include SHACAL, BEAR and LION. Pseudorandom number generators (PRNGs) can be built using hash functions. This is done by combining a (secret) random seed with a counter and hashing it.
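A minimal sketch of that seed-plus-counter idea is shown below, using Python's standard hashlib with SHA-256 as the hash. The function name and output length are illustrative, and a real deployment would use a vetted construction such as HMAC_DRBG or HKDF rather than this toy.

import hashlib

def hash_prng(seed, n_bytes):
    # Illustrative only: derive a byte stream by hashing seed || counter.
    out = b""
    counter = 0
    while len(out) < n_bytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n_bytes]

keystream = hash_prng(b"secret-random-seed", 64)            # 64 pseudorandom bytes
assert keystream == hash_prng(b"secret-random-seed", 64)    # deterministic for the same seed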
Some hash functions, such as Skein, Keccak, and RadioGatún, output an arbitrarily long stream and can be used as a stream cipher, and stream ciphers can also be built from fixed-length digest hash functions. Often this is done by first building a cryptographically secure pseudorandom number generator and then using its stream of random bytes as keystream. SEAL is a stream cipher that uses SHA-1 to generate internal tables, which are then used in a keystream generator more or less unrelated to the hash algorithm. SEAL is not guaranteed to be as strong (or weak) as SHA-1. Similarly, the key expansion of the HC-128 and HC-256 stream ciphers makes heavy use of the SHA-256 hash function.
Concatenating outputs from multiple hash functions provides collision resistance as good as the strongest of the algorithms included in the concatenated result. For example, older versions of Transport Layer Security (TLS) and Secure Sockets Layer (SSL) used concatenated MD5 and SHA-1 sums. This ensures that a method to find collisions in one of the hash functions does not defeat data protected by both hash functions. For Merkle–Damgård construction hash functions, the concatenated function is as collision-resistant as its strongest component, but not more collision-resistant. Antoine Joux observed that 2-collisions lead to n-collisions: if it is feasible for an attacker to find two messages with the same MD5 hash, then they can find as many additional messages with that same MD5 hash as they desire, with no greater difficulty. Among those n messages with the same MD5 hash, there is likely to be a collision in SHA-1. The additional work needed to find the SHA-1 collision (beyond the exponential birthday search) requires only polynomial time.
Cryptographic hash algorithms
There are many cryptographic hash algorithms; this section lists a few algorithms that are referenced relatively often. A more extensive list can be found on the page containing a comparison of cryptographic hash functions.
MD5 was designed by Ronald Rivest in 1991 to replace an earlier hash function, MD4, and was specified in 1992 as RFC 1321. Collisions against MD5 can be calculated within seconds, which makes the algorithm unsuitable for most use cases where a cryptographic hash is required. MD5 produces a digest of 128 bits (16 bytes).
SHA-1 was developed as part of the U.S. Government's Capstone project. The original specification – now commonly called SHA-0 – of the algorithm was published in 1993 under the title Secure Hash Standard, FIPS PUB 180, by U.S. government standards agency NIST (National Institute of Standards and Technology). It was withdrawn by the NSA shortly after publication and was superseded by the revised version, published in 1995 in FIPS PUB 180-1 and commonly designated SHA-1. Collisions against the full SHA-1 algorithm can be produced using the shattered attack and the hash function should be considered broken. SHA-1 produces a hash digest of 160 bits (20 bytes). Documents may refer to SHA-1 as just "SHA", even though this may conflict with the other Secure Hash Algorithms such as SHA-0, SHA-2, and SHA-3.
RIPEMD (RACE Integrity Primitives Evaluation Message Digest) is a family of cryptographic hash functions developed in Leuven, Belgium, by Hans Dobbertin, Antoon Bosselaers, and Bart Preneel at the COSIC research group at the Katholieke Universiteit Leuven, and first published in 1996. RIPEMD was based upon the design principles used in MD4 and is similar in performance to the more popular SHA-1.
RIPEMD-160 has, however, not been broken. As the name implies, RIPEMD-160 produces a hash digest of 160 bits (20 bytes). Whirlpool is a cryptographic hash function designed by Vincent Rijmen and Paulo S. L. M. Barreto, who first described it in 2000. Whirlpool is based on a substantially modified version of the Advanced Encryption Standard (AES). Whirlpool produces a hash digest of 512 bits (64 bytes). SHA-2 (Secure Hash Algorithm 2) is a set of cryptographic hash functions designed by the United States National Security Agency (NSA), first published in 2001. They are built using the Merkle–Damgård structure, from a one-way compression function itself built using the Davies–Meyer structure from a (classified) specialized block cipher. SHA-2 basically consists of two hash algorithms: SHA-256 and SHA-512. SHA-224 is a variant of SHA-256 with different starting values and truncated output. SHA-384 and the lesser-known SHA-512/224 and SHA-512/256 are all variants of SHA-512. SHA-512 is more secure than SHA-256 and is commonly faster than SHA-256 on 64-bit machines such as AMD64. The output size in bits is given by the extension to the "SHA" name, so SHA-224 has an output size of 224 bits (28 bytes); SHA-256, 32 bytes; SHA-384, 48 bytes; and SHA-512, 64 bytes. SHA-3 (Secure Hash Algorithm 3) was released by NIST on August 5, 2015. SHA-3 is a subset of the broader cryptographic primitive family Keccak. The Keccak algorithm is the work of Guido Bertoni, Joan Daemen, Michael Peeters, and Gilles Van Assche. Keccak is based on a sponge construction which can also be used to build other cryptographic primitives such as a stream cipher. SHA-3 provides the same output sizes as SHA-2: 224, 256, 384, and 512 bits. Configurable output sizes can also be obtained using the SHAKE-128 and SHAKE-256 functions. Here the -128 and -256 extensions to the name imply the security strength of the function rather than the output size in bits. BLAKE2, an improved version of BLAKE, was announced on December 21, 2012. It was created by Jean-Philippe Aumasson, Samuel Neves, Zooko Wilcox-O'Hearn, and Christian Winnerlein with the goal of replacing the widely used but broken MD5 and SHA-1 algorithms. When run on 64-bit x64 and ARM architectures, BLAKE2b is faster than SHA-3, SHA-2, SHA-1, and MD5. Although BLAKE and BLAKE2 have not been standardized as SHA-3 has, BLAKE2 has been used in many protocols including the Argon2 password hash, for the high efficiency that it offers on modern CPUs. As BLAKE was a candidate for SHA-3, BLAKE and BLAKE2 both offer the same output sizes as SHA-3 – including a configurable output size. BLAKE3, an improved version of BLAKE2, was announced on January 9, 2020. It was created by Jack O'Connor, Jean-Philippe Aumasson, Samuel Neves, and Zooko Wilcox-O'Hearn. BLAKE3 is a single algorithm, in contrast to BLAKE and BLAKE2, which are algorithm families with multiple variants. The BLAKE3 compression function is closely based on that of BLAKE2s, with the biggest difference being that the number of rounds is reduced from 10 to 7. Internally, BLAKE3 is a Merkle tree, and it supports higher degrees of parallelism than BLAKE2. Attacks on cryptographic hash algorithms There is a long list of cryptographic hash functions but many have been found to be vulnerable and should not be used. 
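Before turning to attacks, note that several of the algorithms above are available in Python's standard hashlib module, which makes it easy to compare digest sizes and to see how the SHAKE extendable-output functions take the output length from the caller (BLAKE3 is not in the standard library and would need a third-party package). The message string is arbitrary.

import hashlib

message = b"The quick brown fox jumps over the lazy dog"
print("SHA-256 :", hashlib.sha256(message).hexdigest())                    # 32-byte digest
print("SHA-512 :", hashlib.sha512(message).hexdigest())                    # 64-byte digest
print("SHA3-256:", hashlib.sha3_256(message).hexdigest())                  # Keccak-based SHA-3
print("BLAKE2b :", hashlib.blake2b(message, digest_size=32).hexdigest())   # configurable digest size
print("SHAKE128:", hashlib.shake_128(message).hexdigest(32))               # caller chooses output length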
As an example of this attrition, NIST selected 51 hash functions as candidates for round 1 of the SHA-3 hash competition, of which 10 were considered broken and 16 showed significant weaknesses and therefore did not make it to the next round; more information can be found on the main article about the NIST hash function competition.
Even if a hash function has never been broken, a successful attack against a weakened variant may undermine the experts' confidence. For instance, in August 2004 collisions were found in several then-popular hash functions, including MD5. These weaknesses called into question the security of stronger algorithms derived from the weak hash functions – in particular, SHA-1 (a strengthened version of SHA-0), RIPEMD-128, and RIPEMD-160 (both strengthened versions of RIPEMD). On August 12, 2004, Joux, Carribault, Lemuel, and Jalby announced a collision for the full SHA-0 algorithm. Joux et al. accomplished this using a generalization of the Chabaud and Joux attack. They found that the collision had complexity 2^51 and took about 80,000 CPU hours on a supercomputer with 256 Itanium 2 processors – equivalent to 13 days of full-time use of the supercomputer. In February 2005, an attack on SHA-1 was reported that would find collisions in about 2^69 hashing operations, rather than the 2^80 expected for a 160-bit hash function. In August 2005, another attack on SHA-1 was reported that would find collisions in 2^63 operations. Other theoretical weaknesses of SHA-1 have been known, and in February 2017 Google announced a collision in SHA-1. Security researchers recommend that new applications avoid these problems by using later members of the SHA family, such as SHA-2, or by using techniques such as randomized hashing that do not require collision resistance.
Many cryptographic hashes are based on the Merkle–Damgård construction. All cryptographic hashes that directly use the full output of a Merkle–Damgård construction are vulnerable to length extension attacks. This makes the MD5, SHA-1, RIPEMD-160, Whirlpool, and the SHA-256 / SHA-512 hash algorithms all vulnerable to this specific attack. SHA-3, BLAKE2, BLAKE3, and the truncated SHA-2 variants are not vulnerable to this type of attack.
Attacks on hashed passwords
A common use of hashes is to store password authentication data. Rather than store the plaintext of user passwords, a controlled access system stores the hash of each user's password in a file or database. When someone requests access, the password they submit is hashed and compared with the stored value. If the database is stolen (an all too frequent occurrence), the thief will only have the hash values, not the passwords. However, most people choose passwords in predictable ways. Lists of common passwords are widely circulated and many passwords are short enough that all possible combinations can be tested if fast hashes are used. The use of cryptographic salt prevents some attacks, such as building files of precomputed hash values, e.g. rainbow tables. But searches on the order of 100 billion tests per second are possible with high-end graphics processors, making direct attacks possible even with salt. The United States National Institute of Standards and Technology recommends storing passwords using special hashes called key derivation functions (KDFs) that have been created to slow brute force searches. Slow hashes include pbkdf2, bcrypt, scrypt, argon2, Balloon and some recent modes of Unix crypt.
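The sketch below, using Python's standard hashlib and os modules, shows in miniature why fast, unsalted hashes are dangerous for password storage and how a per-user salt defeats precomputed tables; the password list is a tiny illustrative stand-in for the large lists attackers actually use.

import hashlib, os

common_passwords = ["123456", "password", "qwerty", "letmein"]

# Unsalted fast hash: one precomputed table serves every stolen digest.
lookup = {hashlib.sha256(p.encode()).hexdigest(): p for p in common_passwords}
stolen_digest = hashlib.sha256(b"qwerty").hexdigest()
print(lookup.get(stolen_digest))        # "qwerty" is recovered instantly

# A per-user random salt invalidates precomputed tables; a slow KDF
# (such as the PBKDF2 sketch shown earlier) also makes every guess expensive.
salt = os.urandom(16)
salted_digest = hashlib.sha256(salt + b"qwerty").hexdigest()
print(salted_digest in lookup)          # False: the table was built without this salt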
For KDFs that perform multiple hashes to slow execution, NIST recommends an iteration count of 10,000 or more.
- Shai Halevi and Hugo Krawczyk, Randomized Hashing and Digital Signatures
- Al-Kuwari, Saif; Davenport, James H.; Bradford, Russell J. (2011). "Cryptographic Hash Functions: Recent Design Trends and Security Notions". Cryptology ePrint Archive. Report 2011/565.
- Schneier, Bruce. "Cryptanalysis of MD5 and SHA: Time for a New Standard". Computerworld. Archived from the original on 2016-03-16. Retrieved 2016-04-20. Much more than encryption algorithms, one-way hash functions are the workhorses of modern cryptography.
- Katz & Lindell 2014, pp. 155–157, 190, 232.
- Rogaway & Shrimpton 2004, in Sec. 5. Implications.
- Duong, Thai; Rizzo, Juliano. "Flickr's API Signature Forgery Vulnerability".
- Lyubashevsky et al. 2008, pp. 54–72.
- Perrin, Chad (December 5, 2007). "Use MD5 hashes to verify software downloads". TechRepublic. Retrieved March 2, 2013.
- Lucks, Stefan (2004). "Design Principles for Iterated Hash Functions". Cryptology ePrint Archive. Report 2004/253.
- Kelsey & Schneier 2005, pp. 474–490.
- Biham, Eli; Dunkelman, Orr (24 August 2006). A Framework for Iterative Hash Functions – HAIFA. Second NIST Cryptographic Hash Workshop. Cryptology ePrint Archive. Report 2007/278.
- Nandi & Paul 2010.
- Dobraunig, Christoph; Eichlseder, Maria; Mendel, Florian (February 2015). Security Evaluation of SHA-224, SHA-512/224, and SHA-512/256 (PDF) (Report).
- Mendel et al. 2009, p. 145: Concatenating ... is often used by implementors to "hedge bets" on hash functions.
- Harnik et al. 2005, p. 99: the concatenation of hash functions as suggested in the TLS... is guaranteed to be as secure as the candidate that remains secure.
- Joux 2004.
- Finney, Hal (August 20, 2004). "More Problems with Hash Functions". The Cryptography Mailing List. Archived from the original on April 9, 2016. Retrieved May 25, 2016.
- Hoch & Shamir 2008, pp. 616–630.
- Andrew Regenscheid, Ray Perlner, Shu-Jen Chang, John Kelsey, Mridul Nandi, Souradyuti Paul, Status Report on the First Round of the SHA-3 Cryptographic Hash Algorithm Competition
- Xiaoyun Wang, Dengguo Feng, Xuejia Lai, Hongbo Yu, Collisions for Hash Functions MD4, MD5, HAVAL-128, and RIPEMD
- Alshaikhli, Imad Fakhri; AlAhmad, Mohammad Abdulateef (2015), "Cryptographic Hash Function", Handbook of Research on Threat Detection and Countermeasures in Network Security, IGI Global, pp. 80–94, doi:10.4018/978-1-4666-6583-5.ch006, ISBN 978-1-4666-6583-5
- Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu, Finding Collisions in the Full SHA-1
- Bruce Schneier, Cryptanalysis of SHA-1 (summarizes Wang et al. results and their implications)
- Fox-Brewster, Thomas. "Google Just 'Shattered' An Old Crypto Algorithm – Here's Why That's Big For Web Security". Forbes. Retrieved 2017-02-24.
- Alexander Sotirov, Marc Stevens, Jacob Appelbaum, Arjen Lenstra, David Molnar, Dag Arne Osvik, Benne de Weger, MD5 considered harmful today: Creating a rogue CA certificate, accessed March 29, 2009.
- Swinhoe, Dan (April 17, 2020). "The 15 biggest data breaches of the 21st century". CSO Magazine.
- Goodin, Dan (2012-12-10). "25-GPU cluster cracks every standard Windows password in <6 hours". Ars Technica. Retrieved 2020-11-23.
- Claburn, Thomas (February 14, 2019). "Use an 8-char Windows NTLM password? Don't. Every single one can be cracked in under 2.5hrs". www.theregister.co.uk. Retrieved 2020-11-26.
- "Mind-blowing GPU performance". Improsec.
January 3, 2020. - Grassi Paul A. (June 2017). SP 800-63B-3 – Digital Identity Guidelines, Authentication and Lifecycle Management. NIST. doi:10.6028/NIST.SP.800-63b. - Harnik, Danny; Kilian, Joe; Naor, Moni; Reingold, Omer; Rosen, Alon (2005). "On Robust Combiners for Oblivious Transfer and Other Primitives". Advances in Cryptology – EUROCRYPT 2005. Lecture Notes in Computer Science. Vol. 3494. pp. 96–113. doi:10.1007/11426639_6. ISBN 978-3-540-25910-7. ISSN 0302-9743. - Hoch, Jonathan J.; Shamir, Adi (2008). "On the Strength of the Concatenated Hash Combiner When All the Hash Functions Are Weak". Automata, Languages and Programming. Lecture Notes in Computer Science. Vol. 5126. pp. 616–630. doi:10.1007/978-3-540-70583-3_50. ISBN 978-3-540-70582-6. ISSN 0302-9743. - Joux, Antoine (2004). "Multicollisions in Iterated Hash Functions. Application to Cascaded Constructions". Advances in Cryptology – CRYPTO 2004. Lecture Notes in Computer Science. Vol. 3152. Berlin, Heidelberg: Springer Berlin Heidelberg. pp. 306–316. doi:10.1007/978-3-540-28628-8_19. ISBN 978-3-540-22668-0. ISSN 0302-9743. - Kelsey, John; Schneier, Bruce (2005). "Second Preimages on n-Bit Hash Functions for Much Less than 2 n Work". Advances in Cryptology – EUROCRYPT 2005. Lecture Notes in Computer Science. Vol. 3494. pp. 474–490. doi:10.1007/11426639_28. ISBN 978-3-540-25910-7. ISSN 0302-9743. - Katz, Jonathan; Lindell, Yehuda (2014). Introduction to Modern Cryptography (2nd ed.). CRC Press. ISBN 978-1-4665-7026-9. - Lyubashevsky, Vadim; Micciancio, Daniele; Peikert, Chris; Rosen, Alon (2008). "SWIFFT: A Modest Proposal for FFT Hashing". Fast Software Encryption. Lecture Notes in Computer Science. Vol. 5086. pp. 54–72. doi:10.1007/978-3-540-71039-4_4. ISBN 978-3-540-71038-7. ISSN 0302-9743. - Mendel, Florian; Rechberger, Christian; Schläffer, Martin (2009). "MD5 Is Weaker Than Weak: Attacks on Concatenated Combiners". Advances in Cryptology – ASIACRYPT 2009. Lecture Notes in Computer Science. Vol. 5912. pp. 144–161. doi:10.1007/978-3-642-10366-7_9. ISBN 978-3-642-10365-0. ISSN 0302-9743. - Nandi, Mridul; Paul, Souradyuti (2010). "Speeding Up the Wide-Pipe: Secure and Fast Hashing". Progress in Cryptology - INDOCRYPT 2010. Lecture Notes in Computer Science. Vol. 6498. pp. 144–162. doi:10.1007/978-3-642-17401-8_12. ISBN 978-3-642-17400-1. ISSN 0302-9743. - Rogaway, P.; Shrimpton, T. (2004). "Cryptographic Hash-Function Basics: Definitions, Implications, and Separations for Preimage Resistance, Second-Preimage Resistance, and Collision Resistance". CiteSeerX 10.1.1.3.6200. - Paar, Christof; Pelzl, Jan (2009). "11: Hash Functions". Understanding Cryptography, A Textbook for Students and Practitioners. Springer. Archived from the original on 2012-12-08. (companion web site contains online cryptography course that covers hash functions) - "The ECRYPT Hash Function Website". - Buldas, A. (2011). "Series of mini-lectures about cryptographic hash functions". Archived from the original on 2012-12-06. - Open source python based application with GUI used to verify downloads.
Assessment is the process where information on each student's performance is gathered and analyzed to identify what students know, understand, can do and feel at different stages in the learning process. Assessment is integral to all teaching and learning, and the 3C curriculum uses thoughtful and effective assessment processes by guiding students through the six essential elements of learning:
- the displaying of Character,
- the understanding of Concepts,
- the acquisition of Knowledge,
- the mastering of Competence,
- the development of Content,
- and the decision to take responsible Action.
Everyone concerned with assessment – students, teachers, parents, administrators, and board members – must have a clear understanding of the reasons for the assessment, what is being assessed, the criteria for success, and the method by which the assessment is made. The 3C describes the taught curriculum as the written curriculum in action. Using the written curriculum in collaboration with colleagues and students, the teacher generates questions which guide structured inquiry and instruction. These questions address the eight key concepts which help lead to productive lines of inquiry. Assessment focuses on the quality of student learning during the process of inquiry as well as instruction, and on the quality of the products of that learning. Assessment is, therefore, integral to the taught curriculum. It is the means by which we analyze student learning and the effectiveness of our teaching, and it acts as a foundation on which we can base our future planning and practice. It is central to our goal of guiding the student through the learning process, from being a novice to becoming an expert. The assessment component in the school's curriculum can itself be subdivided into three closely related areas:
- Assessing – how we discover what the students know and have learned.
- Recording – how we choose to collect and analyze data.
- Reporting – how we choose to communicate information.
Researchers have now demonstrated the ability to create amorphous metal, or metallic glass, alloys using three-dimensional (3-D) printing technology, opening the door to a variety of applications - such as more efficient electric motors, better wear-resistant materials, higher strength materials, and lighter weight structures. "Metallic glasses lack the crystalline structures of most metals - the amorphous structure results in exceptionally desirable properties," says Zaynab Mahbooba, first author of a paper on the work and a Ph.D. student in North Carolina State University's Department of Materials Science and Engineering. Unfortunately, making metallic glass requires rapid cooling to prevent the crystalline structure from forming. Historically, that meant researchers could only cast metallic glasses into small thicknesses. For example, amorphous iron alloys could be cast no more than a few millimeters thick. That size limitation is called an alloy's critical casting thickness. "The idea of using additive manufacturing, or 3-D printing, to produce metallic glass on scales larger than the critical casting thickness has been around for more than a decade," Mahbooba says. "But this is the first published work demonstrating that we can actually do it. We were able to produce an amorphous iron alloy on a scale 15 times larger than its critical casting thickness." The technique works by applying a laser to a layer of metal powder, melting the powder into a solid layer that is only 20 microns thick. The "build platform" then descends 20 microns, more powder is spread onto the surface, and the process repeats itself. Because the alloy is formed a little at a time, it cools quickly - retaining its amorphous qualities. However, the end result is a solid, metallic glass object - not an object made of laminated, discrete layers of the alloy. "This is a proof-of-concept demonstrating that we can do this," says Ola Harrysson, corresponding author of the paper and Edward P. Fitts Distinguished Professor of Industrial Systems and Engineering at NC State. "And there is no reason this technique could not be used to produce any amorphous alloy," Harrysson says. "One of the limiting factors at this point is going to be producing or obtaining metal powders of whatever alloy composition you are looking for. "For example, we know that some metallic glasses have demonstrated enormous potential for use in electric motors, reducing waste heat and converting more power from electromagnetic fields into electricity." "It will take some trial and error to find the alloy compositions that have the best combination of properties for any given application," Mahbooba says. "For instance, you want to make sure you not only have the desirable electromagnetic properties, but that the alloy isn't too brittle for practical use." "And because we're talking about additive manufacturing, we can produce these metallic glasses in a variety of complex geometries - which may also contribute to their usefulness in various applications," Harrysson says.
Mitral valve disease
The mitral valve has two leaflets: the anterior leaflet and the posterior leaflet. Together, they separate the left atrium from the left ventricle. During systole the valve closes, which means blood cannot do anything but be ejected out of the aortic valve and into circulation. If the mitral valve doesn't completely shut, blood can leak back into the left atrium; this is called mitral valve regurgitation. During diastole, the mitral valve opens and lets blood fill into the ventricle. If the mitral valve doesn't open enough, it gets harder to fill the left ventricle; this is called mitral valve stenosis. Let's start with mitral valve regurgitation. The leading cause of mitral valve regurgitation, and the most common of all valvular conditions, is mitral valve prolapse. When the left ventricle contracts during systole, a ton of pressure is generated so that the blood can be pumped out of the aortic valve; therefore, a lot of pressure pushes on that closed mitral valve. Normally, the papillary muscles and connective tissue, called chordae tendineae or heart strings, keep the valve from prolapsing, or falling back into the atrium. With mitral valve prolapse, the connective tissue of the leaflets and surrounding tissue are weakened; this is called myxomatous degeneration. Why this happens isn't well understood, but it is sometimes associated with connective tissue disorders, such as Marfan syndrome and Ehlers-Danlos syndrome. Myxomatous degeneration results in a larger valve leaflet area and elongation of the chordae tendineae, which can sometimes rupture; this rupture typically happens to the chordae tendineae on the posterior leaflet, and can cause the posterior leaflet to fold up into the left atrium. The click is a result of the leaflet folding into the atrium and being suddenly stopped by the chordae tendineae. Although mitral valve prolapse doesn't always cause mitral regurgitation, it often does. If the leaflets don't make a perfect seal, a little bit of blood leaks backward from the left ventricle into the left atrium, causing a murmur. The mitral valve prolapse murmur is somewhat unique in that when patients squat down, the click comes later and the murmur is shorter, but when they stand or do a valsalva maneuver, the click comes sooner and the murmur lasts longer. This happens because squatting increases venous return, which fills the left ventricle with slightly more blood; this means that the left ventricle gets just a little bit larger. Therefore, the larger leaflets have more space to hang out, and as the ventricle contracts and gets smaller, it takes just a little longer for the leaflet to get forced into the atrium. Standing, on the other hand, reduces venous return, meaning there's a little less blood in the ventricle and, by extension, a little less room to hang out; thus, the leaflet gets forced out earlier during contraction. The other heart murmur that follows this pattern is the one present in hypertrophic cardiomyopathy.
So, in addition to mitral valve prolapse, another cause of mitral regurgitation is damage to the papillary muscles from a heart attack. If these papillary muscles die, they can't anchor the chordae tendineae, and the mitral valve can then flop back and allow blood to go from the left ventricle to the left atrium.
How parents can assist in preventing bullying
The following is from the website kidspot.com.au.
How often do we hear that behaviour starts at home? In the case of bullying and society's attitudes to it, these often stem from the home environment, so the parental role in preventing and reducing bullying is vital. Some of the actions parents can take, whether bullying has affected your or your child's life or not, are:
Tell your children regularly how much you disapprove of bullying and why. Tell them you don't want them to take part in mistreating another student at any level, however small. Students who come from families that oppose bullying accept that bullying is wrong and are less likely to bully others, because they know their parents would disapprove. Do not allow any type of bullying at home and deal firmly with any attempts by siblings to bully one another. Encourage your child to see the positive side of other students rather than expressing contempt and superiority.
Model and encourage respect
Model and encourage respect for others as well as behaviours and values such as compassion, cooperation, friendliness and acceptance of difference.
Explain the rights of others
Emphasise seeing things from another's point of view and the rights of others not to be mistreated. Report all incidents of bullying that you are aware of, not just incidents that happen to your child. Don't continue any child's silent nightmare by saying nothing. Develop protective behaviours and resilient social skills in your child, such as speaking assertively, negotiating, expressing their own opinion, using a confident voice and using firm eye contact. Practise these regularly using dinner conversations and social encounters with acquaintances and new people.
Respect and confidence are key
Talk about respect and help children distinguish between people who care about their wellbeing and those who don't. Children require the confidence and skills to avoid people who don't treat them with respect.
Help build friendships
Help your child build and maintain caring and genuine friendships. This may mean taking an active role in encouraging social activities such as after-school play and sleepovers.
Deal with fear and anger
Assist them to develop effective ways of dealing with fear and anger instead of internalising their feelings, taking them out on others or losing face in front of the peer group by allowing them to spill over.
This guide presents a variety of artworks, from the 17th century to the present, that highlight the presence and experiences of Black communities across the Atlantic world. Use the collections in the virtual gallery below to engage your students in conversation about the many narratives of everyday life, enslavement, and resistance that have been told through art. Lesson plans are provided to extend these conversations and help students consider the many and continuing legacies of the transatlantic slave trade. This Teacher’s Guide offers a collection of lessons and resources for K-12 social studies, literature, and arts classrooms that center around the experiences, achievements, and perspectives of Asian Americans and Pacific Islanders across U.S. history. Archival visits, whether in person or online, are great additions to any curriculum in the humanities. Primary sources can be the cornerstone of lessons or activities involving any aspect of history, ancient or modern. This Teachers Guide is designed to help educators plan, execute, and follow up on an encounter with sources housed in a variety of institutions, from libraries and museums to historical societies and state archives to make learning come to life and teach students the value of preservation and conservation in the humanities. The National Endowment for the Humanities has compiled a collection of digital resources for K-12 and higher education instructors who teach in an online setting. The resources included in this Teacher's Guide range from videos and podcasts to digitized primary sources and interactive activities and games that have received funding from the NEH, as well as resources for online instruction. This Teacher's Guide compiles EDSITEment resources that support the NEH's "A More Perfect Union" initiative, which celebrates the 250th anniversary of the founding of the United States. Topics include literature, history, civics, art, and culture. Our Teacher's Guide offers a collection of lessons and resources for K-12 social studies, literature, and arts classrooms that center around the achievements, perspectives, and experiences of African Americans across U.S. history. This Teacher's Guide will introduce you to the cultures and explore the histories of some groups within the over 5 million people who identify as American Indian in the United States, with resources designed for integration across humanities curricula and classrooms throughout the school year. Since 1988, the U.S. Government has set aside the period from September 15 to October 15 as National Hispanic Heritage Month to honor the many contributions Hispanic Americans have made and continue to make to the United States of America. Our Teacher's Guide brings together resources created during NEH Summer Seminars and Institutes, lesson plans for K-12 classrooms, and think pieces on events and experiences across Hispanic history and heritage.
These two characters read Hua Xia, another name for China
People often have the impression that Chinese characters are extremely difficult to learn. In fact, if you were to attempt to learn how to write Chinese characters, you would find that they are not nearly as difficult as you may have imagined. And they certainly qualify as forming one of the most fascinating, beautiful, logical and scientifically constructed writing systems in the world. Each stroke has its own special significance. If you are familiar with the principles governing the composition of Chinese characters, you will find it very easy to remember even the most complicated-looking character and never miss a stroke.
The earliest known examples of Chinese written characters in their developed form are carved into tortoise shells and ox bones. The majority of these characters are pictographs. Archaeologists and epigraphers of various countries have learned that most early writing systems went through a pictographic stage, as did the Egyptian hieroglyphics. Most writing systems, however, eventually developed a phonetic alphabet to represent the sounds of spoken language rather than visual images perceived in the physical world. Chinese is the only major writing system of the world that continued its pictograph-based development without interruption and that is still in general modern use. But not all Chinese characters are simply impressionistic sketches of concrete objects. Chinese characters incorporate meaning and sound as well as visual image into a coherent whole.
In traditional etymology, Chinese characters are classified into six different methods of character composition and use; these six categories are called the Liu Shu. The Liu Shu categories are:
- (1) pictographs xiang xing;
- (2) ideographs ji shi;
- (3) compound ideographs hui yi;
- (4) compounds with both phonetic and meaning elements xing sheng;
- (5) characters which are assigned a new written form to better reflect a changed pronunciation quan qu;
- (6) characters used to represent a homophone or near-homophone that are unrelated in meaning to the new word they represent jia jie.
There is a theoretical total of almost 50,000 written Chinese characters; only about 5,000 of these are frequently used. Among these 5,000, if you learn about 200 key words that are most often repeated in daily use, then you can say you know Chinese. Really learning to read and write Chinese is not nearly so formidable a task after all.
Because there has long been a single method for writing Chinese and a common literary and cultural history, a tradition has grown up of referring to the eight main varieties of speech in China as 'dialects'. But in fact, they are as different from each other (mainly in pronunciation and vocabulary) as French or Spanish is from Italian, the dialects of the southeast being linguistically the furthest apart. The mutual unintelligibility of the varieties is the main ground for referring to them as separate languages. However, it must also be recognized that each variety consists of a large number of dialects, many of which may themselves be referred to as languages. The boundaries between one so-called language and the next are not always easy to define.
The Chinese refer to themselves and their language, in any of the forms below, as Han - a name which derives from the Han dynasty (202 BC-AD 220). Han Chinese is thus to be distinguished from the non-Han minority languages used in China.
There are over 50 of these languages (such as Tibetan, Russian, Uighur, Kazakh, Mongolian, and Korean), spoken by around 6% of the population. All Han Chinese and some non-Han minority Chinese write and read the same Chinese, unlike the situation with dialects in China.
The assumption of old age concerning Titan, and the predictions that proceeded from it, have not fared well against the facts: they were wrong about a global ocean; they were wrong about huge lakes of liquid ethane; and they were enormously surprised to discover sand dunes on Titan. But what about geology? Scientists are still gathering data from this amazing moon, and once again it doesn't look good for old-age assumptions. They hoped to find volcanoes, but a new paper concludes that Titan gets its geology from the outside instead of from the inside. If this is found to be true, it implies that the surface features were created by wind, impacts and weather rather than active geology. The hopeful cryovolcano announced last year was challenged by Moore and Pappalardo, the authors of the new paper. Could the evidence be pointing to a geologically dead world on Titan? Planetary scientists have previously had an age conundrum with Titan. They know that the methane in the atmosphere is destroyed and converted to other compounds in a one-way process. This puts limits on the age of the atmosphere that are far less than the 4.5-billion-year age assumed for the solar system. This is why they hoped to find a reservoir of methane under the surface which would erupt in cryovolcanoes to replenish the atmosphere. Another paper from the same source analyzed Titan's equatorial sand dunes. These dunes, covering about 12.5% of the surface, were a surprise when discovered, because scientists were expecting large lakes or even a global ocean. Scientists also doubted that the winds were strong enough at the surface to move particles around. Dunes also exist on Mars, Venus, and of course, Earth, but on Titan the average 300-foot-high dunes are nearly 1.9 miles apart, and get farther apart at higher latitudes. Unlike the silica sands on Earth, the particles in Titan's dunes are thought to be composed of hydrocarbon dust and ice precipitated out of the atmosphere. Altogether, they constitute the largest known reservoir of organics on Titan, because the combined area of the dunes is about as large as the United States. The dunes pose problems for theories of Titan's age. For one, they are among Titan's most youthful features; for another, they indicate a lack of persistent liquid at Titan's equator, even though liquid ethane should have been raining onto the surface throughout Titan's history. The presence of dunes implies that much of Titan is extremely dry. If spread out evenly over the globe, the particles in this largest reservoir of organics (larger than all the observed lakes combined) would fail to cover Titan with the predicted accumulation of hydrocarbons that must have been produced over the assumed 4.5-billion-year age of the moon.
A spatial RDBMS is an RDBMS that can process spatial data. Popular RDBMSs, such as Oracle, offer their own spatial features or add-ons so that spatial data can be processed. Since each DBMS has a different architecture, it is difficult to show how it operates through a simple diagram, but the concept of a spatial DBMS can still be explained at a high level. A spatial RDBMS allows the use of ordinary SQL data types, such as int and varchar, as well as spatial data types, such as Point, LineString and Polygon, for geometric calculations like distance or relationships between shapes. An RDBMS uses the B-Tree family or hash functions to process indexes (see CUBRID Query Tuning Techniques for more explanation), essentially by comparing the magnitude of column values or detecting duplicates among them. In other words, only one-dimensional data can be processed. Since spatial data types are two- or three-dimensional:
- the R-Tree family or Quad Trees can be used in a spatial RDBMS to process such data;
- or the two- or three-dimensional data can be transformed into one-dimensional data, after which a B-Tree can be used.
Many benefits follow if an existing RDBMS is extended to process spatial data. First, even when conducting geo-spatial tasks, there will be many occasions when basic data types, such as numbers or characters, are used. Another benefit is that there is no burden of additional training, since SQL is already a proven way to store and query the data. RDBMS is not the only database management system available, and likewise, a spatial RDBMS is not the only spatial database management system available. Many databases, such as MongoDB, the document-oriented database, and search engines such as Lucene or Solr, provide spatial data processing features. However, these solutions offer fewer features and do not provide high-precision calculations. To understand what high-precision calculations mean, we will take a closer look at the features a spatial DBMS provides.
OpenGIS is a standard for processing spatial data. The OGC (Open Geospatial Consortium), a consortium made up of 416 governmental organizations, research centers and companies from all over the world (as of 2011), maintains this standard. OpenGIS (Open Geodata Interoperability Specification) is a registered trademark of the OGC and is a standard for geospatial data processing (this document does not differentiate spatial and geospatial). Out of the many standards in OpenGIS, the one needed to understand spatial DBMSs is Simple Feature. As mentioned above, Simple Feature is the standard for processing geospatial data. The Geometry Object Model (spatial data types), spatial operations and coordinate systems (two- and three-dimensional) are covered by the Simple Feature standard. The Geometry Object Model comprises figures such as Point, LineString and Polygon, and these geometry types can exist in two or three dimensions. The domain covered by Simple Feature is Euclidean space, so spatial operations such as intersects and touches are all defined in Euclidean geometry. In the Simple Feature specification document, the Geometry Object Model is treated like a class hierarchy in an object-oriented language and is described with UML: in the Geometry Object Model class diagram that summarizes the specification, Point, LineString and Polygon all inherit from the Geometry base class, and the 'query' and 'analysis' groups of methods are the spatial operations. The specification describes each operation with a set-theoretic formula: the Within operation, for example, is defined in terms of the interior function I() and the exterior function E() of the two geometries.
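Concretely, the Simple Feature geometry types and a couple of their operations can be exercised from Python with the third-party shapely library (an illustrative choice, not something named in the text); the coordinates below are arbitrary.

from shapely.geometry import Point, LineString, Polygon

station = Point(127.02, 37.49)
road = LineString([(127.01, 37.50), (127.05, 37.50)])
district = Polygon([(127.0, 37.4), (127.1, 37.4), (127.1, 37.6), (127.0, 37.6)])

print(station.distance(road))        # planar (Euclidean) distance, in coordinate units
print(district.contains(station))    # True: the point lies inside the polygon
print(road.within(district))         # True: the whole road segment lies inside the district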
The within() calculation shown in the figure above can be expressed with a formula in which the I() function denotes the interior of a geometry and the E() function its exterior: roughly, a is within b when the interior of a intersects the interior of b and no part of a lies in the exterior of b. Operations such as buffer() are also used frequently. When a geometry is given as an argument, buffer() returns a new geometry whose boundary lies at a given distance around the original; the buffer of a Point is a circle. If the center line of a road is represented by a LineString, buffer() can be used to model the road while taking its width into consideration, and buildings near a certain road can then be identified using touches(). The Simple Feature described so far is the "Simple Feature Common Architecture". The other Simple Feature standards are Simple Feature CORBA, Simple Feature OLE/COM and Simple Feature SQL. Naturally, the spatial RDBMS is closely related to Simple Feature SQL. Simple Feature SQL includes the Simple Feature Common Architecture and, building on ANSI SQL-92, defines the standards a spatial RDBMS must meet: how to expose the Geometry Object Model as DBMS data types, how to expose the spatial operations as SQL functions, and so on. It also specifies the basic tables a spatial DBMS must have; SPATIAL_REF_SYS, a table that contains the spatial reference IDs, is a good example. Well-known open source libraries that implement the Simple Feature specification are the JTS Topology Suite, which is written in Java, and GEOS, a C++ port of JTS. GEOS is used in PostGIS (a package added on to PostgreSQL to process spatial data).
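The geometry model and predicates described above can be tried outside any DBMS. The following is a minimal sketch, assuming the Shapely package (a Python binding to the GEOS library mentioned above) is installed; the coordinates and the 5-unit road width are arbitrary choices for illustration, not values from the text.

```python
from shapely.geometry import Point, LineString, Polygon

# Geometry Object Model types from the Simple Feature specification.
road_centerline = LineString([(0, 0), (100, 0)])
building = Polygon([(10, 3), (20, 3), (20, 12), (10, 12)])
lamp_post = Point(50, 2)

# buffer(): a Point buffered by a distance is (approximately) a circle;
# buffering the centerline by half the road width approximates the road surface.
road_surface = road_centerline.buffer(5.0)

# Spatial predicates defined by Simple Feature, evaluated by GEOS.
print(lamp_post.within(road_surface))      # True: the lamp post lies inside the buffer
print(building.intersects(road_surface))   # True: the building overlaps the road surface
print(building.within(road_surface))       # False: it is not entirely inside it
print(road_centerline.crosses(building))   # False: the centerline itself stays clear
```

A spatial RDBMS exposes the same predicates as SQL functions; touches() works the same way, returning true when two geometries share boundary points but no interior points.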
1. The Earth
The Earth is one of the eight planets in the Solar System. In the olden days, some people thought that the Earth was flat, while others imagined it was square. Nowadays, we know that its shape is similar to an orange, that is, a sphere flattened at the poles. This form is called the geoid. In the picture above, a vessel sails away from a coast where a man stands beside a lighthouse. At the beginning the man disappears from view, and later the lighthouse can no longer be seen either. This shows that the Earth's surface is curved.
2. Lines and circles of the globe
To locate a particular point on the Earth's surface, a number of imaginary points, lines and circles are drawn on the globe. These are:
- The axis: the line around which the Earth spins. It runs from the North Pole to the South Pole and is tilted with respect to the plane of the Earth's orbit around the Sun.
- The Equator: the great circle perpendicular to the Earth's axis. It divides the Earth into two equal parts called hemispheres: the northern or boreal hemisphere and the southern or austral hemisphere.
- The meridians: semi-circles passing through the poles. They are vertical lines.
- The parallels: circles parallel to the Equator. They have different sizes: the biggest are close to the Equator and the smallest are close to the Poles. The four most important parallels are the Tropic of Cancer and the Arctic Circle in the northern hemisphere, and the Tropic of Capricorn and the Antarctic Circle in the southern hemisphere.
3. Answer with one of the imaginary lines: axis, Equator, meridian or parallel:
A neurone consists of a cell body (with a nucleus and cytoplasm), dendrites which carry electrical impulses to the cell, and a long axon which carries the impulses away from the cell. The axon of one neurone and the dendrites of the next neurone do not actually touch. The gap between neurones is called the synapse. There are 3 processes involved in nerve transmission: Generation of a nerve impulse (action potential) of a sensory neurone occurs as a result of a stimulus such as light, a particular chemical or stretching of a cell membrane by sound. Conduction of an impulse along a neurone occurs from the dendrites to the cell body to the axon. Transmission of a signal to another neurone across a synapse - A chemical transmitter substance is released across the synapse to allow the electrical impulses to pass from one neurone to the next. This substance causes the next neurone to be electrically stimulated and keeps the signal going along a nerve. The Central Nervous System The Central Nervous System comprises the parts that are enclosed and protected by bone - the Brain and the Spinal Cord. The Brain is composed of millions of interconnected neurones with short axons. It is protected within 3 membranes or meninges as well as the skull or cranium. The Spinal Cord is a bundle of nerve fibres made of many neurones. It is protected by the 3 meninges also as well as the vertebral column. Cerebro-spinal Fluid lies inside the meninges and acts as a buffer against hard knocks or jolts. 3 Parts of The Brain Cerebrum (Forebrain) - the largest section of the brain, which lets us think, interpret sensory messages, carry out voluntary muscle movements, remember and have consciousness Cerebellum (Midbrain) - helps us to keep our balance, and have repetitive muscle control Medulla Oblongata (Hindbrain or Brain Stem) - control the vital functions of heartbeat, breathing and blood pressure The hypothalamus is a small cluster of neurones deep within the brain. It plays a central role by regulating many vital processes (e.g. regulating body temperature, heart rate, water balance and blood pressure, carbohydrate and fat metabolism, appetite, sleep and sex drive). It also links the nervous system with the endocrine system, because it controls the pituitary gland which is the master gland of the endocrine system. The Peripheral Nervous System This is the part of the nervous system that does not include the brain and the spinal cord. There are 2 types of nerves - sensory and motor nerves. Sensory Nerves carry information about the surroundings from the sense receptors in the skin, eyes, ears, nose and tongue, along the spinal cord to the brain to be interpreted. Motor Nerves carry messages from the brain through the spinal cord to the muscles and other organs to produce an action. Some of the nerves of the peripheral nervous system are under voluntary control (e.g. controlling motor nerves and muscles when writing). Other nerves are involuntary or uncontrolled (e.g. regulating heartbeat). The Autonomic Nervous System The autonomic nervous system is not under voluntary control. It consists only of motor nerves transmitting to major organs such as the heart, lungs, digestive organs and skin. It is a double system with 2 parts that work together - Sympathetic Nervous System and Parasympathetic Nervous System. 
ACTION OF SYMPATHETIC NERVES | ACTION OF PARASYMPATHETIC NERVES
Strengthens and speeds up heartbeat | Weakens and slows heartbeat
Constricts arteries and raises blood pressure | Dilates arteries and lowers blood pressure
Slows peristalsis and decreases activity | Speeds up peristalsis and increases activity
Dilates passages, makes for easier breathing | Constricts passages
Causes erection of hair | Causes hair to lie flat
A Reflex Arc
A reflex arc involves transmission of a nervous impulse or message from sensory receptors to the spinal cord and back to muscles. Later, the message also reaches the brain for interpretation. Example: touching your hand on a hot stove.
The 5 main senses are sight, hearing, smell, taste and touch. The sense of touch includes touch, pressure, heat, cold and pain. A stimulus is a factor in the surroundings that causes the sense receptors to function. For the sense of touch, the receptors lie in the skin, joints and internal organs, and the stimulus is an internal or external force or temperature; for taste, the receptors are the taste buds on the tongue.
The Path of Light through the Eye
Functions of the Parts of the Eye
Cornea - thin transparent layer at the front of the eye
Aqueous Humour - watery substance that fills the cavity between cornea and lens
Pupil - a hole that allows light to pass through to the lens
Iris - a coloured circular muscle that contracts or relaxes to dilate or constrict the pupil
Lens - a transparent elastic ball that focuses light rays onto the retina
Ciliary Ligament - attached to the lens, and contracts or relaxes to adjust the lens
Ciliary Body - attaches the ciliary ligament to the eyeball, and produces both aqueous and vitreous humours
Vitreous Humour - a more viscous fluid that fills the cavity behind the lens
Retina - a hemispherical layer of light-sensitive cells (rods and cones) at the back of the eye
Fovea - a small area of the retina which is directly in line with the centre of the cornea and the lens, and is concentrated with the colour-sensitive cones
Optic Nerve - the nerve which connects the retina with the vision area of the brain
'Blind Spot' - the place on the retina where the optic nerve attaches; has no light-sensitive cones and rods
Choroid Coat - sheet of cells next to the retina with a black pigment to absorb extra light, and blood vessels to nourish the retina
Sclera - tough outer coat of the eyeball
Light-Sensitive Sense Receptors - Rods and Cones
Rods are the more numerous cells that detect shades of black, grey and white light. They are more prevalent in the periphery of the eye. Cones are cells that detect coloured light. They are more prevalent in the centre of the retina, particularly the fovea. There are no rods or cones in the 'Blind Spot' where the optic nerve meets the retina.
Focus of the Lens - When looking at distant objects, the lens is long and thin in shape. When looking at close objects, the lens is short and wide.
Binocular Vision - Two eyes are important in judging distance and depth.
Pupil Size - In bright light, the iris muscle relaxes and the pupil decreases in size so that less light enters the eye. In dim light, the iris muscle contracts and the pupil increases in size to allow more light to enter.
Short-sightedness (Myopia) - This is a condition where the person can see close objects well, but not distant objects. Light focuses in front of the retina. It is corrected with concave lenses in spectacles.
Long-sightedness (Hyperopia) - This is a condition where the person can see distant objects well, but not close objects. Light focuses behind the retina. It is corrected with convex lenses in spectacles.
Astigmatism - This is a condition where the cornea is curved unevenly, so that different light rays focus in different places. It is corrected with spectacles. Did You Know That...? The average person can see for a distance of 2 million light-years. When we look at something very colourful, we must look directly at it. This is because the colour-sensitive cones in our retinas are concentrated in the fovea which is directly in line with the front of the eye. When we are in the dark, we see shades of black white and grey but not colour. This is because our colour-sensitive cones require a higher stimulus of light to function than do rods. Path of Sound through the Ear Pinna (OUTER EAR) Auditory Canal (OUTER EAR) Eardrum (OUTER EAR) 3 Earbones or Ossicles (MIDDLE EAR) Oval Window (MIDDLE EAR) Cochlea (INNER EAR) Functions of the Parts of the Ear Pinna - funnel-shaped visible flap that directs sound waves into the auditory canal Auditory Canal - canal that carries the sound waves to the eardrum Eardrum - a thin membrane which is vibrated by sound waves 3 Earbones or Ossicles (Hammer, Anvil and Stirrup) - These are the smallest bones in the body. The eardrum vibrates, and this vibrates the hammer, the anvil and the stirrup one after another. The stirrup then vibrates the Oval Window. Cochlea - This is a spirally coiled tube containing fluid and the actual organ of hearing, the Organ of Corti. Each Organ of Corti contains thousands of hairs that are vibrated by the sound waves. The hairs then initiate nervous impulses in the Auditory Nerve which is connected to the auditory areas on the sides of the brain. Pitch and Amplitude of Sound High-pitched sound causes intense stimulation of the hairs in the cochlea, whereas low-pitched sound causes less stimulation. Loud sounds stimulate a greater number of hairs in the cochlea, than do quiet sounds. Other Features of the Ear Semi-Circular Canals - These are 3 fluid-filled canals that detect the position of the head in 3 dimensions. Impulses are sent through via the auditory nerve to the brain. Eustachian Tube - There is a tube that connects each middle ear to the pharynx to equalise air pressure within the middle ear.
Being alive in this century presents a puzzling dilemma: even minute actions of the human population can lead to the extermination of species, and even of entire ecosystems. But similarly small actions could help preserve these treasures for us and our children, if only we took them.
Humans have a surprisingly lopsided perception of planet Earth. Nearly 71% of its surface is covered in water, and a view from space of the Pacific side reveals hardly any land at all. Yet we have no ingrained understanding of the ocean's processes: as a terrestrial species, we are familiar with the structures of land, because it supplies us with food and other resources.
The oceans harbor a peculiar little molecule: water. Life on Earth probably originated in the oceans because of the unique properties of this liquid marvel. With few exceptions, water is the only naturally occurring fluid on this planet, and it defines the physical limits and the distribution of Earth's flora and fauna. It is the fundamental element of life as we know it. Water not only provides habitats for an immense diversity of life forms, but also a base for essential chemical processes.
Present condition of the oceans
Today, the oceans are in a perilous state. After centuries of human activities, commercial fish species are facing extinction, ecosystems deteriorate continuously and ocean governance is still highly fragmented. In recent years, tens of thousands of kilometers of coral reefs have bleached as a direct result of increased sea surface temperatures. Even though the capability of humans to massively influence marine systems should have been known since the late 1700s, when the Steller's sea cow was hunted to extinction, marine conservation did not become an issue until the 1950s.
Ending the tragedy of the commons
Our response to the loss of the marine environment has been slow, reactive and intermittent. Most of the ocean is still viewed as a global commons resource, so there is almost no incentive for any one nation to initiate costly but necessary changes. Early efforts were, and still are, focused on protecting single species (e.g. whales, sharks) or on singling out a particular threat (e.g. oil, mercury). Fortunately, more and more management efforts are focusing on ecosystem-based approaches, integrated coastal zone management and marine protected areas, which promise to integrate human activities, sustainable development and ecological systems (1).
Importance of Oceans
Globally, the oceans are the:
- main reservoir of water.
71% of planet Earth is covered by ocean; only 0.5% of its water is freshwater
- major habitat for organisms; they make up over 99% of the habitable volume of Earth
- main reservoir of oxygen
- probably the main producer of oxygen (phytoplankton)
- main thermal reservoir and regulator
- prime reservoir of carbon dioxide (CO2)
- home for a stunning diversity of life, from bacteriophages to whales
- the only place you'll find coral reefs (1)
Percentage of the world's coral reefs threatened: 75%
Percentage of the world's fisheries in need of restoration: 85% (3)
Importance of Coral Reefs
Coral reefs are globally essential, because they
- cover less than 0.1% of the oceans, but are home to 25% of marine biodiversity
- generate up to US$ 29.8 billion of products and services
- provide sustenance and livelihoods for 500 million people
- are the main primary producers in nutrient-deprived regions
- are the source of many pharmaceuticals
- protect countless kilometers of coastline
- are spawning and nursing grounds for commercial fish species (4)
NOT ALL HOPE IS GONE
Just keep scrolling…
Marine species listed as endangered (5)
Global Marine Conservation Efforts
While your news feed and inbox are probably overflowing with doomsday messages and donation pleas, there is good news. Change is happening, and awareness of environmental issues is constantly rising. In 2014, 1.27 million square kilometers were declared Marine Protected Area in the Pacific, and more than 100 MPAs covering a combined 15.7 million hectares have been declared in Indonesia, just to name a few. The current scientific consensus (2) states that ecosystem- or biodiversity-based approaches are our most promising option to halt the present decline of marine systems. Single-species fish stock protection is primarily based on financial interests and, at best, only addresses symptoms, never the cause. Integrating the needs of the people depending directly on the oceans with the 'needs' of the ecosystem itself is currently the best way to a sustainable coexistence.
Marine Conservation in Thailand
The people of Thailand have established 28 Marine Protected Areas, encompassing biodiversity hot-spots in the Andaman Sea and the Gulf of Thailand (5). In 1934, the Department for Marine and Coastal Resources was established in Bangkok to develop plans, policies and techniques to tackle the current challenges and provide a sustainable future for the Thai people. One of its main objectives is collaboration with international partners, which is why we're here.
JOIN US on Koh Phangan
CORE sea's field station on Koh Phangan (Ko Pha Ngan) opened its doors in 2011. Together with our partners from the DMCR, Thai and international universities, local NGOs, and our volunteers, we have surveyed countless kilometers of reef and 'peculiar pinnacles'. We developed a concept to increase the effectiveness, abundance and size of Thailand's MPAs that we currently call 'Community Led Micro Marine Protected Areas', which is a bit of a mouthful, so we decided to call the project 'Not All Hope Is Gone'. You can read all about it here. We're small, and we'll need a lot of help to make this happen.
JOIN US IN OUR MISSION
The oceans are the planet's last great living wilderness, man's only remaining frontier on Earth, and perhaps his last chance to prove himself a rational species ― John L.
Culliney
(1) Marine Conservation Ecology, John Roff and Mark Zacharias, Earthscan, 2011
(2) lme.edc.uri.edu/index.php?option=com_jresearch&view=publication&task=show&id=46&Itemid=65
(3) worldwildlife.org/threats/overfishing
(4) coralreef.noaa.gov/aboutcorals/values/
Scientists have claimed a potential breakthrough in the search for a cure for tinnitus. Tinnitus is a common condition in which a person hears noises in their ears; it affects around 10 per cent of people. Research findings suggest that treating inflammation in the sound-processing region of the brain could lead to a treatment or even a cure for tinnitus and other hearing loss-related disorders. In mice with noise-induced hearing loss, the study showed that inflammation in the sound-processing region of the brain controlled the ringing in the ears. Lead author Professor Shaowen Bao, of the University of Arizona, said: "Hearing loss is a widespread condition that affects approximately 500 million individuals and is a major risk factor for tinnitus – the perception of noise or ringing in the ears." The results indicate that noise-induced hearing loss is associated with elevated levels of molecules called proinflammatory cytokines and the activation of non-neuronal cells called microglia – two defining features of neuroinflammatory responses – in the primary auditory cortex. Although the therapy was successful in animals, further studies to investigate potential adverse effects need to take place before human trials can commence. The most common cause of tinnitus is damage to, and loss of, the tiny sensory hair cells in the cochlea of the inner ear. This happens more as people age, and it can also result from prolonged exposure to excessively loud noise.
Christian, B. – Evening Standard (2019). Available at: https://www.standard.co.uk/news/world/scientists-make-breakthrough-in-search-for-tinnitus-cure-a4171026.html. Last accessed: 24 June 2019.
Daily Mail (2019). Drug to reduce brain inflammation CURED tinnitus in mice – paving the way towards a pill for humans. Available at: https://www.dailymail.co.uk/health/article-7155337/Drug-reduce-brain-inflammation-CURED-tinnitus-mice-paving-way-pill-humans.html. Last accessed: 25 June 2019.
True, False, Not Given questions
False means that the statement contradicts the information in the passage. Not Given means that the information in the statement is impossible to check because it is not mentioned in the text. Use the questions to help guide you through the reading passage. Look for clues in the questions to find the correct part of the passage, then read this section carefully.
One of the most useful strategies for linking points between sentences is to use the demonstrative this or these. This or these can be used either on their own or followed by a summary word which captures the main point of the preceding sentence. Study the example below: In the past, many people believed that people over the age of sixty-five were too old to work. This view is no longer widely held.
You can always exploit the vocabulary in the questions. For example, you might be asked about a time when you won a game, and then you can use the vocabulary to talk about how you played and won a match. Or you may be asked to describe a famous person you admire. Then you could describe a sporting hero and talk about their skill in their sport and a time when they beat an opponent. Always think about how you can transfer vocabulary you have learnt to other exam questions.
First Steps Toward Reading - Toddler play can be imaginative and use items easily found around your house. Empty boxes (cereal, stove, boxes from packages), wooden spoons, adult shirts or shoes, or pots and pans is all it will take for your toddler to use their imagination and create their own play scenario. Playing with everyday household items teaches your toddler how to interact with others peacefully and helps him to understand his world. - It is important to have your toddler feel independent and confident about him/herself. Send your toddler on different errands or tasks around the house, asking him to find his shoes, bring you the ball, clean up the playroom or put his cup on the counter. Besides letting him practice his receptive language skills by following directions, this activity lets him show you how much he can accomplish by himself. - Sensory play is so important for young children. Sand, playdough, slime, mud, or water play are great activities for your child. To avoid a mess, you can play outside or fill a large tub with water or sand, and give your child free rein to dig, pour, scoop, and more. Play along with him/her and narrate what they are doing. “Wow, you are dumping the sand, now the bucket is empty.” This gives your toddler exposure to great sensory activities as well as new words and concepts like full/empty, soft/hard, etc.
What is sexual identity? I’m so glad you asked, because I just received a PDF of the short article I wrote for The Wiley Blackwell Encyclopedia of Family Studies. Sexual identity, according to yours truly, “can be defined as a label that helps signify to others who a person is as a sexual being and includes the perceptions, goals, beliefs, and values one has in regard to his/her sexual self.” (p. 1). Sexual identity is a multidimensional construct; it is not just gay or straight. It involves many factors such as gender identity, sexual orientation, sexual attraction, sexual behaviors, and even fantasies and desires. Sexual identity exploration is 100% normal and is an expected aspect of human development.
Understanding sexual identity is pretty important in today’s political climate. Youth is a time of identity exploration, and for many, that includes sexual identity exploration. The issue of sexual identity is often the difference between inclusion and exclusion. Many youth who identify as a sexual minority, which includes orientations such as lesbian, gay, bi, pan, etc., experience exclusion. They are bullied, made fun of, and have laws passed that exclude them from protection against such negative behaviors. These youth are four times more likely to attempt suicide than their straight peers. That is why I can’t help but worry about how sexual minority youth might be feeling about North Carolina’s new law, HB2, a law that limits protections for LGBTQ+ populations. The tone of this law is exclusive, and I know that youth who are exploring their sexual identity are negatively impacted by what they are hearing and seeing.
Understanding sexual identity is a first step, but as adults in the lives of youth, we have an obligation to teach inclusion and kindness. In many ways, this can be a matter of life and death. Parents and caregivers are the single most important protective factor for youth. When the adults closest to these youth love and protect them, their chances of success are greatly improved. All adults can play a part in building inclusive environments that are accepting and supportive. In fact, these environments are essential if we want to promote the mental and physical health of our future. Not just for sexual minority youth, but for all youth. If you would like to learn more or read the article in full, check it out here: wbefs073
Allen, K. (2016). Sexual identity. In C. Shehan (Ed.), The Encyclopedia of Family Studies. New York: John Wiley & Sons.
- Oxy-acetylene is a fusion welding process.
- The joint edges are heated until the metals melt.
- The molten metals join and fuse.
- Oxidation is prevented by the envelope of products of combustion.
Acetylene would explode if directly compressed. Instead, cylinders are filled with a porous material soaked in acetone, which can absorb around 25 times its own volume of acetylene. This is known as dissolved acetylene. The gas cylinders are colour coded for safety as follows:
Flashback arrestors are fitted to the regulators to prevent the feedback of flame through the hose.
- There are 2 gauges on each cylinder.
- The low pressure gauge shows the pressure supplied to the torch.
- The high pressure gauge shows the pressure in the cylinder.
漂洋过海:林偕春与新加坡的林太师信仰
Across the Seas: Lin Xiechun and the Lim Tai See Belief in Singapore
Singapore is a multi-racial and multi-cultural country; in the early days her population was formed mainly of migrants. During the 19th century, popular religion was introduced together with the inflow of migrants from the southern part of China. Lin Xiechun, also known as Lim Tai See, was a Ming official who was widely respected for his loyalty and virtues by local people of the same surname in Yunxiao county, Fujian. The Hoon Sam Temple devoted to him in Singapore was built in 1902. Through the passage of time and societal progression, its followers expanded to include other surnames and regions. While it is exceptionally rare for a street to be named after the deity of a temple, Jalan Lim Tai See is one of the few. This paper aims to trace the formation and migration of a popular belief, and its localisation in Singapore.
Keywords: Lin Xiechun, Lim Tai See, Hoon Sam Temple, Fujian Yunxiao, Singapore
Original in French by John Perr; translated from French to English by John Perr. A Linux user since 1994, he is one of the French editors of LinuxFocus. Mechanical Engineer, MSc Sound and Vibration Studies.
This is basic acoustics and signal processing for musicians and computer scientists. If you ever dreamed of making your own recordings or fiddling around with sound on your computer, then this article is just for you. This article intends to be educational. It hopes to provide the reader with a basic knowledge of sound and sound processing. Of course music is one of our concerns, but all in all, it is just some noise among other, less pleasant sounds. First the physical concepts of sound are presented, together with the way the human ear interprets them. Next, signals will be looked at, i.e. what sound becomes when it is recorded, especially with modern digital devices like samplers or computers. Last, up-to-date compression techniques like mp3 or Ogg Vorbis will be presented. The topics discussed in this paper should be understandable by a large audience. The author tried hard to use "normal terminology" and particularly terminology known to musicians. A few mathematical formulas pop up here and there within images, but they do not matter here (phew! what a relief...).
Physically speaking, sound is the mechanical vibration of any gaseous, liquid or solid medium. The elastic property of the medium allows sound to propagate from the source as waves, exactly like the circles made by a stone dropped in a lake. Every time an object vibrates, a small proportion of its energy is lost to the surroundings as sound. Let us say it right now: sound does not propagate in a vacuum. Figure 1a shows how a stylus connected to a vibrating source, like a speaker for example, traces a wave when a band of paper moves under it.
Figure 1a: Vibrating stylus on a moving paper band (z: vibrating stylus of amplitude x; the band moves at speed c; w: resulting wave)
As far as air is concerned, sound propagates as a pressure variation. A loudspeaker transmits pressure variations to the air around it. The (weak) compression propagates through the air. Please note that only the pressure front moves, not the air. For the circles in water mentioned earlier, the waves do move whereas the water stays in the same place; a floating object only moves up and down. This is why there is no "wind" in front of the loudspeaker. Sound waves propagate at about 344 meters per second in air at 20°C, but air particles only move a few microns back and forth.
We know now, from the above drawings, that sound waves have a sine shape. The distance between two crests is called the wavelength, and the number of crests an observer sees in one second is called the frequency. This term used in physics is nothing but the pitch of a sound for a musician. Low frequencies yield bass tones whereas high frequencies yield high-pitched tones. Figure 2 gives values for both frequency and wavelength of sound waves propagating through the air.
Another characteristic of sound is its amplitude. Sound can be soft or loud. Through the air, this corresponds to small or large variations in pressure, depending on the power used to compress the air. Acousticians use decibels to rate the strength of sound. The decibel is a rather intricate unit, as shown in figures 3a and 3b. It has been chosen because the figures are easy to handle and because this logarithmic formula corresponds to the behavior of the human ear, as we shall see in the next chapter.
Undoubtedly you are doing math without knowing it (Figure 3a: noise level and pressure; Figure 3b: noise level and power; the figures give the defining formulas, 20·log10(p/P0) for pressure and 10·log10(W/W0) for power). Up to now, we only need to know that dB are related to the power of sound. 0 dB corresponds to the lower threshold of human hearing and not to the absence of noise. Decibels are a measure of noise relative to human capabilities. Changing the reference (P0 or W0) above will change the dB value accordingly. This is why the dB you can read on the knob of your Hi-Fi amplifier are not acoustic levels but the amplifier's electrical output power. This is a totally different measure, 0 dB often being the maximum output power of your amplifier. As far as acoustics is concerned, the sound level in dB is much greater, otherwise you would not have bought that particular amplifier, but it also depends on the efficiency of your loudspeakers...
Figure 4 helps you locate a few usual sound sources both in amplitude and frequency. The curves represent levels of equal loudness as perceived by the human ear; we shall detail them later. The table below shows levels in decibels and watts for a few usual sound sources. Please note how the use of decibels eases notation:
Power (W)          Level (dB)   Example (source power, W)
100 000 000        200          Saturn V rocket; 4-jet airliner (50 000 000)
1 000 000          180
0.000 001          60           Conversational speech (20x10^-6)
0.000 000 01       40
0.000 000 000 1    20           Whisper (10^-9)
0.000 000 000 001  0
Sound power output of some typical sound sources
Sound amplitude can be measured in different ways. This also applies to other wave signals, as Figure 5 demonstrates:
A_average      Average amplitude        Arithmetic average of the positive signal
A_RMS          Root mean square         Amplitude proportional to the energy content
A_peak         Peak amplitude           Maximal positive amplitude
A_peak-peak    Peak-to-peak amplitude   Maximal positive to negative amplitude
The average amplitude is only a theoretical measure and is not used in practice. On the other hand, the root mean square value is universally adopted to measure equivalent signals, especially sine waves. For instance, the mains supply available at one of your home plugs is rated 220 volts, sinusoidal, at a constant frequency of 50 Hz. Here the 220 volts are RMS volts, so the voltage is really oscillating between -311 volts and +311 volts. Using the other definitions, this signal is 311 volts peak or 622 volts peak to peak. The same definitions apply to the output of power amplifiers fed to speakers. An amplifier able to yield 10 watts RMS will have a peak value of 14 watts and a peak-to-peak value of 28 watts. These peak-to-peak measurements of sine waves are called musical watts by audio-video retailers because the figures are good selling arguments. (The decibel formulas and amplitude measures above are illustrated numerically in the short sketch below.)
As it does with music, time plays a fundamental role in acoustics. A very close relationship binds time to space, because sound is a wave that propagates through space over time. Taking this into account, three classes of acoustic signals are defined. The diagrams of figure 6 show some sound signals. We take advantage of these diagrams to introduce the notion of spectrum. The spectrum of a signal shows the different "notes", or pure sounds, that make up a complex sound. If we take a stable periodic signal like a siren or a whistle, the spectrum is stable over time and only shows one value (one line in figure 6a). This is because it is possible to consider each sound as a combination of pure sounds, which are sine waves.
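As promised, here is a minimal numerical sketch of the decibel and amplitude definitions above. It is not part of the original article; it simply assumes the standard reference values of 20 µPa for sound pressure and 1 pW for sound power, and checks that the RMS value of a 311 V peak sine wave is about 220 V.

```python
import numpy as np

P0 = 20e-6    # reference sound pressure: 20 micropascals (the 0 dB threshold of hearing)
W0 = 1e-12    # reference sound power: 1 picowatt (the 0 dB line of the table above)

def pressure_level_db(p):
    """Sound pressure level: 20 * log10(p / P0), as in figure 3a."""
    return 20 * np.log10(p / P0)

def power_level_db(w):
    """Sound power level: 10 * log10(W / W0), as in figure 3b."""
    return 10 * np.log10(w / W0)

print(round(power_level_db(1.0)))        # 120 dB for a 1 W source
print(round(pressure_level_db(1.0)))     # ~94 dB for a pressure of 1 pascal

# Amplitude measures of the 50 Hz, 311 V peak mains sine wave.
t = np.arange(44100) / 44100             # one second, sampled at 44.1 kHz
signal = 311 * np.sin(2 * np.pi * 50 * t)
rms = np.sqrt(np.mean(signal ** 2))
print(round(rms))                        # ~220 V RMS = peak / sqrt(2)
print(round(signal.max()), round(signal.max() - signal.min()))  # ~311 V peak, ~622 V peak-to-peak
```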
We shall see later on that this decomposition of periodic signals into sine waves was demonstrated by a French mathematician named Fourier in the 19th century. This will also allow us to talk about chords as far as music is concerned. Meanwhile, I shall stick to sine waves because they are a lot easier to draw than solos from Jimi Hendrix.
In order to be able to process sound with a computer, we have to acquire it. This operation will allow us to transform the variation in pressure of the air into a series of numbers that computers understand. To do so, one uses a microphone, which converts pressure variations into electrical signals, and a sampler, which converts the electrical signal into numbers. "Sampler" is a general term; ADC (Analog to Digital Converter) is the term often used by electronics engineers. This task is usually devoted to the sound card of personal computers. The speed at which the sound card can record points (numbers) is called the sampling frequency. Figure 7 below shows the influence of the sampling frequency on a signal and on its spectrum, calculated by means of the Fourier transform. The formulas are there for the math addicts.
This demonstrates (please believe me) that the transformation of a continuous wave into a series of discrete points makes the spectrum periodic. If the signal is also periodic, then the spectrum is also discrete (a series of points) and we only need to compute it at a finite number of frequencies. This is good news, because our computer can only compute numbers, not waves. So we are now faced with the case of fig. 7d, where a sound signal and its spectrum are both known as series of points: the signal fluctuates over time, and the spectrum lies in the frequency domain between 0 Hz and half the sampling frequency.
All these numbers lined up have, however, lost some part of the original sound signal. The computer only knows the sound at certain precise moments. In order to be sure that it will be played back properly and without any ambiguity, we have to be careful while sampling it. The first thing to do is to make sure that no frequency (pure sound) greater than half the sampling frequency is present in the sampled signal. If there is, it will be interpreted as a lower frequency and it will sound awful. This is shown in figure 8. This particular behavior of sampled signals is best known as the Shannon theorem, after the mathematician who demonstrated the phenomenon (a short numerical sketch is given below). A similar effect can be observed on cart wheels in films, e.g. westerns: they often appear to turn backwards because of the stroboscopic effect of film.
For the daily use of sound acquisition, this means that you need to eliminate all frequencies above half the sampling frequency. Not doing so will mangle the original sound with spurious sounds. Take for instance the sampling frequency of compact discs (44.1 kHz): sounds above 22 kHz must be absent (tell your bats to keep quiet, because they chat with ultrasound). In order to get rid of the unwanted frequencies, filters are used. "Filter" is a widely used term that applies to any device able to keep or transform part of a sound. For example, low-pass filters are used to suppress high frequencies which are not audible but are troublesome for sampling (the gossiping of the bats). Without going into details, the following diagram shows the characteristics of a filter. A filter is a device that changes both the time signal and the spectrum of the sound wave.
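The Shannon sampling constraint is easy to demonstrate numerically. The sketch below (not from the original article; the tone frequencies are arbitrary illustrative choices) samples a tone above half the sampling frequency and shows that its samples are identical to those of a lower-frequency tone, which is exactly why everything above fs/2 must be filtered out before sampling.

```python
import numpy as np

fs = 44100                        # CD sampling frequency; the Nyquist limit is fs / 2 = 22050 Hz
n = np.arange(2048)               # sample indices

f_real = 30000                    # an ultrasonic tone, above the Nyquist limit (a chatting bat)
f_alias = fs - f_real             # 14100 Hz: the audible frequency it will be mistaken for

bat = np.cos(2 * np.pi * f_real * n / fs)
ghost = np.cos(2 * np.pi * f_alias * n / fs)

# Once sampled, the 30 kHz tone is indistinguishable from a 14.1 kHz tone.
print(np.allclose(bat, ghost))    # True
```

This is the stroboscopic cart-wheel effect in numbers: the samples simply cannot tell the two frequencies apart, so an anti-aliasing low-pass filter has to remove the high frequency before the converter ever sees it.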
A 100 Hz square wave low-pass filtered at 200 Hz will become a sine wave, because the upper part of its spectrum is removed (see figure 6c). Similarly, a note at 1000 Hz played by a piano (C6) will sound like a whistle if it is filtered at 1200 or 1500 Hz. The lowest frequency of the signal is called the fundamental frequency; the others are multiples of it and are called harmonics.
In the time domain, a filter introduces modifications of the wave called distortions. This is mainly because of the delay taken by each harmonic relative to the others. In order to show the influence of a filter on a signal, let us consider a simple square pulse (figure 10a), the amplitude of its spectrum (figure 10b) and the phase of its spectrum (figure 10c). This square pulse acts as a filter allowing sound to go through between t=0 and T seconds. The spectrum of the pulse represents the frequency response of the filter. We see that the higher the frequency of the signal, the bigger the delay between the frequency components and the lower their amplitude. Figure 11 represents the influence of the rectangular filter on a simple signal like a sine wave. Cutting the sound abruptly at time T introduces new frequencies into the spectrum of the sine wave. If the filtered signal is more complex, like the square wave of figure 6c, the frequency components will lag, giving a distorted signal at the output of the filter.
For a better understanding of acoustics and sound, let us focus on the part we use to receive sound: the ear. Figure 12 shows a cross-section of the ear. Sound is collected by the pinna and channeled through the auditory canal toward the ear drum, which acts more or less like a microphone. The vibrations of the ear drum are amplified by three small bones acting like levers and named the hammer, the anvil and the stirrup.
Figure 12: The main parts of the ear (a: outer ear; b: middle ear; c: inner ear; e: ear canal; f: ear drum; j: oval window; k: round window; l: Eustachian tube; m: scala tympani; n: scala vestibuli; p: nerve fiber; q: semicircular canal)
The movements of the stirrup are transmitted via the oval window to the cochlea. The cochlea contains two chambers separated by the basilar membrane, which is covered with sensitive hair cells linked to the auditory nerve (as shown in figures 13 and 14 below). The basilar membrane acts as a spatial filter, because the various parts of the cochlea are sensitive to different frequencies, thus allowing the brain to differentiate the pitch of notes.
Figure 13: Longitudinal section of the cochlea (f: ear drum; j: oval window; k: round window; m: scala tympani; n: scala vestibuli; r: basilar membrane; R: relative response; F: frequency response; D: distance along the membrane)
Figure 14: Section across the cochlea (m: scala tympani; n: scala vestibuli; p: auditory nerve; r: basilar membrane; t: scala media; u: hair cell)
The brain plays a very important role because it does all the analysis work needed to recognize sounds, according to pitch of course, but also according to duration. The brain also correlates the signals from both ears in order to locate sound in space. It allows us to recognize a particular instrument or person and to locate them in space. It seems that most of the work done by the brain is learned. Figure 15 shows how we hear sounds according to frequency. The curves have been drawn for an average population and are a statistical result for people aged between 18 and 25 and for pure tones.
The differences between individuals are explained by many factors, among which are age and exposure to noise. Figure 16 shows the influence of age on hearing loss at different frequencies. The results differ according to the source. This is easily explained by the large variations that can be observed within a population, and because these studies cannot easily take only age into account. It is not rare to find aged musicians with young ears, just as there are young people with significant hearing loss due to long exposure to strong noises like those found in concerts or night clubs. Noise-induced hearing loss depends both on the duration of exposure and on the noise intensity. Note that all sounds are considered here as "noise", not only the unpleasant ones. Thus, listening to loud music with headphones has the same effect on the auditory cells as listening to planes taking off at the end of a runway.
Figure 17 shows the effect of exposure to noise on hearing. Notice that the effects are not the same as those induced by age, where the ear loses sensitivity at high frequencies. Noise-induced hearing loss, on the other hand, makes the ear less sensitive around 3-4 kHz, the frequencies at which the ear is most sensitive. This kind of hearing loss is usually seen among firearms users.
If you take a look at the chapter about decibels and their calculation, you will notice that about ten decibels corresponds to a very large change in acoustic pressure. Having a linear decibel scale is equivalent to having an exponential pressure scale. This is because the ear and the brain are able to handle very large variations of both amplitude and frequency. The highest frequency a healthy human can hear is 1000 times the lowest, and the loudest sound that can be heard has a sound pressure about one million times that of the quietest (an intensity ratio of 10^12). Doubling the acoustic power only represents 3 dB. This can be heard, but an increase of about 9 dB in sound intensity is needed for a human being to have the subjective feeling of double volume. That is an acoustic power 8 times greater! In the frequency domain, going up an octave is equivalent to doubling the frequency. Here too, we hear linearly what is an exponential increase of a physical quantity. Do not rush to your calculator yet; calculating the pitches of notes will be done later on.
Recording sound using an analog device like a tape recorder or a vinyl disc is still a common operation, even if it tends to be superseded by digital systems. In both cases, transforming a sound wave into magnetic oscillations or digital data introduces limits inherent to the recording device. We talked briefly earlier about the effects of sampling on the spectrum of sound. Other effects can be expected when recording sound.
"Dynamic range" is the term used for a recording device to express the difference between the minimum and maximum amplitude the device can record. The recording chain generally starts with the microphone, which converts sound into an electrical signal, and ends with the recording medium: disc, tape or computer... Remember that decibels express a ratio. As far as dynamic range is concerned, the value given is relative to the minimum of the scale, which is 0 dB. Here are a few examples: a symphonic orchestra can play sounds ranging up to 110 dB. This is why disc editors use dynamic compression systems, so that louder sounds are not clipped and quieter ones are not lost in noise. In addition to being less capable than the human ear, recording devices also have the drawback of generating their own noise.
This noise can be the rubbing of the diamond stylus on the vinyl disc or the hum of the amplifier. It is usually very low, but it does not allow the quietest sounds to be recorded. Most of the time it is best heard with good quality headphones, and it sounds like a waterfall because it has a very wide spectrum containing many frequencies.
We saw earlier that filters have an important effect on the phase of a spectrum because they shift signals according to their frequency. This type of signal distortion is called harmonic distortion because it affects the harmonic frequencies of the signal. Every single device recording a signal behaves like a filter and thus introduces distortion. Of course, the same happens when you play the recorded sound back again: additional distortion and noise will be added.
More and more, compression algorithms like mp3 or Ogg Vorbis are used to gain precious disk space on our recording media. These algorithms are said to be lossy, or destructive, because they eliminate part of the signal in order to minimize space. Compression programs use a computer model of the human ear in order to eliminate the inaudible information. For instance, if two frequency components are close to each other, the quieter one can safely be removed because it will be masked by the louder one. This is why tests and advice can be found on the Internet on how best to use this software, i.e. how to keep the best part of the signal. According to those read by the author, most mp3 compressions low-pass filter sounds at 16 kHz and do not allow bit rates higher than 128 kilobits per second. These settings are most of the time unable to maintain CD-quality sound. On the other hand, compression software like gzip, bzip2, lha or zip does not alter the data, but achieves lower compression ratios. Moreover, it is necessary to uncompress the whole file before listening to it, which is not what is needed for a walkman or any other sound-playing device.
In order to settle things, here is a comparison of the terms used by music and by science. Most of the time these comparisons have limits, because the terms used by music lovers describe human hearing and not physical phenomena. A note is defined, amongst other things, by its pitch, and this pitch can be taken to be the fundamental frequency of the note. Knowing this, the frequencies of notes can be calculated from a reference: if we take REF to be the A at 440 Hz in octave 4, we can compute the other notes for tones numbered 1 to 12, from C to B (a worked sketch of this calculation is given at the end of this article). True music lovers will notice that we make no distinction between diatonic and chromatic half tones. With minimal changes, the same calculations can be done using commas as a subdivision instead of half tones...
Knowing the frequencies of notes is far from enough to distinguish a note played by one instrument from the same note played by another. One also needs to take into account how the note is played (pizzicato or legato), which instrument it comes from, not counting all possible effects like glissando, vibrato, etc. For this purpose, notes can be studied using their sonogram, which is their spectrum across time. A sonogram allows all the harmonic frequencies to be viewed over time.
Figure 18: A sonogram (T: time; A: amplitude; F: frequency)
Nowadays, electronic sound recording and playback use totally artificial devices like synthesizers, which create sounds out of nothing, or samplers, which store sound and play it back at different pitches with various effects.
It is then possible to play a cello concerto replacing the instruments with the sampled creaking of chairs. Everybody can do it; there is no need to be able to play any instrument. The characteristics of a single note are given in the diagram below:
Figure 19: Characteristics of a note: the envelope (1: attack; 2: sustain; A: positive amplitude; T: time)
The curve shows the evolution of the global loudness of the sound over time. This type of curve is called an envelope because it envelops the signal (the grey part of the drawing). The rising part is called the attack and can differ tremendously according to the type of instrument. The second part is called the sustain; it is the body of the note and is often the longest part, except for percussion instruments. The third part can also change shape and length according to the instrument. Moreover, instruments allow musicians to influence each of the three parts. Hitting the keys of a piano differently will change the attack of the note, whereas the pedals will change the decay. Each of the three parts can have its own individual spectrum (color), which makes the diversity of sounds infinite. The harmonic frequencies do not change at the same rate: bass frequencies tend to last longer, so that the color of the sound is not the same at the beginning and at the end of the note.
By its definition, the frequency range of a device can be compared with the range of an instrument. In both cases the term describes the range of frequencies or pitches an instrument can play. However, the highest pitch an instrument can play corresponds only to the fundamental frequency given in the table above. In other words, recording a given instrument requires a device with a frequency range reaching much higher than the highest note the instrument can play, if the color of the notes is to be recorded. A narrow frequency range will low-pass filter away the upper harmonics of the higher-pitched notes, and this will change the sonority. In practice, a device with the frequency range of the human ear, i.e. 20 Hz to 20 kHz, is needed. Often it is necessary to go higher than 20 kHz, because devices introduce distortion well before the cut-off frequency.
Analysing the frequency table of notes above, musicians will find similarities between harmonic frequencies and the notes making up a chord. Harmonic frequencies are multiples of the fundamental frequency, so for a C1 at 32.7 Hz the harmonic frequencies are 32.7, 65.4, 98.1, 130.8, 163.5, 196.2, 228.9 Hz, and so on (see the sketch below). Here we see why a chord is known as perfect (C-E-G-C) or seventh (C-E-G-Bb): the frequencies of the notes within the chord are aligned with the harmonic frequencies of the base note (C). This is where the magic is.
Without going too much into the details, we have looked at the physical, human and technical aspects of sound and acoustics. This being said, your ear will always be the best criterion of choice. Figures given by mathematics or sophisticated measuring devices can help you understand why one particular record sounds weird, but they will never tell you whether the Beatles made better music than the Rolling Stones in the sixties.
Brüel & Kjaer: a Danish company making measurement instruments for acoustics and vibrations. This company has for a long time (fifty years or so) published free books, from which most of the drawings for this article come. These books are available online in PDF format at the following link: http://www.bksv.com/bksv/2148.htm
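As promised above, here is a minimal numerical sketch (not part of the original article) of the pitch calculation and of the harmonic series of C1. It assumes equal temperament with A4 = 440 Hz and semitones numbered 1 (C) to 12 (B), since the original formula itself appeared only in an image.

```python
import math

REF = 440.0                                  # A, octave 4
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_frequency(name, octave):
    """Equal-temperament pitch, assuming A4 = 440 Hz and 12 semitones per octave."""
    tone = NOTES.index(name) + 1             # C=1 ... A=10 ... B=12
    return REF * 2 ** (octave - 4) * 2 ** ((tone - 10) / 12)

def nearest_note(freq):
    """Name of the equal-temperament note closest to freq."""
    semitones = round(12 * math.log2(freq / REF))   # signed offset from A4
    return f"{NOTES[(semitones + 9) % 12]}{4 + (semitones + 9) // 12}"

print(round(note_frequency("C", 4), 1))      # 261.6 Hz
print(round(note_frequency("C", 1), 1))      # 32.7 Hz, the C1 used above

c1 = note_frequency("C", 1)
for k in range(1, 8):                        # first seven harmonics of C1
    print(f"harmonic {k}: {k * c1:6.1f} Hz ~ {nearest_note(k * c1)}")
# Prints roughly C1, C2, G2, C3, E3, G3, A#3 (= Bb3): the notes of the
# perfect chord (C-E-G-C) and of the seventh chord (C-E-G-Bb) built on C.
```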
One of our distinct ways of teaching language is through the use of computer games. These games require development and input by the students, so that they not only learn their new language but also get to help create something as they learn. Our experience has shown that teaching students this way is particularly effective. Most recently, our students built a computer game similar to The Sims, which meant that they had to learn all of the names for household items, clothing items, occupations, and actions as they worked on building their virtual world. The students applied their existing coding skills to their new language learning adventures and emerged with a fun new product that they could use in job interviews to show both their coding competency and their language competency.
Building games can also help students explore new syntax and structure. The format of the games we build in our bootcamps requires that students build in decision-making and multiple-choice options for the characters in the games, which means that the students practice using the subjunctive tense, either/or constructions, and other particularly tricky language applications. This gives them a leg up in terms of grammatical sophistication and helps them be better prepared to transition their new language into the workplace.
As part of our commitment to technology-based language learning, we have developed several guidelines for building language learning centers that are particularly well-suited to our type of language boot camps. These guidelines are available as a printable PDF e-book, but here is a general overview.
1. Think about lighting. In any environment where students will be using computer screens extensively, it’s important to make sure you’ve given the students good lighting to minimize eye strain. Focus on reducing harsh overhead glare, avoiding lighting that is significantly more yellow or more blue than natural light, and minimizing the amount of fluorescent light you use in your language learning center.
2. Encourage group work. Work stations should be set up in clusters so that students can work together on their projects. Our curriculum is designed to get students talking to each other in their new language as soon as possible. If their work stations are set up in clusters to facilitate easier communication, students will be more likely to talk to each other earlier in the program. Our curriculum also includes significant amounts of group work, which means they will need to be able to sit together and look over each other’s shoulders periodically.
3. Minimize distractions. It’s easy to get distracted when learning a new language, because language acquisition requires a significantly increased level of concentration and uses parts of the brain that often haven’t been tapped into since early childhood. To help students in this endeavor, minimize distractions in the classroom by preventing outside interruptions, keeping the classroom at a comfortable temperature, and perhaps using a white noise machine to drown out other auditory distractions.
4. Foster continual language immersion. Above all, complete and total language immersion is the goal for these programs. Encourage students to always use their new language when communicating with you and with each other. When they forget, gently remind them about the ground rules for language use in the classroom.
If you follow these guidelines, your language learning center will be set up in a way that will maximize the likelihood of student success.
For a more in-depth discussion of these and other guidelines, contact our office for a free copy of our printable PDF e-book. After years of work in the language learning field, we’ve developed a deep foundation of experience and expertise that we’d like to put to use for you, too. We are available to help you set up your language learning center, develop new curriculum, build your program, teach classes, apply for grant funding, and otherwise achieve success for your new language learning program. Our consulting fees vary depending on the amount, type, and duration of work required, the number of consultants needed, the location of the consulting work, and other factors. Please contact our office today to explore whether collaborating with us may be the right call for you. We are committed to helping students of all ages, backgrounds, and income levels reach their language learning goals. If you are building a language learning program in an underserved or underprivileged area, or if you have a significant percentage of students who are immigrants from non-English speaking countries or whose income falls below two times the federal poverty line, please talk to us about our options for reduced fees and other creative ways of structuring our consultancy. We never want cost to get in the way of learning.
Invention of the Coolgardie Safe is credited to Arthur Patrick McCormick, a contractor in Coolgardie and later Mayor of Narrogin. Coolgardie is in the Eastern Goldfields region of Western Australia. Gold was first discovered there in 1892, the townsite became a municipality in 1894, and by 1898 its population of 15,000 made it the third largest town in WA after Perth and Fremantle. In the last decade of the 19th century, Coolgardie was the capital of the West Australian goldfields. Because the town was 180 kilometres from the nearest civilisation, food supplies were initially scarce and expensive. As fresh food was a valuable commodity, there was an incentive to preserve it and keep it out of reach of scavengers. It was in an effort to do this that McCormick came up with his design for the Coolgardie Safe.
McCormick noticed that a wet bag placed over a bottle cooled its contents. He further noted that if this bottle was placed in a draught, the bag would dry out more quickly, but the bottle would get colder. What McCormick had discovered was the principle of evaporation: 'to change any liquid into a gaseous state requires energy. This energy is taken in the form of heat from its surroundings.' (Ingpen 1982, p. 18) Employing this principle, McCormick made a box for his provisions which he covered with a wet Hessian bag. He then placed a tray on top, into which he poured water twice daily. He hung strips of flannel from the tray so that water would drip down onto the Hessian bag, keeping it damp.
McCormick's invention would not have worked without a steady supply of water. Fresh water was scarce in the eastern goldfields at this time, but the demand for water from a steadily growing population encouraged innovation. The solution was to condense salt water: heating salt water in tanks produced steam that was condensed in tall cylinders, cooled and then collected in catchment trays. By 1898 there were six companies supplying condensed water to the goldfields, the largest producing 100,000 gallons of water a day (1992, p. 11).
McCormick's safe was handmade using the materials to hand. Many other prospectors in the Coolgardie region copied the design. In the early 20th century, Coolgardie Safes were manufactured commercially across Australia and found their way into homes in both rural and urban areas. These safes incorporated shelving and a door, had metal or wooden frames and Hessian bodies. The feet of the safe were usually placed in a tray of water to keep ants away. Museum Victoria has an excellent example of a commercially produced Coolgardie Safe, the Trafalgar, which was manufactured in Adelaide by W. J. Rawling, c. 1915.
(1992). Worth its weight: a celebration of Coolgardie's centenary, 1892-1992. LISWA, Perth.
Bonney, W.H. (1895). The History of Coolgardie. Hann, Enright & Co., Perth.
Ingpen, Robert (1982). Australian Inventions and Innovations. Rigby, Australia.
Improved Gravity Map Reveals New Mountains Under the Sea
[Figure: Ocean gravity maps. (A) New marine gravity anomaly map derived from satellite altimetry reveals tectonic structures of the ocean basins in unprecedented detail, especially in areas covered by thick sediments; land areas show gravity anomalies from Earth Gravitational Model 2008. (B) Vertical Gravity Gradient (VGG) map derived from satellite altimetry highlights fracture zones crossing the South Atlantic Ocean basin (yellow line); areas outlined in red are small-amplitude anomalies where thick sediment has diminished the gravity signal of the basement topography.]
October 17, 2014 - An international team of remote sensing scientists that includes W.H.F. Smith of the STAR Laboratory for Satellite Altimetry developed a new global marine gravity map that is twice as accurate as previous models. Published October 3 in the journal Science, the new map reveals significant new seafloor features, including an extinct spreading ridge in the Gulf of Mexico, a major propagating rift in the South Atlantic, and as many as 25,000 previously uncharted seamounts 1.5 km tall. These discoveries allow us to understand regional tectonic processes and highlight the importance of satellite-derived gravity models as a primary tool in the investigation of ocean bottom regions. Knowing the locations of seamounts is important for conservation and fisheries management because ocean life tends to congregate around them. And the shape of the ocean floor influences currents and the transmission of heat through the ocean, with implications for navigation and for understanding the changing climate. The new model makes use of data from Jason-1, which was recently taken out of service, and the European Space Agency’s CryoSat, whose principal mission is dedicated to exploring the shape and thickness of polar ice fields - not the shape of the seafloor. The new map’s denser coverage and better radar technologies have brought a two-fold improvement in the gravity model used to describe the ocean floor. While the data thus derived have not yet been fully explored, important new findings have already been identified. These include an extinct ridge where the seafloor spread apart to help open up the Gulf of Mexico about 180 million years ago. And in the South Atlantic, the team sees the two halves of a different type of ridge feature that became separated roughly 85 million years ago when Africa rifted away from South America. Data, algorithms, and images presented on STAR websites are intended for experimental use only and are not supported on an operational basis.
There’s a reason that a broken neck or back is considered to be one of the most tragic of injuries. If the spinal cord snaps, the brain loses its ability to communicate with the rest of the body, and the limbs to talk to each other. What most people don’t realize is that when it comes to locomotion, the second problem is actually worse than the first. The chicken with its head cut off can still run around, thanks to its spinal cord: The brain gave the signal to get going, then became superfluous to requirements. But if the limbs can’t “speak” to each other to coordinate, then walking is impossible. Researchers at Johns Hopkins University (JHU; Baltimore) saw a way of getting around the problem. It turns out that the coordinated movements of limbs in all sorts of animals (including chickens) are produced by a central pattern generator (CPG). Sensors and actuators feed signals into the neurons of the spinal cord and then respond to the output. Because of the cyclical nature of walking, the spinal cord neurons learn to coordinate the inputs and outputs to produce a regular pattern: they become a CPG as the creature learns to walk. So, to give locomotion to an animal with a severed spinal cord, you need to reproduce this neural process. If you could do so with an embedded chip, the researchers reasoned, you could enable walking at the flip of a switch. Now they’ve shown that it really works. In a recent experiment with colleagues at the University of Alberta, Edmonton, they used a chip with analog neurons to control the walking of a temporarily paralyzed cat. Not only were signals from the chip used to stimulate the muscles, but the movement of the limbs was detected and fed back into the artificial neural network. The resulting movement might not have been completely natural, but it proved the concept. And this solution, unlike a more brute-force digital approach, has the potential of actually being implantable in the medium term. Reggie Edgerton, a professor at the Department of Physiological Science and Neurobiology, University of California at Los Angeles, studies the neural control of movement and neuromuscular Plasticity (adaptability and learning). He sees the new work as a step forward: “It provides a compact device that not only can stimulate the muscle but has some ability to modulate the stimulation amplitudes based on kinetic and kinematic feedback of the limbs.” The difficulty of what the JHU accomplished should not be underestimated, he said. “Perhaps the most important point from the present data is the suggestion of some success in proof of concept of recording sensory information, processing it, and then generating a reasonably successful adapted activation pattern of specific muscles.” The neuromorphic approach Ralph Etienne-Cummings, the JHU associate professor who has been in charge of the electronics work, has been working with CPG-based locomotion for several years. With his colleague Tony Lewis at Iguana Robotics (Mahomet, Ill.), he showed back in 2000 that a central pattern generator can be used to efficiently control walking in engineering as well as nature. Together, Lewis and Etienne-Cummings built a small robot: just a pair of legs driven at the hip. The knees were left to move freely, swinging forward under their own momentum like pendulums. Locomotion was simple. The analog CPG chip designed by Etienne-Cummings would produce a burst of spikes that would drive the left/right hips forward/back. 
Position sensors on the hips would send spikes to the chip when their extremes had been reached, which would modify the output of the CPG and cause the left/right hips to start moving back/forward instead. Essentially, the sensors helped to feed in information about the real-time physics of the legs into the CPG, and it in turn coordinated the actions of the legs. This particular CPG chip worked through the charge and discharge of an analog capacitor, so incoming spikes supplied by the extreme hip position sensors had the effect of either charging the CPG faster (in the first phase) or allowing it to discharge more slowly than it would have otherwise. Because that would change the period of the CPG, the next ‘extreme’ spikes would hit at a different part of the cycle, altering its pattern again. However, eventually, the CPG pattern would converge to that of the sensor spike pattern (a process known as entrainment), and the walking pattern would be set. Thus, as soon as one leg was fully extended, the other hip would start to push forward, producing a gait that exactly matched the physics of the legs. The researchers were also able to make the legs step over obstacles by adding a camera, appropriately converting its output into spikes, and feeding those into the CPG. For this experiment, the CPG chip itself consumed less than 1µW. From robotics to biology According to Jacob Vogelstein, a researcher who has been working as part of Etienne-Cummings’ team for several years, the advantages of applying this approach to those with spinal-cord injuries was obvious: The current state of the art for patients is primitive. “Commercially available locomotor prostheses require the user to press a button each time he or she wants to take a step. A specially outfitted walker is sold with this system and has one button on the left side and one button on the right. When the user wants to move his or her left foot, he or she depresses the left button. When the user wants to move his or her right foot, he or she depresses the right button. There is no sensory feedback loop to control the locomotion.” There are better systems available in the lab, he says, but they require “a fast PC, a whole rack of signal processing hardware, an analog-to-digital converter card and specialized software written in C. If you took all of the hardware and crammed it in a box, you’d probably need 8 cubic feet.” By contrast, the JHU electronics fit on a printed-circuit board. Most of the components are commercially available: an analog signal processor chip, to process signals to be fed into the CPG; a microprocessor, to control the output to the subject; and constant-current stimulator output stages. Of course, the core of the system is the analog CPG chip. In the experiment with the cat, the researchers’ custom device had four sets of neural circuitry that corresponded to four muscle areas: the left and right hind leg flexor and extensor muscles. As with the robotics experiments, hip angle and ground-reaction force sensors were used to send information into the CPG, which prevented opposing muscles from operating at the same time and generally coordinated the movement. The chip was used to directly stimulate the muscles of a cat whose spinal cord had been anaesthetized so that it could not participate in the motion control. Vivian Mushahwar, an assistant professor in the Center for Neuroscience at the University of Alberta, was in charge of doing the in vivo experiment. 
Though the locomotion was slow, she was impressed with the quality of movement the chip produced. “This walking looked near normal and was fully adaptable to the surface over which the animal was stepping. This was extremely exciting and highly novel. All experimental work in the past focused on either producing in-place stepping or walking on even, unhampered terrain. The preliminary work with the CPG chip allowed for walking to take place on an unpredictable terrain, which is an essential step for producing functional walking systems that can be utilized in everyday life outside the lab environment.”
The next step
Mushahwar, much of whose work is devoted to neuroprostheses, is excited about the potential of the new work. “The wonderful feature of the CPG chip is that it can be used with any functional electrical stimulation system for walking. In other words, it can be used with systems employing surface electrodes or implanted wires to activate groups of flexor and extensor muscles. Because of the flexibility in how its neurons are connected, the sensory input they receive and the capacity of these neurons to ‘learn,’ the chip can be used for restoring locomotion in people with complete spinal cord injury or augment the locomotor capacity in people with incomplete injury. “Our future goal,” she said, “is to combine the CPG chip with microelectronic implants in the spinal cord itself, instead of stimulating muscles directly through surface or implanted wires placed throughout the legs. The spinal implants, which would be distributed in a region of the spinal cord spanning less than 5 cm, would allow the activation of intact populations of neuronal networks within the cord that are responsible for the generation of flexor and extensor alternations in the legs. The combination of the CPG chip with microelectronic implants in the spinal cord would significantly reduce the size of the electrical stimulation system by eliminating the need to implant wires directly in the muscles of the legs. It will also produce even more natural, fatigue-resistant walking than what we were able to achieve to date.” Vogelstein believes that the electronic approach is the only one likely to bear fruit. “In the long term, the CPG chip allows us to pursue an implantable solution, whereas the current [digital] technology has no easy path to implantation. The CPG chip is much smaller than a whole computer, it requires much less power and–because silicon neurons function similarly to biological neurons–it has the potential to communicate directly with the spinal cord and nervous system in its own language. A disadvantage of the CPG chip over a PC-based solution is that it is not as flexible as a computer, but as long as it does its job, it doesn’t need flexibility. You’ll never need your prosthetic CPG chip to run Windows.”
Sunny Bains – Brains and Machines
Sunny Bains is a scientist and journalist based in London.
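As a toy illustration of the entrainment idea described in this article, here is a hedged sketch of a relaxation oscillator whose firing period gradually locks onto the period of incoming "sensor spikes". The model and all parameters are arbitrary simplifications chosen for illustration; they are far simpler than the analog CPG chip and the experiments described above.

```python
# Toy illustration of entrainment: a relaxation oscillator ("CPG") charges
# toward a threshold, and periodic "sensor spikes" (limb extremes) nudge its
# charge so that its firing period converges to the limb's period.

def simulate(cpg_period=1.3, limb_period=1.0, coupling=0.5,
             dt=0.001, t_end=20.0):
    charge = 0.0
    fire_times, t = [], 0.0
    next_sensor_spike = limb_period
    while t < t_end:
        charge += dt / cpg_period          # steady charging toward threshold 1.0
        if t >= next_sensor_spike:         # limb reached its extreme position
            # The sensor spike tops up the charge, pulling the oscillator's
            # phase toward the limb's rhythm (the entrainment step).
            charge += coupling * (1.0 - charge)
            next_sensor_spike += limb_period
        if charge >= 1.0:                  # CPG "neuron" fires: switch the hips
            fire_times.append(t)
            charge = 0.0
        t += dt
    return fire_times

if __name__ == "__main__":
    fires = simulate()
    periods = [b - a for a, b in zip(fires, fires[1:])]
    print("early firing periods:", [round(p, 2) for p in periods[:3]])
    print("late firing periods: ", [round(p, 2) for p in periods[-3:]])
```

Run as-is, the oscillator's natural period of 1.3 time units shortens over a few cycles and settles at the limb's period of 1.0, which is the basic behaviour (entrainment) the article attributes to the CPG.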
Scientists have long sought ways to convert abundant CO2 to useful products such as chemicals and fuel. As early as in 1869, they were able to electrocatalytically convert CO2 to formic acid. Over the past two decades, the rise of CO2 in the Earth’s atmosphere has significantly accelerated research in CO2 conversion using renewable energy resources, including solar, wind, and tidal. Because these resources are intermittent—the sun doesn’t shine every day, nor does the wind constantly blow—how to store renewable energy safely and cost-effectively is a major challenge. Recent research in electrocatalytic CO2 conversion points the way to using CO2 as a feedstock and renewable electricity as an energy supply for the synthesis of different types of fuel and value-added chemicals such as ethylene, ethanol, and propane. But scientists still do not understand even the first step of these reactions—CO2 activation, or the transformation of the linear CO2 molecule at the catalyst surface upon accepting the first electron. Knowing the exact structure of the activated CO2 is essential because its structure dictates both the end product of the reaction and its energy cost. This reaction can start from many initial steps and go through many pathways, giving typically a mixture of products. If scientists figure out how the process works, they will be better able to selectively promote or inhibit certain pathways, which will lead to the development of a commercially viable catalyst for this technology. Columbia Engineering researchers announced today that they solved the first piece of the puzzle—they have proved that CO2 electroreduction begins with one common intermediate, not two as was commonly thought. They applied a comprehensive suite of experimental and theoretical methods to identify the structure of the first intermediate of CO2 electroreduction: carboxylate CO2− that is attached to the surface with C and O atoms. Their breakthrough, published online today in PNAS, came by applying surface enhanced Raman scattering (SERS) instead of the more frequently used surface enhanced infrared spectroscopy (SEIRAS). The spectroscopic results were corroborated by quantum chemical modeling. “Our findings about CO2 activation will open the door to an incredibly broad range of possibilities: if we can fully understand CO2 electroreduction, we’ll be able to reduce our dependence on fossil fuels, contributing to the mitigation of climate change,” says the paper’s lead author Irina Chernyshova, associate research scientist, Department of Earth and Environmental Engineering. “In addition, our insight into CO2 activation at the solid-water interface will enable researchers to better model the prebiotic scenarios from CO2 to complex organic molecules that may have led to the origin of life on our planet.” They decided to use SERS rather than SEIRAS for their observations because they found that SERS has several significant advantages that enable more accurate identification of the structure of the reaction intermediate. Most importantly, the researchers were able to measure the vibrational spectra of species formed at the electrode-electrolyte interface along the entire spectral range and on an operating electrode (in operandi). Using both quantum chemical simulations and conventional electrochemical methods, the researchers were able to get the first detailed look at how CO2 is activated at the electrode-electrolyte interface. 
Understanding the nature of the first reaction intermediate is a critical step toward commercialization of the electrocatalytic CO2 conversion to useful chemicals. It creates a solid foundation for moving away from the trial-and-error paradigm to rational catalyst design. “With this knowledge and computational power,” says the paper’s co-author Sathish Ponnurangam, a former graduate student and postdoc in Somasundaran’s lab who is now an assistant professor of chemical and petroleum engineering at the University of Calgary, Canada, “researchers will be able to predict more accurately the reaction on different catalysts and specify the most promising ones, which can further be synthesized and tested.” “The Columbia Engineering experiments provide such detail that we should be able to obtain very definitive validation of the computational models,” says William Goddard, Charles and Mary Ferkel Professor of Chemistry, Materials Science, and Applied Physics at Caltech, who was not involved with the study. “I expect that together with our theory, the Columbia Engineering experiments will provide precise mechanisms to be established and that examining how the mechanisms change for different alloys, surface structures, electrolytes, additives, should enable optimization of the electrocatalysts for water splitting (solar fuels), CO2 reduction to fuels and organic feedstocks, N2 reduction to NH3 to obtain much less expensive fertilizers, all the key problems facing society to obtain the energy and food to accommodate our exploding population.” Electrocatalysis and photocatalysis (the so-called artificial photosynthesis) are among the most promising ways to achieve effective storage for renewable energy. CO2 electroreduction has been capturing the imagination of researchers for more than 150 years because of its similarity to photosynthesis. Just as a plant converts sunlight into chemical energy, a catalyst converts electrons supplied by renewable energy into chemical energy that is stored in reduced products of CO2. In addition to its application for renewable energy, electrocatalysis technology may also enable manned Mars missions and colonies by providing fuel for the return trip and carbonaceous chemicals from the CO2 that makes up 95 percent of that planet’s atmosphere. “We expect our findings and methodology will spur work on how to make not only electrocatalytic but also photocatalytic CO2 reduction go faster and at a lower energy cost,” says Ponisseril Somasundaran, LaVon Duddleson Krumb Professor of Mineral Engineering, Department of Earth and Environmental Engineering. “In the latter case, a catalyst reduces CO2 using direct sunlight. Even though these two approaches are experimentally different, they are microscopically similar—both start with activation of CO2 upon electron transfer from a catalyst surface. At this point, I believe that both these approaches will dominate the future.” The team is now working to uncover the subsequent reaction steps—to see how CO2 is further transformed—and to develop superior catalysts based on earth-abundant elements such as copper and tin. This article has been republished from materials provided by Columbia University in the City of New York. Note: material may have been edited for length and content. For further information, please contact the cited source. Irina V. Chernyshova, Ponisseril Somasundaran, Sathish Ponnurangam. On the origin of the elusive first intermediate of CO2 electroreduction. 
Proceedings of the National Academy of Sciences, 2018; 201802256 DOI: 10.1073/pnas.1802256115.
Digital systems perform a variety of operations. Among these information processing tasks are the arithmetic operations, which include binary addition, subtraction, multiplication and division. The most common and basic arithmetic operation is the addition of two binary digits. The simplest digital circuit which performs the binary addition operation on two binary digits is called a Half Adder. The addition of three binary digits is performed by a Full Adder. What is a Full Adder: A Full Adder is the digital circuit which implements the addition operation on three binary digits. Two of the three binary digits are the significant digits A and B, and one is the carry input (C-In) bit carried from the previous, less significant stage. The Full Adder operates on these three binary digits to generate two binary digits at its output, referred to as Sum (S) and Carry-Out (C-Out). The truth table of the Full Adder is shown in the following figure: Full Adder Truth Table. The output variables Sum (S) and Carry-Out (C-Out) are obtained from the arithmetic sum of the inputs A, B and C-In. The binary variables A and B represent the significant inputs of the Full Adder, whereas the binary variable C-In represents the carry bit carried from the lower-position stage. The Sum (S) of the full adder will be 1 if exactly one of the three inputs is 1 or if all three are 1; otherwise Sum (S) will be 0. This is because the sum of two 1s in the binary number system is represented by two binary digits, with 0 in the lower position and a 1 carried out to the next significant position. The Carry-Out (C-Out) variable therefore represents the output of the Full Adder that is carried out to the next stage: the Carry-Out (C-Out) bit is 1 when any two of the inputs are 1 or when all three inputs are 1. Introduction to the Full Adder: The full adder is generally used as a component in a cascade of adders, where the circuit performs the arithmetic sum of eight-, sixteen- or thirty-two-bit binary numbers. Full Adder Boolean Expression: The Boolean expression of a digital combinational circuit represents the input-output relationship of the circuit. The Boolean expression can be used to assess the number and type of basic gates used to design the circuit, and it also represents the topology in which the digital gates are combined to create the final output. The Boolean expression of the Full Adder, along with its gate-level realization, is shown in the following figure: The Boolean expression for the full adder circuit represented in the above image is written in Sum of Products form. Note that both output bits S and C-Out are written in Sum of Products form. It is important to note in the above circuit for the Full Adder that the inputs A, B and C-In are applied to the AND gates, and the outputs of the AND gates are then applied to the inputs of the OR gate to generate the final output. It can be seen that the circuit of the full adder is designed directly from the Boolean expression: the product of the inputs is formed by the AND gates and the sum is produced by the OR gate, thus yielding the gate-level realization of the Sum of Products representation of the Boolean expression. Full Adder using NAND gates: A Full Adder can be designed in a number of ways. The Boolean expression that is represented in Sum of Products form can also be expressed in Product of Sums form, so a similar circuit can also be designed using the Product of Sums representation of the Boolean expression. 
The circuit that realizes the Boolean expression written in Product of Sums form is identical in functionality to the circuit that realizes the same Boolean expression written in Sum of Products form. Thus we can have two different circuits with an identical input-output relationship. Similarly, the Boolean expressions can be manipulated in a number of ways in order to design a circuit with the same functionality. For example, De Morgan's theorem can be used to derive multiple solutions to the same problem. The NAND and NOR gates are classified as universal gates because they can be used to implement any possible Boolean expression. The Full Adder can therefore also be designed using only NAND gates or only NOR gates. Due to this universality of the NAND gates, one does not need any other type of gate, which eliminates the use of multiple ICs. The Full Adder circuit using NAND gates, along with its Boolean expression, is shown in the following figure: Full Adder Applications: Full Adders find applications in many comparatively complex circuits that carry out complex operations. Some of the common applications of the Full Adder are listed as follows:
- Full Adders are used in the ALU (Arithmetic Logic Unit) of the microprocessor.
- Full Adders are used in Ripple Carry Adders, where they are cascaded to add n-bit numbers.
- Full Adders are also used to calculate the addresses of memory locations inside the processor.
- In some parts of the processor, full adders are also used to calculate table indices and to increment and decrement operands.
I hope this article is helpful for you. In the next article I will come up with more interesting engineering topics. Till then, stay connected, keep reading and enjoy learning.
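As a quick check of the behaviour described above, here is a minimal Python sketch of a full adder (Sum = A xor B xor C-In, Carry-Out = A·B + C-In·(A xor B)) and a ripple-carry adder built by cascading it. It is a behavioural model only, not a gate-level model of any particular IC; the printed truth table should match the one referenced in the article.

```python
# Behavioural sketch of a full adder and a ripple-carry adder built from it.

def full_adder(a, b, c_in):
    """Return (sum_bit, carry_out) for three input bits."""
    s = a ^ b ^ c_in
    c_out = (a & b) | (c_in & (a ^ b))
    return s, c_out

def ripple_carry_add(a_bits, b_bits):
    """Add two equal-length bit lists (least significant bit first)
    by cascading full adders, as in a ripple carry adder."""
    result, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result, carry

if __name__ == "__main__":
    print("A B Cin | Sum Cout")
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                s, c_out = full_adder(a, b, c)
                print(f"{a} {b}  {c}  |  {s}   {c_out}")
    # 4-bit example: 6 (0110) + 7 (0111) = 13 (1101), no final carry.
    bits, carry = ripple_carry_add([0, 1, 1, 0], [1, 1, 1, 0])
    print("sum bits (LSB first):", bits, "carry out:", carry)
```

Running it prints the eight-row truth table and adds 6 + 7 = 13 with a four-bit ripple-carry chain, matching the Sum and Carry-Out rules stated in the article.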
You are here January 28, 2020 Mother’s gut microbes may influence offspring’s immunity At a Glance - A study in mice found that a mother’s gut bacteria can induce antibodies that protect newborns against diarrheal disease. - With more research, these findings could inform the development of vaccines to better protect infants from infectious disease. For the first few weeks of life, a baby’s immune system isn’t well developed. However, a mother’s immune system offers some protection against harmful microbes even after birth. Antibodies shared through the placenta help protect a newborn from infections. By breastfeeding, mothers also share protective antibodies through their milk, boosting their child’s immune system. A research team led by Dr. Dennis Kasper of Harvard Medical School set out to understand how these antibodies—called maternal natural antibodies—may protect newborns from potentially life-threatening infections. The study, conducted in mice, was funded in part by the NIH’s National Institute of Allergy and Infectious Diseases (NIAID). Findings appeared in Nature on January 8, 2020. The researchers investigated the role of the mother’s gut microbes in producing antibodies to protect against E. coli infection. The body is teeming with billions of microbes. Mounting evidence suggests that these microbial communities, known collectively as the microbiome, may influence health and disease. The team bred newborn mice that lacked immune cells needed to produce antibodies. Some of the mouse pups were raised by mothers who also lacked the ability to make antibodies. Other pups were raised by mothers with normal immune systems. Any protective antibodies the pups received were transferred through breast milk. The scientists then exposed the mouse pups to E. coli that cause diarrheal illness. Infectious diarrhea is the second most common cause of death globally in children under age 5. The team found that maternal antibodies passed through breast milk were crucial for protection against E. coli infection. Pups raised by mothers with normal immune systems had 33-times fewer E. coli bacteria in their intestines and remained healthy. Most of the pups that had not received maternal antibodies became ill and died from the gut infection. The team found that a single species of normal gut bacteria could prompt the mother’s immune system to make a cross-reactive antibody, meaning that it provides protection against a similar pathogen—in this case, E. coli. Using fecal cultures, the researchers isolated Panteoa in the mother mouse’s gut. To demonstrate that Panteoa induced protection, the researchers immunized adult female mice with a strain of the bacteria, then infected their pups with E. coli. These pups were substantially more likely to survive E. coli infection than those born to unimmunized mothers. The researchers were also able to show how a type of antibody called IgG in the milk ingested by pups is transferred into their bloodstream via a specific IgG transporter. Taken together, the results provide insight into how a mother’s gut microbiome may lead to immunity in newborns. “Our results help explain why newborns are protected from certain disease-causing microbes despite their underdeveloped immune systems and lack of prior encounters with these microbes,” Kasper says. 
“Moreover, they raise the possibility that mothers can confer immune protection to their offspring even to pathogens that they haven't themselves encountered in the past.” —by Erin Bryant - Microbiome Influences Risk of Preterm Birth - Decoding the Variety of Human Antibodies - Probiotics not Helpful for Young Children with Diarrhea - The Healthy Human Microbiome - Innate Immune Cells Have Some Memory References: Microbiota-targeted maternal antibodies protect neonates from enteric infection. Zheng W, Zhao W, Wu M, Song X, Caro F, Sun X, Gazzaniga F, Stefanetti G, Oh S, Mekalanos JJ, Kasper DL. Nature. 2020 Jan 8. doi: 10.1038/s41586-019-1898-4. [Epub ahead of print]. PMID: 31915378. Funding: NIH’s National Institute of Allergy and Infectious Diseases (NIAID) and the European Union.
A recent NASA-funded study has shown how the hydrocarbon lakes and seas of Saturn’s moon Titan might occasionally erupt with dramatic patches of bubbles. For the study, researchers at NASA’s Jet Propulsion Laboratory in Pasadena, California, simulated the frigid surface conditions on Titan, finding that significant amounts of nitrogen can be dissolved in the extremely cold liquid methane that rains from the skies and collects in rivers, lakes and seas. They demonstrated that slight changes in temperature, air pressure or composition can cause the nitrogen to rapidly separate out of solution, like the fizz that results when opening a bottle of carbonated soda. NASA’s Cassini spacecraft has found that the composition of Titan’s lakes and seas varies from place to place, with some reservoirs being richer in ethane than methane. “Our experiments showed that when methane-rich liquids mix with ethane-rich ones – for example from a heavy rain, or when runoff from a methane river mixes into an ethane-rich lake – the nitrogen is less able to stay in solution,” says Michael Malaska of JPL, who led the study. The release of nitrogen, known as exsolution, can also occur when methane seas warm slightly during the changing seasons on Titan. A fizzy liquid could also cause problems, potentially, for a future robotic probe sent to float on or swim through Titan’s seas. Excess heat emanating from a probe might cause bubbles to form around its structures – for example, propellers used for propulsion – making it difficult to steer or keep the probe stable. The notion of nitrogen bubbles creating fizzy patches on Titan’s lakes and seas is relevant to one of the more enchanting unsolved mysteries Cassini has investigated during its time exploring Titan: the so-called “magic islands.” During several flybys, Cassini’s radar has revealed small areas on the seas that appeared and disappeared, and then (in at least one case) reappeared. Researchers proposed several potential explanations for what could be creating these seemingly island-like features, including the idea of fields of bubbles. The new study provides details about the mechanism that could be forming such bubbles, if they are indeed the culprit. “Thanks to this work on nitrogen’s solubility, we’re now confident that bubbles could indeed form in the seas, and in fact may be more abundant than we’d expected,” says Jason Hofgartner of JPL, who serves as a co-investigator on Cassini’s radar team. In characterising how nitrogen moves between Titan’s liquid reservoirs and its atmosphere, the researchers also coaxed nitrogen out of a simulated ethane-rich solution as the ethane froze to the bottom of their tiny, simulated Titan lake. Unlike water, which is less dense in its solid form than its liquid form, ethane ice would form on the bottom of Titan’s frigid pools. As the ethane crystalises into ice, there’s no room for the dissolved nitrogen gas, and it comes fizzing out. While the thought of hydrocarbon lakes bubbling with nitrogen on an alien moon is dramatic, Malaska points out that the movement of nitrogen on Titan doesn’t just move in one direction. Clearly, it has to get into the methane and ethane before it can get out. “In effect, it’s as though the lakes of Titan breathe nitrogen,” Malaska says. “As they cool, they can absorb more of the gas, ‘inhaling.’ And as they warm, the liquid’s capacity is reduced, so they ‘exhale.'” A similar phenomenon occurs on Earth with carbon dioxide absorption by our planet’s oceans. 
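The "inhale/exhale" picture above can be illustrated with a generic van 't Hoff form of Henry's law, in which a liquid's capacity to hold dissolved gas rises as it cools and falls as it warms. The sketch below shows only that general behaviour: the reference temperature is roughly Titan's surface temperature, but the dissolution enthalpy and the unit solubility are placeholder values, not measured constants for nitrogen in liquid methane or ethane on Titan.

```python
import math

# Illustrative van 't Hoff scaling of gas solubility with temperature.
# Constants are placeholders for illustration, not Titan-specific data.

R = 8.314                       # J/(mol K), gas constant
T_REF = 92.0                    # K, roughly Titan's surface temperature
DISSOLUTION_ENTHALPY = -6000.0  # J/mol, assumed exothermic dissolution

def relative_gas_capacity(temperature_k):
    """Solubility relative to the reference temperature: > 1 when colder
    (the lake 'inhales'), < 1 when warmer (the lake 'exhales')."""
    return math.exp(
        (-DISSOLUTION_ENTHALPY / R) * (1.0 / temperature_k - 1.0 / T_REF)
    )

if __name__ == "__main__":
    for t in (90.0, 92.0, 94.0):
        print(f"T = {t:5.1f} K  relative N2 capacity = {relative_gas_capacity(t):.3f}")
```

With these placeholder numbers, a two-degree cooling raises the relative capacity by roughly 20 percent and a two-degree warming lowers it by a similar amount, which is the qualitative seasonal breathing the researchers describe.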
Cassini will make its final close flyby of Titan – its 127th targeted encounter – on 22 April. During the flyby, Cassini will sweep its radar beam over Titan’s northern seas one final time. The radar team designed the upcoming observation so that, if magic island features are present this time, their brightness may be useful for distinguishing between bubbles, waves and floating or suspended solids. The flyby also will bend the spacecraft’s course to begin its final series of 22 plunges through the gap between Saturn and its innermost rings, known as Cassini’s Grand Finale. The 20-year mission will conclude with a dive into Saturn’s atmosphere on 15 September.
The National Science Foundation defines Big Data in its paper "Core Techniques and Technologies for Advancing Big Data Science & Engineering" as large, diverse, complex, longitudinal, and/or distributed data sets generated from instruments, sensors, Internet transactions, email, video, click streams, and/or all other digital sources available today and in the future. And just so we all understand the size of the Big Data phenomenon (according to Intel): from the dawn of time until 2003, mankind generated 5 Exabytes of data; in 2012 it generated 2.7 Zettabytes, roughly 500 times more data; and by 2015 it is estimated that figure will have grown to 8 Zettabytes. So how does this mass of available information affect the scientific community? Well, firstly, you can thank the scientific community for the development of Big Data, emanating as it does from Big Science. Scientists made the link between having an enormous amount of data to work with and the mathematically huge probabilities of actually finding anything useful, spawning projects such as astronomy images (planet detection), physics research (supercollider data analytics), medical research (drug interaction), weather prediction and others. It is also the scientific community that is at the forefront of the new technologies making Big Data Analytics possible and cost-effective. Major projects are underway to evaluate core technologies and tools that take advantage of collections of large data sets to accelerate progress in science, such as:
- Data collection and management (DCM) – including new data storage, I/O systems, and architectures for continuously generated data, as well as shared and widely-distributed static and real-time data
- Data analytics (DA) – with the development of new algorithms, programming languages, data structures, and data prediction tools; new computational models and the underlying mathematical and statistical theory needed to capture important performance characteristics of computing over massive data sets
- E-science collaboration environments (ESCE) – novel collaboration environments for diverse and distant groups of researchers and students to coordinate their work (e.g., through data and model sharing and software reuse, tele-presence capability, crowd-sourcing, social networking capabilities) with greatly enhanced efficiency and effectiveness for scientific collaboration; along with automation of the discovery process (e.g., through machine learning, data mining, and automated inference)
A very relevant example of a technology created to address this emerging market is the Hadoop framework, from the Apache Software Foundation. Hadoop redefines the way data is managed and analysed by leveraging the power of a distributed grid of computing resources, using a simple programming model to enable distributed processing of large data sets on clusters of computers. Its technology stack includes common utilities, a distributed file system, analytics, data storage platforms, and an application layer that manages distributed processing, parallel computation, workflow, and configuration management. This approach allows Hadoop to deliver the high availability, massive scalability and response times needed for Big Data Analytics. 
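To make the "simple programming model" mentioned above concrete, here is a minimal, single-machine sketch of the map/shuffle/reduce pattern that frameworks such as Hadoop distribute across a cluster. It is an illustration of the idea only, written in plain Python; it uses no Hadoop APIs, and the word-count example and function names are purely for demonstration.

```python
from collections import defaultdict

# Toy, single-machine illustration of the map/shuffle/reduce pattern.

def map_phase(document_id, text):
    """Emit (key, value) pairs; here, one ('word', 1) pair per word."""
    for word in text.lower().split():
        yield word, 1

def shuffle(mapped_pairs):
    """Group values by key, as the framework would between map and reduce."""
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Combine all values for a key; here, summing gives the word count."""
    return key, sum(values)

if __name__ == "__main__":
    corpus = {1: "big data needs big ideas", 2: "data drives discovery"}
    mapped = [pair for doc_id, text in corpus.items()
              for pair in map_phase(doc_id, text)]
    counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
    print(counts)   # e.g. {'big': 2, 'data': 2, 'needs': 1, ...}
```

In a real cluster, the map and reduce functions stay essentially this simple while the framework handles partitioning the data, moving the intermediate pairs, and re-running failed tasks, which is what makes the model attractive for very large data sets.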
It is also the scientific community that may well determine the direction of Big Data in the business world. In a recent briefing about Big Data given to the US Congress, Farnam Jahanian, the assistant director for Computer and Information Science and Engineering at the National Science Foundation, explained the implications of Big Data and how it will influence our thinking as we move forward: Firstly, insights and more accurate predictions from large and complex collections of data have important implications for the economy: access to this information is transforming traditional business and creating new markets. Big Data is driving the creation of new IT products and services based on business intelligence, and data analytics is boosting the productivity of firms that use it to make better decisions and identify new business trends. Second, advances in Big Data are critical to accelerating the pace of discovery in almost every science and engineering discipline. From new insights about protein structure, biomedical research, clinical decision making, and climate modelling to new ways to mitigate and respond to natural disasters and new strategies for effective learning and education, there are enormous opportunities for data-driven discovery. Finally, Big Data also has the potential to solve some of the world's most pressing challenges – in science, education, environment and sustainability, medicine, commerce and cyber and national security – with huge social benefits, laying the foundation for a nation's competitiveness for many decades to come. Big Data and its associated Analytics offer a wealth of opportunities to the scientific community to advance numerous fields of research and take the lead in one of the most significant IT trends in decades, with the potential of a paradigm shift leading to Big Science 2.
THE MOON varies in appearance throughout the lunar month as it revolves around the Earth. This animation shows in a very simplified form how the sunlight (coming from lower right) illuminates the Earth (bluish-white) and the Moon (grey). The viewpoint is fixed on the Earth and shows the moon rotating around the Earth. This rotation takes just over 27 days (a lunar month). The animation at the upper left shows approximately what the moon would look like viewed from Earth. The animations are synchronised and the two views should help make the phases of the moon understandable. The phases are full moon, gibbous moon, half moon & crescent moon. Moonlight is simply sunlight reflected from the surface of the moon. NOTES: The scales are approximately correct for the relative sizes of the Earth and moon but the distance between the two has been greatly reduced to fit within a reasonable frame. The plane of the moon orbit is, in reality, slightly tilted with respect to the plane of the Earth's orbit around the sun. The Earth and moon are locked in an orbit around each other and actually rotate about the pair's centre of gravity which is towards the surface of the Earth on the side facing the moon. This point is called the barycentre (Greek heavy centre). The moon continues to face the Earth as it orbits around, locked into synchrony by tidal (gravitational) forces. This is equivalent to the moon performing one revolution around its own axis for every complete revolution it makes around the Earth. The gravitational pull of the moon on the Earth distorts the oceans creating the tides.
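As a simple companion to the animation described above, here is a hedged sketch of how the illuminated fraction of the Moon's disc varies over the cycle of phases. It assumes a circular orbit and ignores the slight tilt of the orbital plane noted above, and it uses the synodic month of about 29.5 days, the cycle of the phases, which is slightly longer than the 27-day orbital period because the Earth is meanwhile moving around the Sun.

```python
import math

# Simplified illuminated-fraction model for the Moon's phases.
# Assumes a circular orbit and a distant Sun; purely illustrative.

SYNODIC_MONTH_DAYS = 29.53   # average time between successive new moons

def illuminated_fraction(days_since_new_moon):
    """0 at new moon, 0.5 at the half moons, 1 at full moon."""
    elongation = (2.0 * math.pi
                  * (days_since_new_moon % SYNODIC_MONTH_DAYS)
                  / SYNODIC_MONTH_DAYS)
    return (1.0 - math.cos(elongation)) / 2.0

if __name__ == "__main__":
    for day in (0.0, 3.7, 7.4, 14.8, 22.1, 29.5):
        print(f"day {day:5.1f}: {illuminated_fraction(day) * 100:5.1f}% of the disc lit")
```

Day 0 gives a dark new moon, day 7.4 a half moon, day 14.8 a full moon and day 22.1 the other half moon, matching the sequence of phases named in the passage.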
Phosphorus is the second most abundant mineral in the body, after calcium. These two essential nutrients work closely together to build strong bones and teeth. About 85% of the phosphorus in the body is found in bones and teeth, but it is also present in cells and tissues throughout the body. Phosphorus helps filter out waste in the kidneys and plays an essential role in how the body stores and uses energy. It may also help reduce muscle pain after a hard workout. Phosphorus is needed for the growth, maintenance, and repair of all tissues and cells, and for the production of the genetic building blocks DNA and RNA. Phosphorus is also needed to help balance and use other vitamins and minerals, including vitamin D, iodine, magnesium, and zinc. Most people get plenty of phosphorus in their diets; foods high in phosphorus include milk, grains, and protein-rich foods. Some health conditions, such as diabetes, starvation, and alcoholism, can cause levels of phosphorus in the body to fall. The same is true of conditions that make it difficult for people to absorb nutrients, such as Crohn's disease and celiac disease. Some medications can also cause phosphorus levels to drop, including some antacids and diuretics (water pills). Symptoms of phosphorus deficiency include loss of appetite, anxiety, bone pain, fragile bones, stiff joints, fatigue, irregular breathing, irritability, numbness, weakness, and weight change. In children, decreased growth and poor bone and tooth development may occur. Having too much phosphorus in the body is actually more common and more worrisome than having too little. Too much phosphorus is generally caused by kidney disease or by consuming too much dietary phosphorus and not enough dietary calcium. Several studies suggest that higher intakes of phosphorus are associated with an increased risk of cardiovascular disease. As the amount of phosphorus you eat increases, so does the need for calcium; the delicate balance between calcium and phosphorus is essential for proper bone density and the prevention of osteoporosis. Uses: Phosphates (phosphorus) are used medically to treat the following: hypophosphatemia (low levels of phosphorus in the body), hypercalcemia (high blood calcium levels), and calcium-based kidney stones. These conditions require a physician's care. Phosphates are also used in enemas as laxatives. Some athletes use phosphate supplements before competitions or heavy workouts to help reduce muscle pain and fatigue, although it is not clear how much this helps or whether it improves performance. Protein-rich foods, such as meat, poultry, fish, eggs, dairy products, nuts, and legumes, are good sources of phosphorus. Other sources include whole grains, potatoes, dried fruit, garlic, and carbonated beverages.
Bilingualism / Bilingual Education – directory of resources:
CREDE Research on Bilingual Education - UCSC Researchers find all students benefit from strong cognitive and academic instruction conducted in their first language
Cummins & Genzuk Report - An analysis of a longitudinal study of structured English immersion and early-exit and late-exit transitional bilingual education programs for language-minority children.
Educating Language-Minority Children - Diane August and Kenji Hakuta, Editors. A review of the current knowledge about the education of limited-English proficient students.
Improving Schooling for Language-Minority Children - Outlines the research needs in the area of instruction of limited-English proficient students.
Language Policy: Bilingualism and Intelligence - Handouts for a Language Policy college course. Somewhat cryptic, but a good introduction or review.
SEDL - Language and Diversity Program - The research activities of the Southwest Educational Development Laboratory. Program models for the education of immigrant children and second-language learners.
Teaching Indigenous Languages - An online text which examines the social and linguistic implications of teaching indigenous languages in public schools.
In this writing sentences activity, learners count the number of objects in 8 problems. Students then write sentences pertaining to the number of objects counted.
In general, countries go to war to preserve, extend or defend their territory and way of life. More specific motives include gaining access to new land or new economic resources, perpetuating or defending religious beliefs, and bringing a swift end to a political conflict. There are limited resources in the world. Some countries have more ample access to desirable natural resources than others. When one country has something another country wants or needs, the country lacking resources may seek to gain access through warfare. Land is also a limited resource, so countries seeking economic expansion may aggressively attack other countries to seize land for its resources or to develop it. Religion has long been a trigger in wars. The Crusades were among the most prominent wars that took place over religious convictions. In that instance, it was an effort by Christian groups to seize control of Holy land from Muslims. Conflicts within a country often start based on different religious or philosophical differences among the people within a nation's borders. The goal of each side is often to gain control of the government to implement desired policies. Similarly, political rhetoric or differences within countries and between countries can eventually become so tense that diplomacy is left behind in exchange for war.
- We all know the Earth spins round and round, but did you know it's slowing down? Watch this video for the answer.
- Learn more about how the earth rotates and revolves around the sun, which in turn moves around the galaxy.
- Students investigate the relationship between Earth's rotation, convection, the Coriolis Effect, and trade winds. Includes pre and post assessment with answer keys, and student investigation.
- Online student guide for Ocean Motion: Lesson 2 "Traveling on a Rotating Sphere".
- A list of student-submitted discussion questions for Rotation of Earth.
- Come up with questions about a topic and learn new vocabulary words to determine answers using an Ask, Answer, Learn table.
- Even though there are now more than 7 billion people on Earth, we can't change the planet's orbit. Shows how distance between bodies affects gravity and orbits.
- This study guide summarizes the key points of Revolutions of Earth and Rotation of Earth. You can download and customize it to suit your needs and study habits.
- This link provides images of the Earth at any chosen time from the Sun, Moon, North or South Pole.
- These flashcards help you study important terms and vocabulary from Rotation of Earth.
American History of Cemeteries and Gravestones
Death Head Gravestone
Cemeteries and gravestones allow for a look into the history and cultures of the past, establishing an important visual record and some insight into changing attitudes toward death. Gravestones are art. Art that changes over time and tells a story. Changes occur over time in content, style, materials used, and cultural trends. The choice of a particular gravestone is an important step in the burial process. Each stone and carving has a meaning. Gravestones are more than a mere design. Artwork can be seen in regard to the shape and construction of the gravestones, the materials used, and the content of the epitaphs written on them. Each stone represents the values of an era, as well as the current trends of society. At different times, wealth was also a deciding factor used in establishing the size and content of the early gravestones. Competition and creativity were prime motivators. Over time the wealth of the deceased, and the display thereof, was not as important. Unity and simplicity were more important than flamboyant displays of wealth. Cemeteries are a record of change, not only seen in the vision and content of the gravestone, but within the cemetery itself. Art is created by the manicured lawns, trees, flowers, and ornaments placed in the cemetery and near the gravestones. The stones reflected the personal preferences of the individual and displayed who was included in their family. There were visual memorials added to cemeteries, such as candles or personal items. The gravestones as well as the cemetery were made to be aesthetically pleasing. Changes of gravestone styles over time can seem extreme. One example is the winged death’s head image carved upon gravestones. These were found prior to 1760. In comparison, the prominent images seen at a later date were the angel cherub, and then the urn and willow image. All of those symbols represented the immortal soul, regardless of the extreme differences between the images. Later, a greater variety of images were seen on the stones, such as lambs, flowers, or religious symbols. Gravestones were then more personalized to an individual, or their religion and beliefs. Many symbols had the same meaning in regard to death; these corresponded to the cultural trends of the era. Epitaphs also changed over time, from displaying the demise of the individual, to commemorating their life, and then to minimal use of epitaphs at all. Not only did the visual aspects of gravestones and cemeteries change, but the outlook on death itself was altered. The virtues of Americans seemed to change. One way this change was seen was in who was buried beside whom. The location of a husband and wife together, as well as the location of their children, all played a role in seeing this change. At one time children were commonly buried with their parents; it is believed that more recently children move away and are buried at other locations. Different regions also see a variance in plot or gravestone placement, such as the wife being placed to the right of her husband. The gravestone images that were photographed at the Concord, MA cemetery followed the outline of change that was typical of the era. They often used the design of the winged death’s head image. This is seen upon the gravestones made within a similar time period of about six years. 
All three of the gravestones placed in the years 1767, 1768, and 1783 have a similar stone shape, rounded at the top with a pillar-type side design. All three are similarly carved with a winged head at the top of the stone and added flowers as decoration. Contrasting the three gravestones, all have a winged head, but the look of each varies quite a bit. The image carved into the 1767 gravestone had the death head look, similar to a winged skull. It also had a deeper carved-in look. The 1768 and 1783 gravestones had a more cherubic facial structure used to create the winged heads. These two stones were carved in a three-dimensional manner, with the head and other images protruding from the stone. When viewing gravestones created prior to the 20th century there is a visible difference in style, content, and materials used. A cemetery can teach the history of the area. According to historians, the grave markers of the past were "not intended to be a memorial, but a stark reminder of what awaited anyone that did not live a godly life" (Grave Markers). People were reminded that death was imminent, so that standards and beliefs would be followed. One could say they were used as a scared-straight reminder. Following the era of reminding others of death, gravestones were reminders of life after death. This is when angels and cherubs, along with other religious symbols, were added to the stones. Over time not only did the style and artwork of a gravestone change, but the materials used to make the stone changed. Older graves were marked with stones made of fieldstone, wood, slate, limestone, and iron. These materials have deteriorated greatly over time. More recently gravestones have been made of white bronze and granite. These are believed to be better able to withstand the natural elements of decay. When studying the past, one finds that each symbol on a gravestone has a meaning. Some examples: an anchor represents steadfast hope and life after death; the arch signifies separation from our loved ones, and supports the idea that they will be seen at some time in heaven; birds represent the soul; a cherub represents divine wisdom; a lamb signifies innocence. Images were chosen as a reminder and meaning of life and death, and to maintain the memory of the individual. Trends today have slightly changed the way gravestones are chosen. The memorial aspect is not typically added to a gravestone. Gravestones are often only a marker of an individual's resting place. Today, we celebrate life rather than death. Funerals and wakes allow for these memories to be exchanged. It is quite apparent that attitudes toward death and burial have changed over time. Gravestones can be a window into the past history of a region. Each gravestone and cemetery tells a story that could have been lost without the art and aesthetics recorded and preserved over time.
References:
ART310 Gravestones PowerPoint Presentation. (n.d.).
Grave Markers. (n.d.). Retrieved November 19, 2010, from Squidoo: http://www.squidoo.com/gravemarkers
Jensen, E. S. (n.d.). Social Commentary From the Cemetary.
G6PD Deficiency is a hereditary abnormality in the activity of an erythrocyte (red blood cell) enzyme. This enzyme, glucose-6-phosphate dehydrogenase (G-6-PD), is essential for assuring a normal life span for red blood cells, and for oxidizing processes. This enzyme deficiency may provoke the sudden destruction of red blood cells and lead to hemolytic anemia with jaundice following the intake of fava beans, certain legumes and various drugs (see a complete list of drugs and foodstuffs to avoid). The defect is sex-linked, transmitted from mother (usually a healthy carrier) to son (or daughter, who would be a healthy carrier too; see a diagram of inheritance probabilities). This is because the gene for G-6-PD is carried on the X chromosome: as stated by Ernest Beutler, M.D., "in females, only one of the two X chromosomes in each cell is active; consequently, female heterozygotes for G-6-PD deficiency have two populations of red cells: deficient cells and normal cells." The deficiency is most prevalent in Africa (affecting up to 20% of the population), but it is also common around the Mediterranean (4%-30%) and in Southeast Asia. Please note that there are more than 400 genetic variants of the deficiency. You can determine whether you are G-6-PD deficient by a simple blood test. To determine your variant, you must be tested at a specialized genetics lab.
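To make the transmission pattern described above concrete, here is a small, illustrative enumeration of the classic X-linked inheritance outcomes for a carrier mother and an unaffected father. It simply counts the four equally likely chromosome combinations; "Xd" marks an X chromosome carrying the deficient G-6-PD gene, and the labels are deliberately simplified (they ignore the rarer case of affected or symptomatic females).

```python
from itertools import product

# Punnett-square style enumeration of X-linked inheritance from a carrier
# mother and an unaffected father. Simplified and illustrative only.

MOTHER = ["X", "Xd"]   # healthy carrier: one normal X, one deficient X
FATHER = ["X", "Y"]    # unaffected father

def classify(maternal, paternal):
    if paternal == "Y":                       # child is a son
        return "affected son" if maternal == "Xd" else "unaffected son"
    return "carrier daughter" if maternal == "Xd" else "non-carrier daughter"

if __name__ == "__main__":
    outcomes = [classify(m, p) for m, p in product(MOTHER, FATHER)]
    for outcome in sorted(set(outcomes)):
        print(f"{outcome}: {outcomes.count(outcome) / len(outcomes):.0%}")
```

Each of the four outcomes occurs with probability 25%, which is why a carrier mother passes the deficiency to half of her sons on average while her daughters are, in the simple picture above, healthy carriers or non-carriers.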
Grounding the Sparks
Most electronic components are susceptible to electrostatic discharge (ESD) damage at relatively low voltage levels. Many devices are susceptible at less than 100 volts, while disk drive components have sensitivities below 10 volts. Medical devices and fiber optic components also are extremely vulnerable to the effects of static. Damage can occur at almost any time before, during and after assembly, rework, automated handling or transport. Despite ongoing efforts to curtail ESD with new technology, the age-old problem continues to affect production yields, manufacturing costs and product quality. Electrostatic discharge refers to the sudden transfer of free electrons from a negatively charged object to a positively charged object. The Electrostatic Discharge Association (ESDA, Rome, NY) defines static electricity as "an electrical charge caused by an imbalance of electrons on the surface of a material." This imbalance of electrons produces an electric field that can be measured and that can influence other objects at a distance. Electrostatic discharge is defined as "the transfer of charge between bodies at different electrical potentials." The most common way to create an electrostatic charge is through friction, or triboelectric charging. When two dissimilar materials are rubbed together and separated, a transfer of electrons from one material to the other may occur. The later release of this stored charge constitutes the electrostatic discharge, which takes place within microseconds or nanoseconds. Electrostatic charge is often created by the contact and separation of two materials. For example, a person walking across a floor generates static electricity as shoe soles contact and then separate from the floor surface. An electronic component sliding into or out of a bag, magazine or tube generates an electrostatic charge as the device’s housing and metal leads make multiple contacts and separations with the surface of the container. A significant electrostatic charge can also be generated simply by sliding a plastic-encased screwdriver across a metallic work surface.
Static Sources
The amount of static electricity that is generated depends on the material, the amount of friction and the relative humidity of the environment. Plastic generally creates the greatest static charge. For instance, thin-film plastic, commonly used to wrap parts or package components, can pose numerous ESD headaches. Low-humidity conditions, such as when indoor air is heated during the winter months, also generate significant amounts of static. When the air is humid, a thin moisture layer forms on the surface of many insulating materials, which helps electrostatic charges dissipate. Generally, anything above 30 percent relative humidity will help prevent static charges from building up. Steve Halperin, ESDA president, says electrostatic discharge can wreak havoc on electronic components. For instance, it can change the electrical characteristics of a semiconductor device, degrading or destroying it. It also may upset the normal operation of an electronic system, causing equipment malfunction or failure. Another problem caused by static electricity occurs in clean rooms. Charged surfaces can attract and hold contaminants, making removal from the environment difficult. When attracted to the surface of a silicon wafer or a device’s electrical circuitry, these particulates can cause random wafer defects and reduce product yields. 
According to Halperin, who also runs a Bensenville, IL-based consulting company, ESD occurs far more frequently than most people realize. In fact, many ESD events go unnoticed because static electricity is invisible. Indeed, most electrostatic discharges pass unnoticed by assemblers. Halperin says ESD damage is generally not visible as it occurs. It may be latent or not show up in functional testing of electronic devices. More than 75 percent of all ESD is generated by the human body. The routine movement of an assembler sitting at a workbench can generate as much as 6,000 volts of static. But, in order for humans to feel static electricity, the discharge typically must be 3,000 to 4,000 volts. "While you can feel electrostatic discharges of 3,000 volts, smaller charges are below the threshold of human sensation," says Dave Bermani, corporate marketing coordinator at Desco Industries Inc. (Chino, CA). "Unfortunately, smaller charges can and do damage semiconductor devices. Many CMOS [complementary metal oxide semiconductor] components can be damaged by charges of less than 1,000 volts. Some of the more sophisticated components can be damaged by charges as low as 10 volts."
Expensive Dilemma
The cost of ignoring ESD can be staggering. One little "shock" can destroy a component—or worse, significantly shorten its life and result in a field failure. Up to 25 percent of all electronic product failures can be attributed to ESD. Electronics manufacturers lose millions of dollars annually due to ESD damage. "The cost of damaged devices themselves ranges from only a few cents for a simple diode to several hundred dollars for complex hybrids," claims Halperin. Those numbers multiply when costs associated with repair and rework, labor, supplies, equipment and lost production time are factored in. The damage done by ESD takes two forms: catastrophic failure and latent failure. Catastrophic failure renders a component or circuit card instantly defective. An ESD event may cause a metal melt, junction breakdown or oxide failure. Even at ESD voltages of less than 200 volts, gate oxide destruction can occur in a semiconductor, completely changing its electrical characteristics. Low ESD voltages can also cause junction failure where the bonding wire attaches to the package leads. Basic quality and performance tests usually detect these failures long before product shipment. In this case, swapping out the damaged part or card immediately rectifies the problem. However, if damage occurs after testing, it will often go undetected until the device fails in operation. Latent failures are less obvious and are more difficult to identify. A device exposed to ESD may be partially degraded, yet continue to perform its intended function throughout the testing process. Latent defects occur when ESD weakens or wounds a component to the point where it will still function correctly during test and inspection. But, the component may cause poor system performance or complete system failure once the product gets shipped to the customer and placed in the field. Latent defects are difficult to detect by any process other than examining parts under an electron microscope. There are two basic types of ESD models: the human body model (HBM) and the charged device model (CDM). The human body is one of the most common and most damaging sources of electrostatic discharge. The HBM simulates what happens when a discharge occurs between a human body part, such as a hand or a finger, and a conductor, such as a metal rail.
The HBM typically occurs when somebody touches a part and zaps it. These types of zaps occur pin-to-pin from the package pins and are protected against via the input-output cells. All parts should be able to tolerate 2,000 volts of HBM zaps. To test for HBM static in electronic components, "a charged 100pF capacitor is discharged into the device via a 1,500 ohm resistor," says Jeremy Smallwood, Ph.D., president of Electrostatic Solutions Ltd. (Southampton, UK). "The 100pF capacitor simulates charges stored on the average human body, and the resistor simulates the resistance of the human body and skin." The CDM-type zaps occur more commonly in automated assembly lines. When a part sits on a charged plate, it builds a charge and then gets zapped when a machine, such as a robotic arm, handles it. Because the device builds up a charge internally, all internal structures are at risk. It is more difficult to protect against. Components should be capable of tolerating 200 to 500 volts. A third type of model, the machine model (MM), simulates ESD events occurring during automatic handling operations. It originated in Japan as a worst-case HBM.
Smaller Packaging
Although today's electronic devices are faster, cheaper and more powerful than just a few years ago, the risk of ESD is greater than ever. The drive for miniaturization has reduced the width of electronic device structures, but increased ESD susceptibility. "The number of ESD-related incidents has gone up considerably in the last 5 years," claims Phil Baratti, manager of applications engineering for factory automation and robotics at Epson America Inc. (Carson, CA). "It's definitely increasing. Components are becoming more and more susceptible to static." According to Baratti and other observers, the ongoing trend toward smaller and smaller packaging is the culprit. "When you make things smaller, they're inherently more delicate," notes Baratti. "The increasing sophistication of electronic devices has continued to make them more and more susceptible to ESD-related damage," adds Desco's Bermani. "As the size of the components is reduced, so is the microscopic spacing of insulators and circuits within them, increasing their sensitivity to ESD. "Typically, surface-mount devices have much smaller architecture, making them more susceptible to ESD than through-hole packaged devices," explains Bermani. "The width of the circuitry conductors is as small as 0.1 micrometer. To pack more and more circuitry into small packages, the spacing isolating circuitry has been reduced and can be as little as 300 micrometers. "For integrated circuit packaging, the I/O count has climbed from 600 to more than 1,000," Bermani points out. "The spacing between the I/Os has decreased dramatically. Where wire bonding is used, the air gap becomes that much smaller, making the neighboring I/Os even more susceptible to ESD." Today's operating voltages of as little as 1.5 volts and chip-set traces measuring only 400 angstroms in width contribute to this vulnerability. "As component technology progresses, internal device sizes reduce and become more ESD sensitive," says Electrostatic Solutions' Smallwood. "Many modern components are protected by on-chip protection circuits, without which they would be extremely sensitive." In most cases, the design goal is to increase the amount of ESD voltage that the device can withstand. In some cases, this goal cannot be met for various reasons.
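The HBM test values quoted above (a 100 pF capacitor discharged through a 1,500 ohm resistor) can be turned into a rough back-of-the-envelope model. The sketch below assumes a simple exponential RC discharge, which is only an illustration and not the full waveform defined in the ESDA test standards; the function and variable names are invented for the example.

```python
import math

# Values quoted in the article for the human body model (HBM) test.
C = 100e-12   # capacitance in farads (100 pF)
R = 1_500.0   # resistance in ohms (1,500 ohms)

def hbm_current(v0: float, t: float) -> float:
    """Discharge current in amps, t seconds after a body charged to v0 volts
    touches the device, assuming a simple exponential RC decay."""
    return (v0 / R) * math.exp(-t / (R * C))

if __name__ == "__main__":
    v0 = 2_000.0                 # the 2,000 V HBM tolerance mentioned above
    print(f"peak current:  {v0 / R:.2f} A")
    print(f"time constant: {R * C * 1e9:.0f} ns")
    for t_ns in (0, 150, 500, 1000):
        print(f"t = {t_ns:4d} ns -> {hbm_current(v0, t_ns * 1e-9):.3f} A")
```

Running it for a 2,000 volt body charge gives a peak current of roughly 1.3 amps and a time constant of about 150 nanoseconds, which helps explain why discharges far below the threshold of human sensation can still damage sensitive parts.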
"There is often a tradeoff between ESD protection and device performance," Smallwood points out. Control ToolsIn addition to the trend toward smaller architectures, lean manufacturing initiatives and ISO 9000 certification have sparked growing concern over ESD. Fortunately, the pesky problem of electrostatic discharge can be controlled and monitored. Manufacturing engineers have numerous tools to choose from, ranging from inexpensive wrist straps to expensive ionizers. "However, that range of options someArial causes engineers to do an overkill, instead of identifying hot spots," says Baratti. Some engineers make the mistake of not controlling ESD from start to finish. It’s important to examine the entire manufacturing cycle, from receiving and storage to parts kitting and assembly to testing and packaging. "Assembly lines should be as diligent with their ESD control program as hospital operating rooms are in implementing sterilization procedures," warns an electronics industry veteran. A wide variety of products are available to control ESD in production environments. Preventative measures include the use of wrist and heel straps, floor and table mats, static-dissipative lab coats and ESD-shielding bags. Static-dissipative chemicals, such as hand lotion and floor finish, are also available. Products that control ESD work by charge prevention, grounding, shielding and neutralization. All static control products function by: reducing charge accumulation; providing a path for the static charge to move away from sensitive components; or shielding components from static fields or charges. The word "antistatic" is often used in a generic sense to describe the full range of static control materials and products. But, this term has been misused and misapplied, resulting in a great deal of confusion between suppliers and end users. A new term, "low charging," has replaced the word antistatic. It refers to the low static charge generation between surfaces that contact and separate. A material that inhibits the generation of static charges from triboelectric generation is classified as low charging. A low charging material can be conductive, dissipative or insulative. Only conductive or dissipative materials should be used in ESD safe areas. Three types of materials can be used for static control: conductive, dissipative and insulative. These material properties govern what happens after the material is charged. Conductive materials allow electrons to move freely across their surface or through their volume. Dissipative materials allow electrons to move more slowly. Insulative materials do not allow electrons to move. A conductive material has a surface resistivity of less than 1x105 ohms per square centimeter (ohms/cm2). A dissipative material typically has a surface resistivity greater than 1x105 ohms/cm2, but less than 1x1012 ohms/cm2. Anything with a surface resistivity greater than 1x1012 ohms/cm2 is considered insulative. One common misconception is that conductive materials do not generate charges. This is because the dissipation of static charges from grounded conductive material tends to be complete and rapid. Ungrounded conductors can generate and hold static charges. Human InterventionStatic electricity can be conducted to an assembly through the human body or a machine. However, Smallwood says the most common cause of damage usually occurs through the direct transfer of electrostatic charge from the human body to the ESD-sensitive device. 
Because many problems arise from ungrounded operators, most preventative measures, such as wrist straps, focus on this channel of transference. A wrist strap connects the wearer to the ground. Wrist bands are typically made from elastic nylon fabric that has conductive fibers on the inside surface. These conductive fibers contact the skin, and the band is connected to ground with a coiled cord. One end of the cord snaps to the wrist band; the other end plugs into a ground point. Special ESD chairs and workstations are available. Every workbench should have a grounded, dissipative work surface, a common point ground or continuous monitor with banana jacks for grounding wrist straps and a ground cord connected to the common point ground or continuous monitor. Electrically conductive or dissipative floor mats conduct a charge when grounded. Mats are typically made from either vinyl or rubber and can be homogeneous or multilayered. Rubber mats have good chemical and heat resistance, but vinyl tends to be more cost-effective. Ionizers allow assemblers to dissipate static charges from any insulating materials quickly and easily. They neutralize a static charge on the surface of nonconductive materials by blowing air filled with an equal number of negative and positive ions across the material. Ionizers are available in many different sizes and configurations, including blowers and static bars. Monitors test wrist straps, floor mats and other protective products continuously. They sound an alarm if there is a problem. Resistance meters check a material's ability to move a charge across the surface. They can be used to check packaging, flooring and work surfaces. Despite environmental efforts to minimize static on assembly lines, if screwdrivers and other production tools lack sufficient grounding, electronic products will still be at risk. For instance, an operator may be totally grounded with wrist and shoe straps, but if he brings a sensitive component or circuit card in contact with an ungrounded electric screwdriver, ESD damage can occur. "Electrostatic charges can build up on most fastening tools," claims Gordon Wall, Ph.D., development manager for electronic products at Mountz Inc. (San Jose, CA). "Unless those charges are dissipated or prevented, damage to the components can result." Screwdrivers and other electric fastening tools should have an uninterrupted ground path from the bit to the power outlet. To maximize the path for dissipating static charges, Wall says the resistance between the part to be fastened and earth-ground should be less than 1 ohm. To achieve that, the tool housing should be made of conductive nylon or another conductive material. The transformer should also be fully grounded. Industrial robots and other automated assembly equipment also are susceptible to electrostatic discharge. However, Epson's Baratti claims that these devices have a finite number of variables that can generate static. By comparison, he says a human operator has an infinite number of variables that must be studied and treated. According to Baratti, manufacturing engineers should check for any equipment hot spots that generate excessive static. "Any plastic components that are constantly moving, such as cable ducting, can generate unwanted static," he points out. The motor assembly should be built to dissipate charges. "A key identifier for an ESD-compliant robot is its chrome cover," says Baratti. "A special coating allows it to ground out." Baratti also recommends using grippers that are nickel-plated.
"Anodized grippers don’t have the correct ground path," he claims. Industry StandardsUntil recently, no standards existed to help manufacturing engineers determine acceptable ESD levels. "Each customer had their own limits for determining how much static was acceptable, and their own methods for testing static discharge," notes Baratti. Every manufacturer had a different set of requirements. But, that lack of consistency is beginning to change. With the recent release of the ANSI/ESD S 20.20 standard, the electronics industry now has a consistent set of rules for the development of all ESD control programs. The standard was developed by ESDA in response to a request from the U.S. Department of Defense to prepare an ESD process standard to replace MIL-STD-1686. The document covers all elements of a static control program, rather than concentrating on individual components such as work surfaces. It has two major sections. An administrative section outlines all the documentation, training and process requirements. A technical section outlines the electrical and mechanical requirements, such as grounding, protected areas and packaging. Both administrative and technical provisions can be tailored to specific applications and requirements. The standard is flexible enough to allow companies with established ESD programs to adopt it. And, to reduce duplication of process controls, it has requirements acceptable to both OEMs and contract manufacturers. The 20.20 standard provides a consistent set of rules for everyone in the industry to follow. They help alleviate disagreements about test methods, performance attributes and material specifications. "It levels the playing field between all OEMs and contract manufacturers," says David Swenson, technical service specialist for the electronic handling and protection division of 3M Co. (Austin, TX). "Now everyone has one agreed-to static control plan. And that’s very good for the industry because it’s going to save everyone a lot of time and money."
The brainstem serves multiple vital functions: for example, it acts as the control centre for consciousness, and the rhythm of breathing and the heartbeat are controlled from here. Almost all the information transmitted to and from the rest of the body passes via the brainstem, and it serves as an important relay station along the route. It consists of three parts: the midbrain, the pons and the medulla oblongata. Ten pairs of cranial nerves emerge from the brainstem, controlling, amongst other functions, eye movements, facial sensation, hearing, balance, taste and swallowing. The midbrain is directly connected to the cerebral hemispheres by two promontories on its superior aspect: the cerebral peduncles. Through the peduncles flow the nerve fibres transmitting information to and from the higher centres. The pons was so named by the Italian anatomist Costanzo Varolio because when viewed in axial cross section (e.g. looking up from the feet as one would on a CT or axial MR scan) its shape resembles a Venetian bridge (Latin: pons). The pons is an important relay station to the cerebellum, which controls muscle co-ordination and attaches to the back of the brainstem via three cerebellar peduncles. Finally, before the brainstem becomes the spinal cord, is found the medulla oblongata, usually referred to simply as the medulla. Within the medulla are groups of cell nuclei controlling involuntary activities such as breathing, the heartbeat and the muscle tone in the walls of blood vessels. It is not possible to describe all the detailed anatomy of the brainstem here, but it is hopefully clear that a great deal of important brain tissue is gathered together in close proximity. Furthermore, because there are multiple connections with every part of the nervous system, an injury in this region may have far-reaching consequences; this is the reason that bleeding from cavernomas in the brainstem is more likely to produce profound and lasting symptoms and disabilities than bleeding from their lobar cousins. Approximately 20% of cavernous malformations are located in the brainstem. It is approximately four times more likely that a brainstem cavernoma will present with symptomatic bleeding compared to lobar cavernomas, and the estimated risk of a first-ever haemorrhage is approximately 8% per five years. It is difficult to avoid producing symptoms from even small amounts of bleeding in such a functionally rich environment. For the same reason, microsurgery to remove a brainstem cavernoma is more likely to result in worsened symptoms, even after complete removal of the malformation. A useful rule of thumb is to imagine even a successful operation to remove the cavernoma as having similar consequences to the next symptomatic haemorrhage the cavernoma could have caused. If treatment of a brainstem cavernoma is decided upon, then one hopes that the risk of subsequent haemorrhages will be removed. Surgery is most feasible when the cavernoma presents itself to the surface of the brainstem. If it does not, there are limited options for gaining access to the interior of the brainstem surgically, and doing so requires the surgeon to traverse functioning nervous tissue as gently as possible. It is almost impossible to produce no disturbance of function, but with very careful planning and modern microsurgical techniques some brainstem cavernous malformations may be removed, and the patient may recover to their pre-operative level of function over time. The risks of such a procedure are dictated by the location and the planned route to the target.
Your surgeon will wish to discuss the possible implications for you as an individual before proceeding. Happily, these are almost always planned procedures, and there should be plenty of time to address any questions you might have.
The Deep Nuclei
The basal ganglia are a group of subcortical nuclei that contribute to a wide variety of functions including voluntary movements, emotional reactions and many others. Not all of their functions are completely understood. They are intimately related to the limbic system (white matter structures important in laying down memory and modulating emotion), the corticospinal tracts (initiating voluntary movements) and the thalami and spinothalamic tracts (relaying sensation to the higher centres). Although anatomically not part of the brainstem, these deep nuclei are equally susceptible to injury from small amounts of bleeding and, from a therapeutic perspective, pose similar challenges. There are deep nuclei in the cerebellum as well, and lesions here may affect not only co-ordination of movement but also mood, emotion and speech.
Radiosurgery for Brainstem Cavernoma
Given the difficulties in reaching cavernomas in these locations, radiosurgery has been investigated as an alternative treatment to surgical removal. Early attempts at treatment were dogged by complications, possibly because the radiation doses prescribed were more appropriate to the more metabolically active cancers that had previously been the radiosurgeon's therapeutic target. The topic has been revisited for cavernomas deemed "inoperable" using lower radiation prescriptions. This has significantly reduced the complications of treatment, although controversy still exists as to its efficacy. The radiosurgery does not, in general, cause the cavernoma to disappear. It may shrink slightly or not alter in appearance. Some series suggest that the rate of repeated haemorrhage is reduced after radiosurgical treatment, although the pre-treatment calculated rates of haemorrhage are often higher than the rates observed in population studies. As the number of "events" (rebleeding) in absolute terms is relatively small, it is difficult to be very confident at this stage that radiosurgery significantly improves on the untreated course of the disease. It is an area in need of further research.
The first topic you treat in freshman physics is showing how a ball shot straight up out of the mouth of a cannon will reach a maximum height and then fall back to Earth, unless its initial velocity, known now as escape velocity, is great enough that it breaks out of the Earth's gravitational field. If that is the case, its final velocity is, however, always less than its initial one. Calculating escape velocity may not be very relevant for cannon balls, but certainly is for rocket ships. The situation with the explosion we call the Big Bang is obviously more complicated, but really not that different, or so I thought. The standard picture said that there was an initial explosion, space began to expand and galaxies moved away from one another. The density of matter in the Universe determined whether the Big Bang would eventually be followed by a Big Crunch or whether the celestial objects would continue to move away from one another with ever-decreasing speed. In other words, one could calculate the Universe's escape velocity. Admittedly, the discovery of Dark Matter, an unknown quantity seemingly five times as abundant as known matter, seriously altered the framework, but not in a fundamental way, since Dark Matter was after all still matter, even if its identity is unknown. This picture changed in 1998 with the announcement by two teams, working independently, that the rate of the Universe's expansion was increasing, not decreasing. It was as if freshman physics' cannonball miraculously moved faster and faster as it left the Earth. There was no possibility of a Big Crunch, in which the Universe would collapse back on itself. The groups' analyses, based on observing distant exploding stars of known luminosity, type 1a supernovae, were solid. Science magazine dubbed it 1998's Discovery of the Year. The cause of this apparent gravitational repulsion is not known. Called Dark Energy to distinguish it from Dark Matter, it appears to be the dominant force in the Universe's expansion, roughly three times as abundant as its Dark Matter counterpart. The prime candidate for its identity is the so-called Cosmological Constant, a term first introduced into the cosmic gravitation equations by Einstein to neutralize expansion, but done away with by him when Hubble reported that the Universe was in fact expanding. Finding a theory that will successfully calculate the magnitude of this cosmological constant, assuming this is the cause of the accelerating expansion, is perhaps the outstanding problem in the conjoined areas of cosmology and elementary particle physics. Despite many attempts, success does not seem to be in sight. If the cosmological constant is not the answer, an alternate explanation of the Dark Energy would be equally exciting. Furthermore, the apparent present equality, to within a factor of three, of matter density and the cosmological constant has raised a series of important questions. Since matter density decreases rapidly as the Universe expands (matter per volume decreases as volume increases) and the cosmological constant does not, we seem to be living in that privileged moment of the Universe's history when the two factors are roughly equal. Is this simply an accident? Will the distant future really be one in which, with Dark Energy increasingly important, celestial objects have moved so far apart so quickly as to fade from sight? The discovery of Dark Energy has radically changed our view of the Universe.
Future, keenly awaited findings, such as the identities of Dark Matter and Dark Energy, will do so again.
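For readers who want to see the freshman-physics calculation mentioned at the start of this piece worked out, the sketch below computes escape velocity from v = sqrt(2GM/R). The constants are standard published values for Earth; the function itself is just an illustration and is not part of the original essay.

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
EARTH_MASS = 5.972e24    # kg
EARTH_RADIUS = 6.371e6   # m

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    """Speed needed to coast away from a body's gravity, ignoring air resistance."""
    return math.sqrt(2 * G * mass_kg / radius_m)

print(f"Earth escape velocity: {escape_velocity(EARTH_MASS, EARTH_RADIUS) / 1000:.1f} km/s")
# -> about 11.2 km/s, which is why the calculation matters for rockets rather than cannon balls
```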
Do-it-yourself marble run
Predicting how objects roll, fall and trigger each other to move may have more to do with our basic instincts than the laws of probability. Watch this trick and understand why toddlers are excited by something as simple as a rolling ball. Rolling a marble down a maze into dominoes that fall on a seesaw to release a toy car sounds like a simple plan. But it takes some serious tinkering. It's also a great way for toddlers to exercise their imagination and problem-solving skills. And best of all, it's free!
1. Raid the toy box and rummage around the house for five minutes and you'll find plenty of interesting objects for a marble run. Make a plan and test each stage individually before moving on to the next section. Note: use a ping-pong ball for tiny tots instead of marbles.
2. Stick cardboard tubes, wooden spoons, spatulas or rulers (or all of the above) onto a cardboard ramp to guide a ball as it descends. Use books or blocks to prop the cardboard up so it becomes an inclined plane.
3. If you don't have dominoes, use toy blocks, DVD or CD cases or old audio cassettes. If you're feeling adventurous, build some stairs for the dominoes to climb. Just make sure the ball hits the first one hard enough to knock it over.
4. A few pens under a ruler make a simple seesaw. Park the rear wheels of a toy car over the spine of a large, open hardcover book. With a bit of fiddling, a domino falling on the other end of the ruler will release the car so it rolls off the book.
5. If the domino is too light to lift and release the car, stick some play dough under the raised end of your ruler. Every stage will take lots of tweaking, adjusting and resetting but that's what makes the end result so exciting.
6. Cut a door into a takeaway container to make a garage for the car.
7. Don't be disappointed if your contraption doesn't work perfectly the first time but be prepared for lots of noise and excitement when it does! Then do it again, and again, and again!
What's going on?
The joy of watching objects rolling, falling and triggering each other to move hints at our basic instinct about the mathematical laws of probability. And the fact toddlers get excited about this too suggests they might know more about chance and probability than you'd think. Think of your facial expression as a display for the probability calculator in your brain. The smaller the probability of an event, the more surprised you'll be when it happens. When someone kicks a ball, we hardly raise an eyebrow. But if the ball hits a tree, bounces off a picnic table and lands in a bin, our faces light up in amazement. What were the chances? Individually, ordinary events don't seem very amazing. A ball bouncing off a tree or bouncing off a table or falling in a bin… whoopy? But link them together in series where each depends on the last and people start cheering. Why is that? Let's say the probability of the desired outcome of each of four individual events is 50 per cent. If each event depends on the previous event's outcome, the probability of the first two desired outcomes occurring is 25 per cent (0.5 x 0.5 = 0.25). For the third desired outcome, you're down to 12.5 per cent (0.5 x 0.5 x 0.5 = 0.125); and the probability of all four occurring is just over six per cent (0.5 x 0.5 x 0.5 x 0.5 = 0.0625). When you realise that the probability of the desired outcome in a chain of events gets progressively smaller, you can see why a simple little marble run is fun to watch. But here's the interesting thing.
The probability of a marble setting off the chain of events it was designed and tweaked to cause is nearly 100 per cent. It's really not that amazing, so the probability calculator in your brain must be subconscious, because you still can't help but get excited. Feeling sceptical? Build your own marble run and you can almost feel your brain calculating the probability of each new stage. When the ball rolls down the ramp and hits the first domino, you grin. When the last domino falls on the seesaw, you're smiling from ear to ear. As the events unfold, your brain is multiplying the probability of them occurring in that order. By the time the car has rolled down the book, along a table and through the tiny door into the garage, you're jumping around cheering at your amazing success. None of this happened by chance but your brain's subconscious probability calculator clearly thinks it did. Toddlers react the same way but, admittedly, they are anticipating the result, which probably helps. But they won't high-five you just because a car rolled down a book. And there is evidence that children as young as four have a better grasp of probability than their cute little faces would have you believe. But perhaps the excitement can be explained by another strange infatuation of ours. Americans call overly elaborate contraptions that perform otherwise simple tasks Rube Goldberg machines. Rube Goldberg (1883–1970) was a famous cartoonist who drew comically over-engineered inventions that performed mundane jobs, like operating a napkin or pouring tea. Goldberg's drawings inspired Purdue University to stage the first annual Rube Goldberg Machine competition in 1949 and it's been running ever since with spin-offs appearing around the USA. You can watch videos of the winning entries complete with commentary. (Goldberg became president of the National Cartoonists Society in 1948 and the Reuben Award for "outstanding cartoonist of the year" is named in his honour.) Whatever it is, advertising agencies, pop bands and film makers seem to know how much we love watching bizarre collections of odds and ends falling, rolling and toppling over in a choreographed sequence. Perhaps part of the appeal is the obvious fact that somebody has clearly spent hours building these funny, pointless contraptions. But if you're not convinced by any of this, try it yourself and when it finally works, you'll discover that nobody gets more excited than the masterminds who built the whole thing.
Published 21 February 2012
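For anyone who wants to check the arithmetic in the article, the short sketch below multiplies the per-stage probabilities together. The 50 per cent figure is the article's illustrative assumption, not a measured value, and the function name is invented for the example.

```python
def chain_probability(p_per_stage: float, stages: int) -> float:
    """Probability that every one of `stages` events turns out as desired,
    assuming each has the same chance of success."""
    return p_per_stage ** stages

for n in range(1, 5):
    p = chain_probability(0.5, n)
    print(f"{n} stage(s): {p:.4f} ({p * 100:.2f} per cent)")
# 1 stage: 0.5, 2 stages: 0.25, 3 stages: 0.125, 4 stages: 0.0625 (just over six per cent)
```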
Shakespeare's literary career, which spanned a quarter century roughly between the years 1587 and 1612, came at a time when the English language was at a powerful stage of development. The great fluidity of Early Modern English gave Shakespeare an enormous amount of room to innovate. In all of his plays, sonnets and narrative poems, Shakespeare used 17,677 words. Of these, he invented approximately 1,700, or nearly 10 percent. Shakespeare did this by changing the part of speech of words, adding prefixes and suffixes, connecting words together, borrowing from a foreign language, or by simply inventing them, the way a rapper like Snoop Dogg has today. (Another exemplary instance is the way HBO's series The Wire has integrated slang into our contemporary vernacular.)
What's the Big Idea?
In the past, most brain experiments would involve the study of defects, and use a lack of health in the brain to show what it can do. Professor Philip Davis from the University of Liverpool's School of English is approaching brain research in a different way. He is studying what he calls "functional shifts" that demonstrate how Shakespeare's creative mistakes "shift mental pathways and open possibilities" for what the brain can do. It is Shakespeare's inventions--particularly his deliberate syntactic errors like changing the part of speech of a word--that excite us, rather than confuse us. With the aid of brain imaging scientists, Davis conducted neurolinguistic experiments investigating sentence processing in the brain. The experiments showed that when people are wired up, they have different reactions to hearing different types of sentences. One type of measured brain response is called an N400, which occurs 400 milliseconds after the brain experiences a thought or perception. This is considered a normal response. On the other hand, a P600 response indicates a peak in brain activity 600 milliseconds after the brain experiences a quite different type of thought or perception. Davis describes the P600 response as the "Wow Effect," in which the brain is excited, and is put in "a state of hesitating consciousness." It should be no surprise that Shakespeare is the master of eliciting P600s, or as Davis told Big Think, Shakespeare is the "predominant example of this in Elizabethan literature." But how is poetic language different from normal language? Consider these examples, in which Shakespeare grammatically shifts the function of words:
An adjective is made into a verb: 'thick my blood' (The Winter's Tale)
A pronoun is made into a noun: 'the cruellest she alive' (Twelfth Night)
A noun is made into a verb: 'He childed as I fathered' (King Lear)
As Davis's experiments have shown, instead of rejecting these "syntactic violations," the brain accepts them, and is excited by the "grammatical oddities" it is experiencing. While it has not been fully proven that we can localize which parts of the brain process nouns as opposed to verbs, Davis says his research suggests that "in the moment of hesitation" brought on by the stimulative effects of functional shift, the brain doesn't know "what part to assign the word to."
What is the Significance?
For Davis, we need creative language "to keep the brain alive." He points out that so much of our language today, written in bullet points or simple sentences, falls into predictability. "You can often tell what someone is going to say before they finish their sentence," he says. "This represents a gradual deadening of the brain."
Davis also speaks of the possible applications of his research in other fields, such as treating dementia. "My hope is that we find ways to treat depression and dementia by reading aloud to patients." And yet, Davis is a literary scholar first and foremost. He argues the heightened mental activity found in the brain responses to his experiments may be one of the reasons why Shakespeare's plays have such a dramatic impact on readers and audiences. What is at the heart of Shakespeare, he says, is the poet's "lightning-fast capacity" for forging metaphor that created "a theater of the brain." Short of placing multiple electrodes on your scalp, simply read the four sentences below, and ask yourself which one you like best.
1. A father and a gracious aged man: him have you enraged
2. A father and a gracious aged man: him have you charcoaled.
3. A father and a gracious aged man: him have you poured.
4. A father and a gracious aged man: him have you madded.
If the experiment worked, here is how the results should have played out: The first sentence should elicit a normal brain reaction. The brain recognizes that the sentence makes sense; unlike the third line ("poured"), which the brain rejects. The second line ("charcoaled") measures both N400 and P600 responses, because it violates both grammar and meaning, and is gibberish. The fourth line is an example of functional shift, which is found in King Lear. Your brain is now thinking like Shakespeare. Follow Daniel Honan on Twitter @DanielHonan
July 2011 / Volume 63 / Issue 7| The basics of servomotor axis control By Dr. Scott Smith, University of North Carolina at Charlotte Precise control of axis position is an enabling technology for automatic machine tools. How does it work? Several ways exist to control axis position, but one of the easiest to understand is a permanent-magnet DC servomotor. The basic element is a DC motor, in which there are loops of wire on an armature that is free to rotate on bearings. The armature is in the presence of a magnetic field created by permanent magnets. When a voltage is applied to one of the loops, a torque is created, which causes the armature to rotate (Figure 1). As the armature rotates, the torque decreases, but then the voltage is switched to the next loop of wire. The contacts between the loops of wire and the voltage source are maintained by conductive brushes, and the switching between loops is called “commutation.” The rotating armature of the DC motor is connected to a screw. As the screw rotates, it drives a nut attached to the machine tool table, causing the table to slide along the guide way. The torque created by the applied voltage is used to accelerate the inertia of the motor and screw, and to overcome the friction and load torques. A DC motor in this configuration—called “open loop”—has a rotational speed that is sensitive to the load torque. It directly “sees” the inertia of the rotating armature and screw and the friction in the bearings. It also sees the table mass and the cutting forces, but only indirectly through the screw. To make the DC motor less load sensitive, it is common to measure the rotational speed, compare it to the desired speed and adjust the command voltage to the DC motor based on the difference. The rotational speed of the motor can be measured with a small generator attached to the rotating shaft. This is called a “tachogenerator.” The tachogenerator produces a voltage proportional to the rotational speed of the shaft; it is like a DC motor, but backwards. The measured voltage is compared to the input voltage, and the small difference is amplified and passed to the DC motor (Figure 2). In this configuration, the DC motor has a velocity feedback and is said to be under velocity control. This trick makes the DC motor much less sensitive to the external load. End users obviously want to control axis position in a machine tool—not just speed. They need a device to measure position, and also feed that back to compare against the commanded position. A common device for this purpose is a rotary encoder. One design consists of two transparent discs with some radial lines marked on them, a light source and a light sensor. One disc rotates with the DC motor armature, and one does not. The light shines through the discs, and lots of light gets through when the lines are aligned, and the light sensor sees a bright signal. When the lines are not aligned, much of the light is blocked, and the sensor sees a dark signal. As the shaft rotates, the sensor sees a series of bright and then dark pulses, which indicate table position. The commanded table position is also converted into pulses, or “counts,” by a piece of software called “the interpolator.” Based on a desired position and the linear velocity of the table, the interpolator spits out counts at a certain rate. The counts from the encoder are subtracted as the table moves toward the desired position. 
A DC motor in this configuration is said to be under positional control, or is called a "positional servomechanism." The difference between the desired and actual position—the "to go" distance—is converted into a voltage, which is fed from the outer position loop to the inner velocity loop (Figure 3). There are many design considerations and variations of this concept. For example, if the amplifier is more powerful, then positioning of the axis is less sensitive to loads and disturbances. However, if the amplification is too high, the control loop becomes unstable, similar to feedback in a microphone, and the axis will vibrate back and forth. It is interesting to note that for the axis to move, there must be an error in position. The difference between the commanded and actual positions creates the voltage that causes the motor to move.
About the Author: Dr. Scott Smith is a professor and chair of the Department of Mechanical Engineering at the William States Lee College of Engineering, University of North Carolina at Charlotte, specializing in machine tool structural dynamics. Contact him via e-mail at [email protected].
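To make the cascaded loops described in the column concrete, here is a minimal discrete-time sketch: an outer position loop turns the "to go" distance into a velocity command, and an inner velocity loop drives a very simple first-order motor model. The gains, time step and motor model are illustrative assumptions, not values from the column or from any real controller.

```python
DT = 0.001        # simulation time step, seconds
KP_POS = 20.0     # position-loop gain: position error -> velocity command
KP_VEL = 5.0      # velocity-loop gain: velocity error -> motor drive
MOTOR_GAIN = 2.0  # crude lumped motor/amplifier gain
FRICTION = 0.5    # crude viscous friction on the axis

def simulate_move(target_mm: float, steps: int = 3000) -> float:
    """Step the two nested loops forward in time and return the final position."""
    position, velocity = 0.0, 0.0
    for _ in range(steps):
        to_go = target_mm - position            # the "to go" distance (position error)
        vel_cmd = KP_POS * to_go                # outer position loop
        drive = KP_VEL * (vel_cmd - velocity)   # inner velocity loop (the tachogenerator's job)
        accel = MOTOR_GAIN * drive - FRICTION * velocity
        velocity += accel * DT
        position += velocity * DT
    return position

print(f"final position: {simulate_move(10.0):.3f} mm (commanded 10.000 mm)")
```

Note that, just as the column points out, the axis only moves because there is a position error: the velocity command is proportional to the "to go" distance, so it shrinks toward zero as the table arrives at the target.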
A Brief Disorder Description Age of onset: 2 to 16 years. Symptoms are almost identical to Duchenne yet less severe. Affects pelvis, upper arms and upper legs. Becker progresses more slowly than Duchenne and survival runs well into middle age. Age of onset: at birth. Generalised muscle weakness, with possible joint deformities from shortening of muscles. There are at least 30 different types - many types progress very slowly; some shorten life span. Age of onset: 2 to 6 years. General muscle weakness and wasting, affecting pelvis, upper arms and upper legs first. Duchenne progresses slowly, yet eventually involves all voluntary muscles. A wheelchair is required by about age 8 to 11 years and the condition is severe enough to shorten life expectancy. With high standards of medical care, survival is often into the 30s. Age of onset: 40 to 60 years. A group of conditions that cause weakness and wasting of muscles of the hands, forearms and lower legs. Progresses slowly and rarely leads to total incapacity. Age of onset: childhood to early teens. Weakness and wasting of shoulder, upper arm and shin muscles. Joint deformities are common. Disease progresses slowly, yet sudden death can result from cardiac complications. Age of onset: teens to early adulthood. There is also an infantile-onset form. Muscles of the face, shoulder blades and upper arms are among the most affected, but other muscles are usually involved as well. Progresses slowly with some periods of rapid deterioration; disease may span many decades. Most people with the disease have a normal life span. Age of onset: late childhood to middle age. Weakness and wasting, affecting muscles around the shoulders and hips first. There are more than 20 different subtypes - some progress to loss of walking ability within a few years and cause serious disability, while others progress very slowly over many years and cause minimal disability. Age of onset: 20 to 40 years. Muscle weakness is accompanied by myotonia (delayed relaxation of muscles after contraction) and by a variety of symptoms that affect other parts of the body including the heart, eyes and brain. Muscle weakness affects face, feet, hands and neck first. Progression is slow, sometimes spanning 50 to 60 years. More severe infantile and childhood forms also exist. Age of onset: 40 to 70 years. First affects muscles of eyelids and throat. While progression is slow, weakening of throat muscles in time causes inability to swallow, and mobility can be affected later on. Spinal Muscular Atrophies (SMA): TYPE 1 (INFANTILE PROGRESSIVE) Age of onset: birth to 6 months. Generalised muscle weakness, weak cry, trouble breathing, swallowing and sucking. Do not reach the developmental milestone of being able to sit up unassisted. Life span rarely exceeds age 2. TYPE 2 (INTERMEDIATE) Age of onset: 7 to 18 months. Weakness in arms, legs and lower torso, often with skeletal deformities. Children learn to sit unassisted but do not stand or walk independently. Although respiratory complications are a constant threat, children with type 2 SMA usually live to young adulthood and many live longer. TYPE 3 (JUVENILE) Age of onset: 1 to 15 years. Weakness in leg, hip, shoulder, arm and respiratory muscles. Children learn to stand and walk but some lose the ability to walk in adolescence, while others walk well into their adult years. Life span is unaffected. ADULT SPINAL MUSCULAR ATROPHY Age of onset: 18 to 50 years. Weakness in the tongue, hands or feet which slowly spreads to other parts of the body.
A relatively mild form of spinal muscular atrophy, it has little impact on life expectancy. Age of onset: childhood to 60 years. Inflammatory condition of the muscles that mostly affects the shoulders, upper arms, pelvis and thighs. Resulting disability is very variable. Other symptoms include muscle pain, rash, fever, malaise and weight loss. Usually responds to treatment with corticosteroid and other immune suppressing drugs. Age of onset: mostly 30 to 50. Symptoms are similar to dermatomyositis except usually there is no rash or muscle pain. Disease severity and progression are very variable. More women than men are affected. INCLUSION BODY MYOSITIS Age of onset: over 50. A slowly progressive condition causing weakness primarily in the quadriceps and forearm muscles. Difficulty with stairs, getting out of a chair and a poor grip are common problems. Swallowing muscles are sometimes affected. In general, patients do not die of the disease, but most meet with some degree of disability as the disease progresses. Diseases of Peripheral Nerve: Age of onset: teens to 20 years (occasionally in childhood or infancy). Damage to the peripheral nerves causes muscle weakness and wasting, and some loss of sensation, in the extremities of the body: the feet, the lower legs, the hands and the forearms. There are several different types of CMT and disease progression varies. Dejerine-Sottas Disease is a severe form. HEREDITARY NEUROPATHY WITH LIABILITY TO PRESSURE PALSIES (HNPP) Age of onset: 20 to 40. Recurrent episodes of numbness, tingling, and/or loss of muscle function (palsy). An episode can last from several minutes to several months, but recovery is usually complete. Repeated incidents, however, can cause permanent muscle weakness or loss of sensation. Age of onset: all ages. An autoimmune condition in which the nerves are attacked by the body's own immune system, causing paralysis, muscular weakness and tingling sensations. The disorder can be mild, moderate or severe, with life support needed in the worst cases. Most people spontaneously recover, though some will be left with permanent disabilities. CHRONIC INFLAMMATORY DEMYELINATING POLYNEUROPATHY (CIDP) Age of onset: any age but more common in the 5th and 6th decades. An autoimmune condition which causes slowly progressing weakness and loss of feeling in the legs and arms. Numbness and tingling usually start in the feet. Balance may also be affected. Severity varies widely among individuals. Some may have a bout of CIDP followed by spontaneous recovery, while others may have many bouts with partial recovery in between relapses. CIDP is treatable by suppressing the immune system, although some individuals are left with some residual numbness or weakness. Other conditions affecting the nervous system MOTOR NEURONE DISEASE - AMYOTROPHIC LATERAL SCLEROSIS (ALS) Age of onset: 35 to 65 years. Motor neurons (nerve cells that control muscle cells) are gradually lost. Wasting and weakness of all body muscles, with cramps and muscle twitches common. Progressive, ALS first affects legs, arms and/or throat muscles. Survival rarely exceeds five years after onset. Age of onset: 4 to 16 years. Inherited disease of the nervous system resulting in impairment of limb coordination, muscle weakness and loss of sensation. Severity and progression of disorder vary. Often associated with diabetes and heart disease. Diseases of the neuromuscular junction: Age of onset: 30 to 50 years.
An autoimmune condition where the junction between the nerve and muscle is damaged resulting in weakness of muscles of the eyes, face, neck, throat, limbs and/or trunk. Disease progression varies. Drug therapy and/or removal of thymus gland is often effective. There is a juvenile onset form of the condition. CONGENITAL MYASTHENIC SYNDROMES Age of onset: birth or early childhood. Genetic condition causing problems with the way messages are transmitted from the nerves to the muscles, causing weakness (myasthenia) and the muscles tire easily (fatigue). Muscle weakness varies depending on the type of genetic defect, so impact on mobility ranges from mild to severe. Age of onset: over 40 years. Weakness and fatigue of hip and thigh muscles is common. Lung tumour is often present. Progression varies with success of drug therapy and treatment of any malignancy. Metabolic Diseases of the Muscle: ACID MALTASE DEFICIENCY (Type II Glycogen storage disease, Pompe’s disease) Age of onset: Infancy to adulthood. For infants, disease is generalised and severe with heart, liver and tongue enlargement common. Adult form involves weakness of mid-body and respiratory muscles. Progression varies. Enzyme replacement therapy is available. Age of onset: Early childhood. Varied weakness of shoulder, hip, face and neck muscles. Often a secondary metabolic condition, progression varies and carnitine supplementation can be effective. CARNITINE PALMITYL TRANSFERASE DEFICIENCY Age of onset: young adulthood. Inability to sustain moderate prolonged exercise. Prolonged exercise and/or fasting can cause severe muscle damage with urine discoloration and kidney damage. DEBRANCHER ENZYME DEFICIENCY (Type III Glycogen storage disease, Cori’s disease) Age of onset: 1 year. General muscle weakness, poor muscle control and an enlarged liver with low blood sugar. Slow progression. Some patients do not experience muscular weakness until late teens or early adulthood. LACTATE DEHYDROGENASE DEFICIENCY Age of onset: childhood to adolescence. Intolerance of intense exercise with muscle damage and urine discoloration possible following strenuous physical activity. Severity of disorder varies and intense exercise should be avoided. Age of onset: birth to adulthood. Severe muscle weakness, flaccid neck muscles and inability to walk. Brain is often involved, with seizures, deafness, loss of balance and vision, and retardation common. Progression and severity vary. MYOADENYLATE DEAMINASE DEFICIENCY Age of onset: early adulthood to middle age. Muscle fatigue and weakness during and after exertion, with muscle soreness or cramping. Patients are often unable to attain previous performance levels yet condition is non-debilitating and non-progressive. PHOSPHORYLASE DEFICIENCY (Type V glycogen storage disease, McArdle disease) Age of onset: Adolescence. Low tolerance for exercise, with cramps often occurring after exercise. Intense exercise can cause muscle damage and possible damage to kidneys. Reducing strenuous exercise can lessen severity. PHOSPHOFRUCTOKINASE DEFICIENCY (Type VII glycogen storage disease) Age of onset: Childhood. Muscle fatigue which upon exercise can lead to severe cramps, nausea, vomiting, muscle damage and discoloration of urine. Disease varies widely in severity and progression. PHOSPHOGLYCERATE KINASE DEFICIENCY Age of onset: Childhood to adulthood. Muscular pain, cramps, muscle damage and urine discoloration possible following intense exercise of brief duration. Severity varies and intense exercise should be avoided. 
Less Common Myopathies: Age of onset: birth through to adulthood. Considered a type of congenital muscular dystrophy. In childhood the symptoms can be hypotonia (floppiness), muscle weakness, delayed motor milestones, talipes (clubfoot), torticollis (stiff neck) and contractures (tightness) in the ankles, hips, knees and elbows. The main symptoms in adults include tight tendons at the back of their ankles, as well as tightness of various other joints especially in the hands and mild muscle weakness. CENTRAL CORE DISEASE Age of onset: at birth or early infancy. Motor skill milestones are reached very slowly and hip displacement is not uncommon. Condition is disabling but not life-threatening. CONGENITAL FIBRE TYPE DISPROPORTION Age of onset: at birth or before age of 1. A rare type of myopathy characterized by hypotonia and mild to severe generalized muscle weakness. FIBRODYSPLASIA OSSIFICANS PROGRESSIVA (FOP) Age of onset: early childhood. An extremely rare disorder where a person's muscle and connective tissues, such as ligaments and tendons, are slowly replaced by bone. Starts in the person's shoulders and neck, progressing along their back, trunk, and limbs. HYPER/HYPO THYROID MYOPATHY - Age of onset: childhood to adulthood. Muscle disease caused by under or overproduction of thyroid hormones from the thyroid gland. Weakness in upper arm and upper leg muscles with some evidence of wasting. Severity depends on success in treating underlying thyroid condition. MINICORE (MULTICORE) MYOPATHY (multi-minicore disease) Age of onset: infancy or childhood. Rare myopathy with variable degrees of muscle weakness and wasting. There are different subtypes each with varying symptoms and severity. Most common type is characterized by spinal rigidity, early scoliosis and respiratory impairment. Other types may involve the muscle around the eyes, distal weakness and wasting or weakness around the hips. Weakness is static or only slowly progressive MYOTUBULAR (CENTRONUCLEAR) MYOPATHY Age of onset: birth to early adulthood. There are three different types. In the most severe form babies are born floppy with breathing difficulties and the bones of their head are malformed. The most common type only affects males. More mildly affected people are often able to walk well into adulthood but do find themselves in a wheelchair in later life. MYOTONIA CONGENITA - Age of onset: infancy to childhood. Muscle stiffness and difficulty in moving after periods of rest. With exercise, muscle strength and movement return to normal. Condition causes discomfort throughout life but is not life-threatening. NEMALINE MYOPATHY - Age of onset: at birth or early infancy. Hypotonia (poor muscle tone or floppiness) and weakness of arm, leg, trunk, face and throat muscles. In severe cases, children have marked respiratory weakness. Children rarely survive more than a few years, yet some live into teens. PARAMYOTONIA CONGENITA - Age of onset: adulthood. Poor or difficult relaxation of muscle which usually worsens after repeated use or exercise. Condition causes discomfort throughout life but is not life-threatening. PERIODIC PARALYSIS - HYPOKALEMIC - HYPERKALEMIC - Age of onset: infancy to 30 years. Severe generalised weakness of legs and other muscle groups with periods of paralysis affecting arms, legs and neck. Severity varies by age of onset and success of drug therapy. 
Maintaining good oral care is essential to overall health and well-being. Poor dental care can lead to a variety of issues, including cavities, gum disease, and even tooth loss. Fortunately, there are easy steps you can take to stay on top of your dental care and keep your teeth and gums healthy. Dr Tedros Adhanom Ghebreyesus, the Director-General of the World Health Organisation, mentioned in the foreword of the WHO Global Oral Health Status Report (2022) that oral diseases are prevalent noncommunicable diseases affecting around 3.5 billion people globally, and highlighted that 3 out of 4 people affected by oral diseases reside in middle-income countries. The report also indicates that the burden of oral diseases is progressively rising, especially in low- and middle-income nations. Globally, an estimated 2 billion people suffer from caries of permanent teeth and 514 million children suffer from caries of primary teeth. The steps outlined below are simple, straightforward ways to take better control of your oral health and establish healthy habits. Here are five easy steps to help you maintain good oral care:
- Brush twice a day: Brushing your teeth twice a day is the cornerstone of good dental hygiene. Make sure you brush for at least two minutes each time, using a fluoride toothpaste and a soft-bristled brush. Brush in circular motions and pay extra attention to the back teeth, which are often overlooked.
- Floss daily: Flossing is just as important as brushing, yet many people neglect this step. Flossing removes plaque and food particles that your toothbrush can't reach. Take a length of floss and gently slide it between your teeth, being careful not to snap it down onto your gums.
- Use mouthwash: Mouthwash can help freshen your breath and kill bacteria in your mouth. Look for a mouthwash that contains fluoride, which can help strengthen your teeth. Rinse your mouth with mouthwash for 30 seconds after brushing and flossing.
- Limit sugary foods and drinks: Sugary foods and drinks can be harmful to your teeth. They can cause cavities and erode your tooth enamel. Limit your intake of sugary foods and drinks, and try to stick to water and milk instead.
- Visit the dentist regularly: Even if you have excellent dental hygiene habits, it's important to visit the dentist regularly. Your dentist can catch issues early before they become bigger problems. Aim to visit the dentist twice a year for cleanings and check-ups.
By following these five easy steps, you can stay on top of your dental care and maintain good oral health. Remember, good dental hygiene habits can not only help prevent cavities and gum disease, but they can also improve your overall health and well-being.
SPAG - Spelling, Punctuation and Grammar
A useful glossary for parents and pupils of the grammar and punctuation terms that children will encounter in both KS1 and KS2. There is a pack for each year group that includes what has been taught in previous year groups. There is also an extra Year 6 SATs revision guide.
The Spelling Shed helps children to practise spelling via a simple game. The game gives four different degrees of support in the form of difficulty modes: Easy, Medium, Hard and Extreme. Higher levels allow a higher score to be achieved, but children can practise as much as they like on lower levels before trying to gain high scores. The scores achieved give a league position, and each class has its own league position within a school league and our world league. Spelling Shed also allows for homework and whole-class "Hive" games as a more interactive form of a spelling test. Your child's teacher will set lists of words that your child should practise and will monitor their progress.
How did the first person evolve? Mabel, age 7, Anglesea, Victoria. Hi Mabel, what a great question! We know humans haven’t always been around. After all, we wouldn’t have survived alongside meat-eating dinosaurs like Tyrannosaurus rex. How the first person came about – and who their ancestors were (their grandparents, great-grandparents and so on) – is one of the biggest questions archaeologists have. Even today, it puzzles us. When all living things were tiny When we think of how humans first came about, we have to first understand that almost every living thing evolved from something else through the process of evolution. For instance, the first known example of life on Earth dates back more than 3.5 billion years. This early life would have been in the form of tiny microbes (too small to see with just our eyes) that lived underwater in a very different world to today. At that time, the continents were still forming and there was no oxygen in the air. Since then, life on Earth has changed incredibly and taken many forms. In fact, for about a billion years during the middle part of Earth’s history (1.8 billion to 800 million years ago), life on Earth was nothing more than a large layer of slime. A long, long lineage All living humans today belong to a species called Homo sapiens. Homo sapiens are the only hominin alive today. Hominins first showed up millions of years ago, and changed in mostly small ways over a long time, through evolution. Because of this complicated family tree, in answering your question we need to think about what you mean by “person”. This may seem silly, because we know straight away when we pass someone on the street that they’re a person, rather than a dog or cat. However, the differences between you and your early ancestor Lucy (more about her below) who lived more than 100,000 generations ago, are much smaller than the differences between a person and a dog. This is why the answer is complicated. So I’m going to give you two answers and let you decide which you think is right. You and I are Homo sapiens The first answer is to assume the first “person” was the first member of our species, Homo sapiens. This person would have been just like you and me, but without an iPhone! The oldest skeleton discovered of our species Homo sapiens (so far) is from Morocco and is about 300,000 years old. This ancestor of ours would have lived at the same time as other members of the human family, including Neanderthals and Denisovans. Archaeologists have long argued about what makes us different to these other ancient types of humans. The answer probably lies in our brains. We think Homo sapiens are the only species that can do things like create art and language – although some recent discoveries suggest Neanderthals were artists too. It’s hard to know why Homo sapiens survived and the rest of our hominin family didn’t. But there’s a good chance the creativity that led to some wonderful early cave paintings found in France and Indonesia helped us to succeed over the last 100,000 years. Another way to answer your question is by assuming the first “person” was the first hominin to split off from the rest of our extended family, which includes chimpanzees and gorillas. This species would have looked different to you and me, but still would have walked upright and used tools made of stone. The best example of this is a famous fossil skeleton called Lucy. When Lucy was alive about 3.18 million years ago she was covered in hair. 
And she was probably about the same height as you are now, even though her bones tell us she was an adult when she died. Her skeleton was found in Africa, and while we have a lot of it compared to other ancient hominin skeletons, it’s not complete. This makes it hard to work out who the first “person” was. Most fossils from Lucy’s time are incomplete, and we only have a handful of bones to study from each extinct species. This is why every new discovery in archaeology is so exciting. Each new fossil gives us a new chance to put the puzzle of our family tree together.
Adding a north arrow improves many maps, especially large-scale maps that show a smaller area in great detail and maps that are not oriented with north at the top (figure 1), which is often done to save space on the printed page. A new ArcMap option in ArcGIS 10.1 lets you add a north arrow aligned to true north to your page layout. True north is the direction pointing toward the geographic north pole of the axis of the earth's rotation, the location at 90 degrees north where all the meridians converge. (Meridians are the north–south lines on the earth that extend between the north and south poles.)
In previous versions of ArcGIS, a north arrow inserted onto a page layout in ArcMap always aligned to grid north of the data frame. (Read the blog Adding a declination diagram in ArcMap.) The north arrow would always point to the top of the page unless you rotated the data frame. In ArcGIS 10.1, you can now choose to have the north arrow align to the direction of true north at the center of the data frame.
To illustrate this, figure 2 below shows the orientation of the north arrow when it is placed on maps using the North America Albers equal area projection with standard parallels at 20N and 60N and the central meridian at 96W (shown in orange in figure 2B). When the true north alignment option is used, the north arrow aligns with the meridian at the center of the data frame that points to true north. Note that if you move the north arrow to another location on the page after you insert it, its orientation will stay the same. If you pan the data in the data frame, the north arrow will update automatically.
Note that the maps in figure 2 are for illustration purposes only. Cartographers advise against using a north arrow on smaller-scale maps that use conic projections such as these because north will vary across the map, as you can plainly see by the orientation of the meridians.
To set the true north alignment option, follow these three easy steps:
Step one: Click Insert on the main menu, then click North Arrow.
Step two: Use the North Arrow Selector to choose the north arrow you want to insert, then click Properties.
Step three: On the North Arrow tab, set the Align To option to True North (figure 3) and click OK.
Note in figure 3 that the calculated angle automatically updates to indicate the angle of the true north arrow. The calculated angle is always a read-only property, calculated by the software. You can also set a calibration angle to define adjustments for either the data frame rotation or true north alignment options. For example, you can use this option along with alignment to true north if you are inserting a north arrow that aligns with magnetic north. Magnetic north is the direction of a compass needle when it is aligned with the earth's magnetic field. To set this value, you will have to know the correct declination (the angular difference between true and magnetic north) for the time and area being mapped.
Other guidelines for using the true north alignment option include the following:
- You can use Style Manager to edit the alignment setting of a north arrow symbol so that the symbol defaults to the rotation option you set.
- The calibrated angle can be user-edited with either rotation option. The calculated angle is determined by the software, so it is not something you can edit.
To learn more about north arrows in ArcMap, read these ArcGIS blog posts:
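As a final aside, the calculated angle the software reports is essentially the meridian convergence at the center of the data frame. Purely for illustration (this is stand-alone math, not part of the ArcGIS API, and the function name and example extent are invented here), the convergence on a normal Albers projection can be estimated from the cone constant:

```python
import math

def albers_true_north_angle(lon_deg, central_meridian_deg, phi1_deg, phi2_deg):
    """Approximate angle (degrees) between grid north (top of the page) and
    true north at longitude lon_deg on a normal Albers equal-area projection.

    Meridians radiate from the cone apex, so the meridian at lon_deg is
    rotated n * (lon_deg - central_meridian_deg) away from the vertical
    central meridian, where n is the cone constant. The arrow leans back
    toward the central meridian; sign conventions vary between packages.
    """
    phi1, phi2 = math.radians(phi1_deg), math.radians(phi2_deg)
    n = (math.sin(phi1) + math.sin(phi2)) / 2.0  # cone constant for Albers
    return n * (lon_deg - central_meridian_deg)

# Example: North America Albers with standard parallels 20N and 60N and
# central meridian 96W, data frame centered near 75W (a hypothetical extent).
print(round(albers_true_north_angle(-75.0, -96.0, 20.0, 60.0), 1))  # ~12.7
```

On the central meridian itself the angle is zero, which is one more reason a single north arrow is only strictly correct at one longitude on a conic-projection map.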
Finland Independence Day 2023: Finland celebrates Independence Day on December 6, and Finns worldwide are eager to begin this year's festivities. In contrast to the multitude of celebrations observed on the independence days of other nations, Finland is renowned for commemorating its liberation from Russian rule in 1917 with a banquet that is frequently hosted at the Presidential Palace. This gala is typically attended by more than 2,000 invited guests and is also broadcast live. Continue reading to learn more about Finnish Independence Day!
Finland Independence Day History
Finland's citizens endured a protracted and arduous process to attain independence, and thousands of lives were lost in numerous conflicts. Finns observe Independence Day by staging grand celebrations and paying homage to those who died, in remembrance of the many sacrifices made for the nation's independence.
From approximately the 12th century until 1809, Finland was a Swedish territory. Nevertheless, Finland was defenseless against the Russians, who viewed Sweden's weakness following the Napoleonic Wars as an ideal opportunity to seize control of the country. The struggle for independence against Russia commenced as soon as they assumed power in 1809. During the period of Russian rule in Finland, significant efforts were made to alter its cultural landscape. Referred to as the "Russification of Finland," these efforts encountered strong opposition from the Finnish people. They also prompted the Finns to recognize that their best remaining hope was a sovereign nation, and they promptly initiated efforts to achieve independence from Russia. The Fennoman movement, which advocated for the adoption of Finnish as the official language, commenced in 1856. The underlying principle of this movement was to safeguard the national identity of the Finns while elevating their concerns to the forefront.
As a consequence of its defeats in World War I and the Russian Revolution, the czarist empire had weakened by 1917. The Finns, perceiving this as a chance to assert their independence, drafted a Declaration of Independence on December 4, 1917. The Declaration was formally ratified by the Finnish parliament on December 6, 1917, thereby establishing that date thereafter as Finland's Independence Day.
FAQs for Finland Independence Day 2023
In Finnish, how does one express "Happy Independence Day"? In Finnish, "Hyvää itsenäisyyspäivää" translates to "Happy Independence Day."
Who ruled Finland before independence? From 1809 until 1917 Finland was a Grand Duchy under Russian rule; before that it had been part of Sweden for roughly six centuries.
In what manner does Finland observe Independence Day? The festivities commence with a flag-raising ceremony and conclude at the Presidential Palace with the Independence Day Gala, among other events.
How to Observe Finland's Independence Day
Compose a poem. Have a deep affection for Finland? Engage your imagination and compose a poem that rouses the patriotism of every fellow citizen. The objective is to compose a poem so meaningful that it endures for generations within your family.
Plan a carnival for Independence Day. This is not nearly as difficult as it sounds. A few patriotic volunteers, assistance from the appropriate authorities, a venue, and sponsorships are all that are required. Food service, a few activities, a fortune teller, a kissing booth, and more could be included in the carnival.
Engage in philanthropy by donating any funds raised to a charitable organization.
Plant a few trees for Finland. On Finland's Independence Day anyone can set off fireworks and hold parties; however, only a limited number of these activities contribute to the nation's long-term development. Activate your conscience by assembling a group of friends and planting trees in your community. Consider going green!
Five Interesting Facts Regarding Finland
The happiest country in the world: Finland has been recognized for four consecutive years as the happiest nation on earth in the annual World Happiness Report.
Coffee lovers: Individuals in Finland consume the most coffee in the world, at around 26 pounds per person annually.
Saunas are culturally significant: There are nearly 3 million saunas in the nation.
The land of a thousand lakes: There are roughly 187,888 lakes in Finland.
A chilly nation: The lowest temperature ever recorded in Finland was -51.5°C, measured in 1999.
Every year on November 25th, the world observes the International Day for the Elimination of Violence Against Women, a day dedicated to raising awareness about and combating one of the most pervasive and devastating human rights violations. This day marks the beginning of 16 Days of Activism Against Gender-Based Violence, concluding on December 10th, International Human Rights Day. Let's explore the significance of this day and the urgent need to eliminate violence against women. Understanding the Day: The International Day for the Elimination of Violence Against Women was designated by the United Nations in 1999 to draw attention to the widespread issue of violence against women, both physical and psychological. This violence knows no geographical, cultural, or socio-economic boundaries, affecting women of all ages and backgrounds. The Prevalence of Violence Against Women: Violence against women takes many forms, including domestic violence, sexual harassment, human trafficking, and femicide. It is estimated that one in three women worldwide has experienced physical or sexual violence in her lifetime. These statistics are not just numbers; they represent the suffering of countless women. Key Goals of the Day: 1. Raise Awareness: The day aims to raise awareness about violence against women, challenging the social norms and attitudes that perpetuate it. 2. Demand Accountability: It calls for accountability and justice for those who perpetrate violence against women, as well as for governments to enforce laws that protect women's rights. 3. Support Survivors: The day acknowledges the strength and resilience of survivors while highlighting the urgent need to support and empower them. 4. Advocate for Change: It encourages individuals, communities, and organizations to advocate for policy changes and initiatives aimed at preventing and ending violence against women. The Role of Men and Boys: Violence against women is not a women's issue alone; it's a societal issue that requires the involvement of men and boys. They play a crucial role in challenging harmful stereotypes, supporting gender equality, and being allies in the fight against violence. How Can You Contribute? 1. Raise Awareness: Share information about the International Day for the Elimination of Violence Against Women on social media, in your community, or at your workplace to educate others about the issue. 2. Support Local Organizations: Get involved with local organizations and initiatives that work to prevent and respond to violence against women. 3. Speak Up: If you witness or suspect someone is experiencing violence, speak up and offer support. Encourage them to seek help. 4. Advocate for Change: Advocate for laws and policies that protect women's rights and promote gender equality. 5. Promote Healthy Relationships: Promote healthy and respectful relationships within your community and educate others about consent and boundaries. 6. Attend Events: Attend events and seminars related to gender-based violence to deepen your understanding and connect with like-minded individuals. Violence against women is not inevitable; it's preventable. By standing up, speaking out, and advocating for change, we can work towards a world where all women and girls can live free from violence, discrimination, and fear. The International Day for the Elimination of Violence Against Women serves as a reminder of our collective responsibility to create a safer and more equitable world for all.
A or An Worksheets
Articles are special adjectives used in front of a noun to identify it, and they are used before nouns. The indefinite articles "a" and "an" come before a noun that is general or not known to the reader, while "the" is used before a noun to indicate that the noun is known to the reader.
The word "sound" is important when choosing between "a" and "an": use "a" if the next word begins with a consonant sound, and use "an" if it begins with a vowel sound (an hour, an owl). Note that some abbreviations and acronyms that are spelled with a consonant actually start with a vowel sound (e.g. RTA, NTU), and vice versa, so it is the sound rather than the first letter that decides.
With these printable worksheets, students practise using the articles a, an and the in sentences. Activities include writing "a" or "an" to modify each noun (cake, ostrich, airplane, cat), fill-in-the-gap sentences, and graded exercises such as "A, An, The Worksheets 1-2", "Articles Exercise 4-5", "Articles Worksheet 6-7", "A, An, The Exercise 8" (drag and drop), "A, An, Some Exercise 8", "A, An, The or Nothing 2-4" and determiners and quantifiers tests. A basic worksheet with primary ruled lines and pictures suits younger learners, while other sheets fit grade 2 parts-of-speech work or 3rd and 4th grade students (and advanced younger students). There is also a free beginner English grammar quiz and ESL worksheet so learners can test themselves, plus words-commonly-confused worksheets focusing on when to use "a" and when to use "an".
The clever printable and digital worksheet maker from quickworksheets (from just 3.33 p/m) is a smart cloud-based worksheet generator for making fun, effective lesson materials: make 25 types of printable worksheet or use the new interactive e-worksheet maker to make digital worksheets. Sign up today and try 3 for free.
Click on any of the News, Events, or Discoveries buttons above to see historical things that happened during Abraham Odom's life. These are only some of the major events that affected the life and times of Abraham, his family, and friends. For example, Abraham is 5 years old when, by the early 1700's, Virginia and Maryland have established a strong economic and social structure. The planters of the tidewater region, with abundant slave labor, have large houses, an aristocratic way of life, and a desire to follow the art and culture of Europe. Less wealthy German and Scots-Irish immigrants settle inland, populating the Shenandoah Valley of Virginia as well as the Appalachian Mountains. Those on the frontier build small cabins and cultivate corn and wheat.
- Great Britain adopts the Gregorian calendar on 9/14/1752.
- Laws in GA prohibiting the importation of slaves are rescinded. Georgia planters were hiring SC slaves for life and even openly purchasing slaves at the dock in Savannah.
- First town, Bath, is established in North Carolina by the arrival of the French Huguenots.
- The Tuscarora War begins between the local Indians and colonists. After two years of fighting, the Tuscarora Indians move west.
- Blackbeard, the pirate, is killed off the North Carolina coast.
- By 1719, North Carolina and South Carolina have separated into two colonies.
- The first library is established in Charles Town, SC, by Thomas Bray.
- Hurricane strikes Charleston.
- The province is divided into 12 parishes as the Church of England becomes the state church.
- The Yemassee Indian Wars begin and continue through 1717. After killing every trader they could find, the Creek Indians launched a broad attack across the Savannah River at settlers on South Carolina's frontier.
- By 1719, the South Carolina region is separated from North Carolina and becomes a royal colony. Records were kept in Charleston.
- "Stono's Rebellion" - insurrection of slaves on Stono River plantations.
- Joseph Salvador purchases land near Fort Ninety Six for Jewish settlement.
- The population of American colonists reaches 475,000. Boston (pop. 12,000) is the largest city, followed by Philadelphia (pop. 10,000) and New York (pop. 7,000).
- Map of US Colonies
- James Oglethorpe establishes the Georgia Colony in the new world. The new settlers form friendships with the Creek Indian Nation towns in this area. Georgia is the thirteenth English colony to be settled.
- The New York Bar Association is founded in New York City.
- Charleston, SC, has become the most affluent and largest city in the South. It is the leading port and trading center for the southern colonies. The population in the Carolinas has exceeded 100,000 with many French Protestant Huguenots. The wealthy plantation owners bring private tutors from Ireland and Scotland. Public education does not exist.
- The Cherokee War (1760-61) ends in a treaty that opens the Up Country for settlement. The Bounty Act of 1761 offers public land tax free for ten years, and settlers from other colonies begin pouring into the Carolina "Up Country".
- Tsar Peter the Great begins traveling Europe
- England's Act of Settlement created; War of Spanish Succession begins
- Scotland and England unite to form "Great Britain"
- War of Spanish Succession ends
- System of forced labor to build roads in France is devised by Jean Orry
- King George's War against North America and Caribbean begins
- King George's War against North America and Caribbean ends
- Seven Years' War begins
- Jesuits are forced out of France
Meet the new largest known prime number. It starts with a 4, continues on for 23 million digits, then ends with a 1. As is true with all prime numbers, it can only be evenly divided by one and itself.
Prime numbers are essential to modern life, used in everything from securely encrypting banking information to the random number generators used by visual effects specialists for the latest movies. And while finding larger prime numbers doesn't necessarily mean stronger encryption (that's a common misconception), human curiosity drives the continual quest to find ever-larger primes. "Each new prime is an extension of the bounds of human mathematical knowledge," Hartree Centre researcher Iain Bethune, who is part of the prime number hunting project PrimeGrid, which was not involved in the new find, writes in an email to Smithsonian.com.
The newest prime number is generated by multiplying two by itself 77,232,917 times, then subtracting one. In mathematical terms that is: 2^77,232,917 - 1. This format of calculation means the new prime is considered a Mersenne prime. Named after the French theologian and mathematician Marin Mersenne, these types of primes are always calculated as a power of two minus one. This pattern creates a countable (although still enormous) list of candidate Mersenne prime numbers.
The number—which can be written in shorthand as M77232917—is nearly one million digits longer than the last confirmed prime discovered in 2016. While it's the fiftieth Mersenne prime discovered, not all candidates between the last two primes have yet been checked, so another could be lurking between them. But that would be surprising, says Chris Caldwell, a mathematician who tracks the discovery of large prime numbers. According to Caldwell, the gap between Mersenne primes is usually much larger.
When M77232917 is written out as all 23,249,425 digits, the number contains every digit from zero through nine roughly 2.3 million times each. And like all prime numbers, it appears to be random, although some researchers suggest that faint patterns shape the distribution of prime numbers. These faint patterns are enough to help narrow the search for new prime numbers, helping researchers predict how many primes will exist within a range of numbers, explains Robert Lemke Oliver, a mathematician at Tufts University. "It happens that among numbers with 1000 digits, about one in every 2500 will be prime," he writes in an email to Smithsonian.com.
Discovering the new prime was a group effort. A computer owned by Jonathan Pace, an electrical engineer living in Tennessee, identified the number using specialized Great Internet Mersenne Prime Search (GIMPS) software. Developed by George Woltman, the software tests candidate numbers as part of a search coordinated by the PrimeNet system software, which was written by Scott Kurowski and is maintained by Aaron Blosser. After its discovery, M77232917 was verified as a prime number by Blosser and three other people—David Stanfill, Andreas Höglund, and Ernst Mayer—each using different software and computer setups. "What's special about this prime isn't that it's prime, it's that we actually know it's prime," writes Lemke Oliver.
Determining if a number is a prime is conceptually simple. All you need to do is divide it by all primes smaller than itself. If no other primes can divide it evenly, it must be a new prime number.
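The brute-force idea described above is easy to sketch. The only refinement worth adding is that you need only try divisors up to the square root of the number; the snippet below is a plain illustration of the concept, not the software used by GIMPS:

```python
def is_prime_trial_division(n: int) -> bool:
    """Brute-force primality test: try every candidate divisor up to sqrt(n).
    Any factor larger than sqrt(n) would pair with one smaller than sqrt(n),
    so checking beyond the square root is unnecessary."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

print([p for p in range(2, 40) if is_prime_trial_division(p)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
```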
In practice, however, this brute-force approach is time-consuming for extremely large numbers, even with modern computers capable of exquisitely quick calculations. Instead, algorithms take advantage of a number theory trick called the Lucas-Lehmer test, which only works for Mersenne numbers, to speed up the process. Even so, it is still computationally exhausting to test prime number candidates. Pace's computer took six days of dedicated time to discover M77232917; the verifications took an additional 291 computing hours. The discovery is a first for Pace, who has been running software to hunt for big prime numbers for the past 14 years.
Finding new prime numbers is a hot topic. GIMPS offers research awards for the discovery of new Mersenne prime numbers (Pace won $3,000 for his recent discovery), while the Electronic Frontier Foundation has a series of open challenges for the first to discover primes of ever-increasing magnitudes. GIMPS estimates it will take 15 years of calculations to reach the next milestone, finding a prime number that is at least 100 million digits long.
The motivation of the prize, set up in the 1990s, is quaint in a modern context, says Seth Schoen of the Electronic Frontier Foundation. "The awards are meant to show how the Internet is useful—to let people who may never have met work together on a large scale to accomplish things," he writes in an email. And that collaboration is key for finding these big primes. "A single person with a shovel might find a large gem, but it is very unlikely," writes Caldwell. "But if you can organize 100,000 people with shovels, coordinate where and how they dig, the chance of the group finding a gem is far far higher." Software like PrimeNet hands out the shovels and coordinates digging sites, while GIMPS does the digging.
Welcome to the list of primes, M77232917, and enjoy your time as the largest prime number while you can. Just like death and taxes, one thing is certain: one day, a new largest prime number will be discovered.
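For the curious, the Lucas-Lehmer test mentioned above is short enough to write down. This is a minimal, unoptimized sketch for small exponents; real searches such as GIMPS use FFT-based multiplication to handle exponents in the tens of millions:

```python
def lucas_lehmer(p: int) -> bool:
    """Lucas-Lehmer test: for an odd prime exponent p, M = 2**p - 1 is prime
    exactly when s_(p-2) == 0, where s_0 = 4 and s_(k+1) = s_k**2 - 2 (mod M)."""
    if p == 2:
        return True           # M2 = 3 is prime; the iteration below needs p > 2
    m = (1 << p) - 1          # the Mersenne number 2^p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Exponents must themselves be prime; small cases reproduce the familiar
# Mersenne primes 3, 7, 31, 127 and 8191, while p = 11 is correctly rejected
# because 2^11 - 1 = 2047 = 23 * 89.
print([p for p in (2, 3, 5, 7, 11, 13) if lucas_lehmer(p)])  # [2, 3, 5, 7, 13]
```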
Elizabeth wants to challenge you to a “Toothpicks and Tiles” game. Do you remember it from Lesson 1.1.2? Using exactly six tiles, solve her challenges below. Justify your answers with pictures and labels. Find a pattern where the number of toothpicks is more than double the number of tiles. Find a pattern where the number of toothpicks is more than the number of tiles. Click and drag the tiles in the eTool below to form a design with the characteristics above. Remember that the toothpicks refer to the side lengths around the arrangements. (CPM)
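If you would rather check a design by counting than by drawing, the toothpick total is simply the perimeter of the arrangement: every tile contributes four sides, and each edge shared by two tiles removes two toothpicks. The sketch below is just one way to do that bookkeeping; the grid-coordinate scheme is our own convention, not part of the CPM eTool:

```python
def toothpicks(tiles):
    """Perimeter (toothpick count) of an arrangement of unit square tiles.
    `tiles` is a set of (x, y) grid cells; each edge shared by two tiles
    removes two toothpicks from the 4-per-tile total."""
    shared = sum((x + 1, y) in tiles for (x, y) in tiles)   # right-hand neighbours
    shared += sum((x, y + 1) in tiles for (x, y) in tiles)  # upper neighbours
    return 4 * len(tiles) - 2 * shared

# Six tiles in a 2x3 rectangle: 4*6 - 2*7 = 10 toothpicks (more than 6 tiles).
print(toothpicks({(x, y) for x in range(3) for y in range(2)}))  # 10
# Six tiles in a straight 1x6 row: 4*6 - 2*5 = 14 toothpicks (more than double 6).
print(toothpicks({(x, 0) for x in range(6)}))                    # 14
```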
Preventive dental care is important throughout your life, no matter your age. By practicing good oral hygiene at home and scheduling regular checkups with your dentist, you can help keep your smile bright and healthy for many years to come. Here are a few simple ways that you can prevent the build-up of plaque and cavities:
Importance of Dental Care and Prevention of Your Oral Health
- Brush your teeth at least twice a day with a soft-bristled toothbrush. Use fluoride toothpaste to remove food particles and plaque from the tooth surfaces. Also be sure to brush the top surface of your tongue; this will remove any extra plaque-causing food particles, and help keep your breath fresh!
- Clean between your teeth by flossing at least once a day. You can also use a mouthwash to help kill bacteria and freshen your breath. Decay-causing bacteria can linger between teeth where toothbrush bristles can’t reach. Floss and mouthwash will help remove plaque and food particles from between the teeth and under the gum line.
The Right Way of Brushing Your Teeth
Brushing: Step 1 - Place your toothbrush at a 45-degree angle to your gum.
Brushing: Step 2 - Brush gently in a circular motion.
Brushing: Step 3 - Brush the outer, inner, and chewing surfaces of each tooth.
Brushing: Step 4 - Use the tip of your brush for the inner surface of your front teeth.
Flossing: Step 1 - Wind about 18 inches of floss around your fingers. Most of it should be wrapped around one finger, and the other finger takes it up as the floss is used.
Flossing: Step 2 - Use your thumbs and forefingers to guide about one inch of floss between your teeth.
Flossing: Step 3 - Holding the floss tightly, gently saw it between your teeth. Then curve the floss into a C-shape against one tooth and gently slide it beneath your gums.
Flossing: Step 4 - Slide the floss up and down, repeating for each tooth.
- Eat a balanced diet, and try to avoid extra-sugary treats. Nutritious foods such as raw vegetables, plain yogurt, cheese, or fruit can help keep your smile healthy.
- Remember to schedule regular checkups with your dentist every six months for a professional teeth cleaning.
- Ask your dentist about dental sealants, protective plastic coatings that can be applied to the chewing surfaces of the back teeth where decay often starts.
- If you play sports, be sure to ask your dentist about special mouthguards designed to protect your smile.
If it’s been six months since your last dental checkup, then it’s time to contact our practice and schedule your next appointment!
Fuel cells generate electricity using an electrochemical reaction, not combustion, so there are no polluting emissions, only water and heat as by-products. Many fuel cells are fueled with hydrogen, which can be derived from a wide range of traditional and renewable sources, including biogas. Many facilities, such as wastewater treatment plants (WWTP), landfills, food/beverage processing facilities, wineries, breweries, dairies, large industrial factory farms and confined animal feeding operations (CAFOs), generate tons of organic waste as a byproduct of daily operations, be it sewage, effluent, food or animal waste, all of which can be expensive to remove and burdensome to store. These sites often use an anaerobic digester to convert the organic waste into methane-rich anaerobic digester gas (ADG), and then burn the ADG in a combustion-based generator or flare it into the atmosphere to dispose of it. Although ADG is considered carbon-neutral since it is derived from an organic (non-fossil) source, flaring or burning leads to releases of direct and indirect GHGs and other air pollutants. Since the methane in ADG can be reformed into hydrogen, the fuel of choice for fuel cells, a cleaner, more efficient option is to use the gas in a fuel cell to generate electricity and heat for the plant, following a gas cleanup step.
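To get a feel for the scale involved, a rough back-of-envelope estimate of the electricity a fuel cell could produce from digester gas follows from the gas volume, its methane content and the cell's electrical efficiency. The figures below are generic ballpark assumptions for illustration only, not data for any particular facility or product:

```python
def daily_fuel_cell_output_kwh(biogas_m3_per_day,
                               methane_fraction=0.60,
                               electrical_efficiency=0.45):
    """Rough estimate of daily electricity from anaerobic digester gas.

    Assumed ballpark values (not measured data):
      - methane lower heating value of about 9.94 kWh per normal cubic metre
      - biogas that is roughly 60% methane by volume
      - fuel cell electrical efficiency around 45% (recoverable heat is extra)
    """
    METHANE_LHV_KWH_PER_M3 = 9.94
    methane_m3 = biogas_m3_per_day * methane_fraction
    return methane_m3 * METHANE_LHV_KWH_PER_M3 * electrical_efficiency

# e.g. a hypothetical digester producing 5,000 m3 of biogas per day:
print(f"{daily_fuel_cell_output_kwh(5000):,.0f} kWh/day")  # ~13,400 kWh/day
```

Because the fuel cell's waste heat can also be captured for digester heating, the overall energy recovered on site is higher than the electrical figure alone.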