Are there certain things that your child can do that you wish your child would do more often? Do you find that sometimes your child displays a certain behavior for a short period of time but then stops performing that specific behavior? If so, you can learn to use positive reinforcement to increase the likelihood that certain behaviors will continue or increase in the future.

You may hear some people say that using positive reinforcement can be detrimental because it can cause children to depend on it rather than developing internal motivation. However, whether positive reinforcement builds or undermines internal motivation depends entirely on how you implement it.

There are several different ways to use positive reinforcement. At the basic level, you can consider the use of natural reinforcement, social reinforcement, token reinforcement, and tangible reinforcement. All of these entail delivering a specific consequence following a desired behavior that increases the likelihood that the behavior will continue or increase in the future.

Natural reinforcement occurs when a natural consequence for a certain behavior increases the likelihood that the behavior will occur again in the future. Examples of natural reinforcement may be: when the child points to a ball, the child gets the ball; when a child names the animal on a puzzle piece, the child gets to put the puzzle piece in; when the child says hello to a peer, the peer smiles and says hello. These would all be considered examples of natural reinforcement as long as these consequences resulted in an increase in the desired behavior.

Social reinforcement entails consequences for behavior such as smiles, high fives, tickles, and praise. It is important to note that many social reinforcers are quite natural: when someone does something that you like, you naturally smile and make a positive comment.

Token reinforcement can include the child receiving points, tickets, stickers, stars, or pennies that can later be exchanged for a tangible item or a preferred activity. Tangible reinforcement includes giving the child a desired food, drink, toy, or activity after the child engages in the desired behavior.

To increase internal motivation, these four types of reinforcement should be used on a continuum, with natural and social reinforcement used before token or tangible reinforcement. Token and tangible reinforcement should be reserved for children who do not respond to natural and social reinforcement. When you do need to resort to token and tangible reinforcement, try to switch to natural and social reinforcement as quickly as possible. This is because we do not want to encourage escape-motivated behavior; that is, we don't want to encourage children to do something just to get something else or to get away from the task or interaction itself. Of course, if we must use token or tangible reinforcement because it is the only way to get the child to respond, we must start there temporarily.

For further assistance with using positive reinforcement, parents and teachers can consult with a board certified behavior analyst (BCBA). BCBAs can provide hands-on training and support to help parents and teachers learn how to use positive reinforcement effectively, as well as a variety of other effective teaching strategies.
By John Ferguson, South Plainfield Environmental Commission Storm Water Committee

Why is Ground Water Important?
Ground water is the primary drinking water source for half of the state's population. Most of this water is obtained from individual domestic wells or public water supplies that tap into aquifers. New Jersey agriculture also depends on a steady supply of clean ground water for irrigation.

What is Ground Water?
Where does the water that rains on your home go? After it leaves your lawn, street, or sidewalk, where is it headed? If it soaks into the ground, it becomes ground water. A sizable amount of rainwater runoff seeps into the ground to become ground water. Ground water moves into water-filled layers of porous geologic formations called aquifers. Depending on your location, aquifers containing ground water can range from a few feet below the surface to several hundred feet underground. Contrary to popular belief, aquifers are not flowing underground streams or lakes. Ground water moves at an irregular pace, seeping through more porous soils, from shallow to deeper areas, and from places where it enters the Earth's surface to where it is discharged or withdrawn. More than 100 aquifers are scattered throughout New Jersey, covering 7,500 square miles. Aquifers are divided into five rankings based on the number of gallons they can provide to wells, from a high of more than 500 gpm (gallons per minute) down to a low of only 25 gpm. These wells are used for water supply, irrigation, or industrial use.

Ground Water Complications
Humans have an impact on ground water in several ways. One way people influence ground water is by changing where stormwater flows. By changing the contour of the land and adding impervious surfaces such as roads, parking lots, and rooftops, people change how and where water goes. When it rains, the stormwater in a developed area is less able to soak into the ground because the land is now covered with roads, rooftops, and parking lots. Another way people affect ground water is by adding potential pollution sources. How the land above ground water is used by people, whether for farms, houses, or shopping centers, has a direct impact on ground water quality. As rain washes over a parking lot, it might pick up road salt and motor oil and carry those pollutants along. These pollutants can move through the soil or storm drains to enter either a stream or the local ground water. South Plainfield's EPA Superfund Site is an example of pollutants getting into our ground water and the local tributaries; cleanup at this site is ongoing. On a farm or suburban lawn, snowmelt can soak fertilizers and pesticides into the ground, making them another source of pollutants.

Controlling Stormwater Flow
Managing stormwater to reduce the impact of development on local watersheds and aquifers relies on minimizing the disruption of the natural flow. By designing with nature, the impact of urbanization can be greatly reduced. This can be accomplished by following these principles:
• minimizing impervious surfaces;
• maximizing natural areas of dense vegetation;
• using structural stormwater controls such as stormwater management basins;
• practicing pollution prevention by avoiding contact between stormwater and pollutants.
The International Date Line (IDL) is an imaginary line of longitude on the Earth's surface located at about 180 degrees east (or west) of the Greenwich Meridian. The date line is shown as an uneven black vertical line in the Time Zone Map above and marks the divide where the date changes by one day.

In 1519, Ferdinand Magellan set out westward from Spain, sailing for East Asia with 241 men in five ships. Three years later, the remnants of his crew (18 men in one ship) successfully completed the first circumnavigation of the globe. Although a careful log had been kept, the crew found that their calendar was one day short of the correct date. This was the first human experience with time change on a global scale, and the realization of what had happened eventually led to the establishment of the International Date Line.

One advantage of establishing the Greenwich meridian as the prime meridian is that its opposite arc lies in the Pacific Ocean. The 180th meridian, transiting the sparsely populated mid-Pacific, was chosen as the meridian at which new days begin and old days exit from the surface of Earth. The International Date Line deviates from the 180th meridian in the Bering Sea to include all of the Aleutian Islands of Alaska within the same day, and again in the South Pacific to keep islands of the same group (Fiji, Tonga) within the same day. The extensive eastern displacement of the date line in the central Pacific is due to the widely scattered locations of the many islands of the country of Kiribati.

The International Date Line is in the middle of the time zone defined by the 180° meridian. Consequently, there is no time (i.e., hourly) change when crossing the International Date Line—only the calendar changes, not the clock. When you cross the International Date Line going from west to east, it becomes one day earlier (e.g., from January 2 to January 1); when you move across the line from east to west, it becomes one day later (e.g., from January 1 to January 2). The time difference between either side of the International Date Line is not always exactly 24 hours because of local time zone variations.

If you travel around the world, changing standard time by one hour each time you enter a new time zone, then a complete circuit would mean that you adjusted your clock or watch time by 24 hours. This would lead to a difference of one day between the date on your clock and the real calendar date. To avoid this, the calendar date is adjusted at the International Date Line, which runs down the middle of the Pacific Ocean. If you cross the date line moving east, you subtract a day, whereas if you are moving west you add a day.
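To see the date change in code, here is a minimal Python sketch (assuming Python 3.9+ with the standard-library zoneinfo module and time zone data available). The choice of the Pacific/Apia and Pacific/Pago_Pago zones is only illustrative: they are geographically close but sit on opposite sides of the line.

```python
# Minimal sketch (Python 3.9+, standard-library zoneinfo): one UTC instant
# shown in two zones on opposite sides of the date line. "Pacific/Apia" and
# "Pacific/Pago_Pago" are illustrative choices of zones west and east of it.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

instant = datetime(2024, 1, 1, 10, 30, tzinfo=timezone.utc)  # one moment in UTC

west_of_line = instant.astimezone(ZoneInfo("Pacific/Apia"))       # UTC+13
east_of_line = instant.astimezone(ZoneInfo("Pacific/Pago_Pago"))  # UTC-11

print("West of the line:", west_of_line.isoformat())  # 2024-01-01T23:30:00+13:00
print("East of the line:", east_of_line.isoformat())  # 2023-12-31T23:30:00-11:00
print("Calendar dates differ by:", west_of_line.date() - east_of_line.date())
```

The wall-clock hour is nearly the same on both sides, but the calendar dates are a day apart, which is exactly the behavior described above.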
Astronauts aboard the International Space Station (ISS) are constantly performing scientific experiments, whether they be analyses of how well spiders build webs in microgravity or tests of the effectiveness of fungi as protection against space radiation. Now, astronauts will be keeping an eye on tardigrades and squid to see how they adapt to the rigors of space.

Smithsonian magazine reported on the experiments, which NASA has just sent to the ISS aboard a SpaceX Falcon 9 rocket. The Dragon capsule carrying the experiments to the ISS is on a resupply mission and has 4,300 pounds of materials aboard, including fresh supplies for the ISS' crew and a roll-out solar array.

NASA says it's sending the tardigrades (or "water bears") to the ISS to study how the stresses of living in space affect them. Tardigrades, which are near-microscopic animals that live in water, are able to tolerate extreme environments. This characteristic, the space agency says, makes them "a model organism for studying biological survival under extreme conditions" in space. Scientists have also sequenced the genome of the tardigrade species Hypsibius exemplaris, which will allow for study of genetic changes.

As for the squid, NASA's sending them to study how their relationship to the bacteria Vibrio fischeri changes in microgravity. The bobtail squid NASA is sending have a symbiotic relationship with V. fischeri similar to the one humans have with their gut microbes. The space agency hopes that studying the changes in the relationship between the squid and their symbiotic bacteria will help us better understand how our own relationship to critical microbes may change in space.

Along with the tardigrades and squid, NASA is sending up multiple other gadgets and experiments. The space agency is also sending up a unique, portable ultrasound machine, for example, as well as cotton plants so it can study how their especially resilient root systems do in space. There will also be "tissue chips" for studying how microcrystals form in space. It seems kidney stones can be a problem for astronauts, and this experiment hopes to illuminate why.

Incidentally, we are now rethinking how much we want to take a ride on SpaceX's Starship.
By: Mike Maharrey A lot of people believe that the federal government is a friend to liberty because it protected African-Americans against discrimination in the 1960s. But research shows that this was the exception to the rule, at best. Centralized authority, including the federal government, has historically brutalized minority populations. When it comes to civil rights, the conventional narrative goes like this: African Americans were enslaved and then suffered extreme discrimination until the federal government stepped in during the 1960s and passed the Civil Rights Acts to protect them. This narrative paints centralized-government as the hero. And while the Civil Rights Acts did extend protections to black people and hastened the end of the Jim Crow era, this kind of federal action to protect minorities is actually an anomaly. More often than not, the U.S. federal government has enacted and enforced policies that have facilitated discrimination and worse. To begin with, National power was the tool of slavers. From the moment the Constitution was ratified forward, southern slavers relied on federal power and centralized authority to maintain the legal framework for slavery. It also depended on federal power to enforce the fugitive slave clause, even as free northern states appealed to their state sovereignty to protect their black citizens. Up until the end of the War Between the States, federal power was vigorously applied for the benefit of slavers to preserve their institution and protect their ‘property.’ And after the war, federal power continued to benefit those who would discriminate on the basis of race – particularly when it came to land ownership. A recent article in The Atlantic chronicles how federal agriculture policy enabled rich white landowners and big corporate agribusiness interests to steal land owned by black people. As The Atlantic put it, “A war waged by deed of title has dispossessed 98 percent of black agricultural landowners in America.” Black landowners have lost 12 million acres of farmland over the last century. You might think this happened back in the dark depths of our past, but it didn’t. The losses mostly occurred in living memory – primarily from the 1950s onward. According to the former president of the Emergency Land Fund, black farmers lost in the neighborhood of 6 million acres of land from 1950 to 1969. According to The Atlantic, much of this land theft was accomplished under the authority of law. “The land was wrested first from Native Americans, by force. It was then cleared, watered, and made productive for intensive agriculture by the labor of enslaved Africans, who after Emancipation would come to own a portion of it. Later, through a variety of means—sometimes legal, often coercive, in many cases legal and coercive, occasionally violent—farmland owned by black people came into the hands of white people. It was aggregated into larger holdings, then aggregated again, eventually attracting the interest of Wall Street.” The Atlantic called this “a silent and devastating catastrophe, one created and maintained by federal policy.” [Emphasis added] Federal agriculture policy was the key to this transfer of land ownership. During the New Deal era, the federal government took an increasing amount of control over American agriculture. By the 1950s, the feds regulated virtually every aspect of America’s farm economy and had almost complete control over farm credit. 
Wealthy white landowners and big agriculture conglomerates took advantage of this system and slowly divested thousands of African American farmers of their land. It started with President Franklin D. Roosevelt’s “life raft for agriculture” – the Farm Security Administration. “Although the FSA ostensibly existed to help the country’s small farmers, as happened with much of the rest of the New Deal, white administrators often ignored or targeted poor black people—denying them loans and giving sharecropping work to white people.” In 1945, the Farmers Home Administration, (FmHA) replaced the FSA. According to The Atlantic, “The FmHA quickly transformed the FSA’s programs for small farmers, establishing the sinews of the loan-and-subsidy structure that undergirds American agriculture today.” There is no denying that African Americans suffered from racism. But you shouldn’t lose sight of the fact that government actions gave racists their power, from Jim Crow laws at the state and local level to federal farm policy that enabled white people to dispossess black Americans of their land. In 1961, President John F. Kennedy’s administration created another federal program known as the Agricultural Stabilization and Conservation Service. (ASCS.) This agency worked alongside FmHA to provide loans to farmers. The Atlantic described how these federal programs came to dominate agriculture in the U.S. with disastrous results for black farmers. As The Atlantic points out, the members of committees doling out money and credit established by these federal programs were elected locally, during a time when black people were prohibited from voting. “Through these programs, and through massive crop and surplus purchasing, the USDA became the safety net, price-setter, chief investor, and sole regulator for most of the farm economy in places like the [Mississippi] Delta. The department could offer better loan terms to risky farmers than banks and other lenders, and mostly outcompeted private credit. In his book Dispossession, Daniel calls the setup ‘agrigovernment.’ Land-grant universities pumped out both farm operators and the USDA agents who connected those operators to federal money. Large plantations ballooned into even larger industrial crop factories as small farms collapsed. The mega-farms held sway over agricultural policy, resulting in more money, at better interest rates, for the plantations themselves. At every level of agrigovernment, the leaders were white.” [Emphasis added] White government officials and bureaucrats were in a perfect positions of power to help their white buddies add to their landholdings. According to The Atlantic, USDA audits and investigations revealed that “illegal pressures levied through its loan programs created massive transfers of wealth from black to white farmers.” Investigations by the United States Commission on Civil Rights reportedly uncovered “blatant and dramatic racial differences in the level of federal investment in farmers.” The FmHA provided much larger loans for small and medium-size white-owned farms, relative to net worth than it did for similarly sized black-owned farms. The report said the FmHA policies “served to accelerate the displacement and impoverishment of the Negro farmer.” The Atlantic provides anecdotal evidence revealing how unscrupulous people used these federal programs for their own benefit. 
“In the 1950s and ’60s, Norman Weathersby, a Holmes County Chevrolet dealer who enjoyed a local monopoly on trucks and heavy farm equipment, required black farmers to put up land as collateral for loans on equipment. A close friend of his, William Strider, was the local FmHA agent. Black farmers in the area claimed that the two ran a racket: Strider would slow-walk them on FmHA loans, which meant they would then default on Weathersby’s loans and lose their land to him. Strider and Weathersby were reportedly free to run this racket because black farmers were shut out by local banks. “Analyzing the history of federal programs, the Emergency Land Fund emphasizes a key distinction. While most of the black land loss appears on its face to have been through legal mechanisms—“the tax sale; the partition sale; and the foreclosure”—it mainly stemmed from illegal pressures, including discrimination in federal and state programs, swindles by lawyers and speculators, unlawful denials of private loans, and even outright acts of violence or intimidation. Discriminatory loan servicing and loan denial by white-controlled FmHA and ASCS committees forced black farmers into foreclosure, after which their property could be purchased by wealthy landowners, almost all of whom were white. Discrimination by private lenders had the same result. Many black farmers who escaped foreclosure were defrauded by white tax assessors who set assessments too high, leading to unaffordable tax obligations. The inevitable result: tax sales, where, again, the land was purchased by wealthy white people.” Of course, racism was also rampant in the private lending sector, particularly in the deep South. Still, there were always bankers who were either free from the scourge of racism or cared more about the color of money than the color of skin. But as The Atlantic alludes to, federal loan programs dominated the market and squeezed out many private lenders. In a system free from government monopolization, there almost certainly would have been more available credit with better terms available for black farmers. The bottom line is that the existence of federal government programs, coupled with racism, allowed black people’s land to be stolen from them. Racism alone couldn’t have accomplished this without government power to make it actionable. The widespread notion that centralized national power is good for minorities is a myth. Centralized power has never been friendly toward minorities. From the Jews in Germany, to the Ukrainians in the U.S.S.R, to the Armenians in the Ottoman Empire, to Africans in the United States, centralized governments have historically oppressed minorities and sometimes worked to exterminate them. The Civil Rights Act notwithstanding, history shows that the U.S. federal government has by-and-large followed the historical pattern by facilitating, both directly and indirectly, slavery and discrimination.
The world consists of different countries, and each country is run by its own government. Types of government around the world can therefore be categorized on the basis of some major factors that shape politics everywhere. A national government can be defined as the system that governs a community or a state, and these systems can be grouped into different types. The type of government determines what kinds of human activity it concerns itself with and how it carries out the necessary work; the central function of any government is to organize a particular state, and every country in the world is directed by some such governance system. Knowing these types of national government is useful for political research, which in turn matters for worldwide political relations. Governments of these kinds are widespread in most Western countries, and some Eastern nations adopted them after being colonized by Western powers. In some types of national government, a large percentage of the population is permitted to vote, whether the voting is about making decisions directly or about selecting the representatives who make the decisions. Let's take a closer look at some of the main types of government in the world.

Monarchy. This type of government is headed by a single ruler at a time, each holding the position in succession. A constitutional (or legal) monarchy is one that carries a written constitution. This constitution, in simple words, is the list of all the rules and regulations that a particular country has to follow. It also includes the responsibilities that each and every person in the country has to fulfill. These rules and regulations are defined under the authority of the head of state.

Totalitarian government. This can be described as a national government that keeps unconditional control over all facets of the lives of the people it governs.

Republic. This is a form of national government in which the head of government is elected, whether by chosen leaders or by individuals in office. The head serves a term of office, meaning that a president is voted into office for a fixed period of time. A republic may or may not rest on popular participation: in places like the U.S.A., where it does, the public chooses its own leaders by voting, while in nations where it does not, the elected leaders perform largely ceremonial functions.

Democracy. The word is derived from an ancient Greek term meaning rule by the people. This is the type of government in which the people have the right to elect their own representatives by voting. When these representatives meet in a parliament in order to make laws, the system is known as a parliamentary democracy.

These are some of the main types of national government that everyone should know.
As scientists continue to wrestle with the vexing problem of how to get humans to Mars and bring them safely home, robotic exploration of the Red Planet has already yielded many amazing discoveries. However, our missions to the planet’s surface have studied only a tiny fraction of the land area, and rovers aren’t likely to get much faster in the future. NASA has just completed preliminary testing on a novel wing design that could one day allow a Mars probe to soar through the planet’s thin atmosphere and cover great distances. There are two projects operating in tandem, both based on the same high-lift boomerang-shaped wing design. There’s the Preliminary Research Aerodynamic Design to Lower Drag (Prandtl-d) and the forward-looking Preliminary Research Aerodynamic Design to Land on Mars (Prandtl-m). NASA scientists have been testing the Prandtl-d design for some time now, but it has only recently been subjected to a full battery of wind tunnel tests. This is essential to understand how the wing will perform in a variety of conditions, including those on Mars if the design is carried over to a space mission. The wind tunnel scale model testing of Prandtl-d was carried out jointly by NASA’s Armstrong Flight Center and Langley Research Center. According to the data, the boomerang wing is remarkably stable, even when it’s completely stalled. That could save a Mars exploration plane from a catastrophic failure when it’s a few million miles away from the nearest repair crew. The airflow patterns over the boomerang wing proved to be totally new to the team, which could account for its ability to generate high lift and remain stable. The next step for Prandtl-m is a high altitude test of the wing design that will take place later this year. A small prototype of the plane will be released at an altitude of 100,000 feet. The atmosphere up that high is a close approximation of Mars, so it’s important to know if Prandtl-m could generate enough lift to stay aloft in such conditions. If the test goes well, that could be huge for future Mars missions. NASA doesn’t expect it will have to design an entire mission around Prandtl-m. The beauty of this design is that it could ride to Mars in a 3U CubeSat (about one foot square) connected to the aeroshell of a Mars rover. This module could be ejected when the rover begins its descent, allowing the plane to deploy and fly a tremendous distance before gliding to the surface. It could be used for geological surveys, imaging, and scouting future landing sites up close. The additional weight of the Prandtl-m craft wouldn’t add much of anything to the launch cost either. NASA believes Prandtl-d could morph into Prandtl-m in time for inclusion on the 2020-era Mars rover. That mission could reach the Red Planet as soon as 2022-2024.
Posted on: 28 September 2016

Pink eye, medically known as conjunctivitis, is a common and very contagious disease. The good news is that it's easy to treat. If you have children, chances are someone in the home has been exposed to it at least once, because it is that common. Here's all you need to know about pink eye, its treatments, and prevention tips.

It Comes in Three Forms
You could suffer from three different types of conjunctivitis: viral, bacterial, and allergic conjunctivitis. The first form is usually due to a cold and doesn't often need medication. In fact, it can't always be treated with medication, as the body naturally needs to fight the virus. The second form can cause damage and needs antibiotics to treat; it usually produces discharge and can lead to the eyes sticking together overnight. The third form is caused by irritation from allergens. Both eyes will be affected, and it isn't contagious.

Children Suffer From It the Worst
Many children suffer from bacterial pink eye because it is so contagious. This is often because children have lax hygiene and are more likely to rub their eyes or touch their faces. Adults know more about how infections are passed on, so they are more likely to be proactive about preventing viruses and diseases from spreading. Teaching children about hygiene and preventing the spread of disease is the best way to prevent pink eye. You can also go around the home with antibacterial wipes and ensure your child washes his or her hands regularly.

Common Symptoms Include Itchiness
The condition has many symptoms, one of which is itchiness. Children are more likely to rub without thinking, and this can make the infection worse. The best thing a child or adult with this condition can do is avoid rubbing the eye and seek medical treatment. Other symptoms include a feeling of grit in the eye, discharge (watery or yellow in color), inflammation, burning sensations, and blurry vision. Pink coloring in the eye is another symptom, which is where conjunctivitis gets its nickname.

Treatment for Conjunctivitis
Seeking help from your pediatrician is the best route to treatment. The exact condition will be diagnosed, and a cream or drops can be prescribed in the event of bacterial pink eye. Considering how contagious pink eye is, it is not uncommon for doctors to prescribe treatment without even seeing the patient in order to avoid spreading pink eye to other patients. When it is virus- or allergen-related, soothing the eyes with cold compresses helps take away the burning sensation. Whichever type of pink eye it is, you can also treat it by soaking cotton wool in lukewarm water and washing the eyes; use each cotton wool ball only once to prevent the infection from passing back into the eye.

Pink eye is very common but is also easily treatable. Look out for the symptoms and book an appointment with your doctor as soon as possible.
Subarctic environments, such as Denali National Park and Preserve's tundra and boreal forest, have undergone drastic changes over the past few decades, likely as a result of a changing climate and increasing temperatures. Because Denali lies at the southern limits of permafrost, the average temperature of the park's permafrost hovers just below freezing, making it particularly sensitive to thawing with just a small increase in mean annual temperature. Warmer temperatures are expected to thaw permafrost, which will greatly affect ecosystems locally. But one of the farthest-reaching consequences of climate change in northern ecosystems may be the effect of thawing permafrost on the global carbon cycle.

Net Balance of C
More than 50% of global terrestrial carbon (C) is stored in permafrost regions as soil organic matter. Carbon naturally enters any terrestrial ecosystem by photosynthesis, since plants take in carbon dioxide (CO2) from the atmosphere as they grow. Carbon returns to the atmosphere as CO2 through the metabolic respiration of plants, animals, and microbes (bacteria). In waterlogged areas, carbon can also be released in the form of methane (CH4). The ecosystem carbon balance is the difference between carbon uptake and carbon emissions. When carbon uptake by plant growth is greater than carbon emissions from metabolic respiration, the ecosystem is a carbon sink, meaning that atmospheric C is stored in biota and soils. When carbon emissions are greater than carbon uptake, the ecosystem is a carbon source, meaning that carbon from the ecosystem (from biota and soils) is released to the atmosphere.

Potential Effects of Thawing Permafrost
Permafrost thaw associated with climate warming can lead to two different impacts that shift the ecosystem carbon balance toward a sink or a source. Warming increases plant growth and promotes the invasion of shrubs and trees into tundra landscapes. These processes can increase the amount of C stored in plant biomass, thus reducing the amount of C in the atmosphere. At the same time, permafrost thaw and the associated environmental changes (e.g., ground surface collapse or subsidence) stimulate the microbial decomposition of soil organic matter. This decomposition can decrease the amount of stored C by releasing more CO2 into the atmosphere. These metabolic by-products (CO2 and CH4) are the same "greenhouse gases" involved in climate change. Thus, when permafrost thaws it may affect the cycling of C to or from the atmosphere, which can create additional global-scale impacts.

Studying the Effects of Permafrost Thaw on C
To learn more about how permafrost thaw affects the ecosystem carbon balance, Dr. Ted Schuur of Northern Arizona University conducts research at a site just outside the northeastern boundary of the park. This tundra site near Eight Mile Lake is underlain by permafrost that researchers first noticed was thawing in the 1980s. Schuur and his research group continuously measure soil temperature and moisture using sensors connected to a data logger to track physical changes in the environment as the permafrost thaws. The site's carbon balance is monitored using an eddy covariance tower that measures wind speed and direction as well as the CO2 concentration of the atmosphere. The site is a well-drained wetland, so the contribution of CH4 to the landscape carbon balance is minimal. Data from the tower sensors are then analyzed to determine the Net Ecosystem Exchange (NEE) of CO2 between the tundra and the atmosphere.
Every year, the NEE of the tundra landscape oscillates between being a carbon source in the winter and a carbon sink during the growing season (May to September). During the growing season, NEE is negative, indicating that the landscape is acting as a carbon sink of atmospheric CO2. This is because growing tundra plants incorporate carbon into new leaves, roots, and stems through the process of photosynthesis, outweighing CO2 losses from metabolic processes. During the winter, however, it is too cold and dark for plants to photosynthesize, so the tundra landscape has a positive NEE. A positive NEE shows that the landscape is acting as a carbon source: it is releasing CO2 from plants and soils to the atmosphere. This CO2 is released as tundra plants and soil microbes break down carbon compounds and use the energy stored in those compounds to fuel their metabolism. Metabolic respiration by plants and microbes releases CO2 year round, but during the growing season this release is overshadowed by the photosynthetic uptake of CO2 by the plant community.

Dr. Schuur and his team monitor NEE year round so that they can capture daily and seasonal changes in the landscape carbon balance as permafrost thaws. Year-round measurements of NEE are also used to determine whether the strength of the growing-season carbon sink is equal to the strength of the winter carbon source. When this is true, the ecosystem carbon balance is stable and the carbon-storing capacity of the landscape does not change. The long-term carbon balance of a permafrost landscape is important because it allows scientists to understand how this ecosystem is responding to climate change.

Findings: Six years of carbon balance monitoring
Monitoring the ecosystem C balance of the tundra near Eight Mile Lake has revealed that this landscape acted as a net carbon source from 2008 to 2013. Photosynthesis by plants in the growing season did not offset the year-round metabolic respiration by plants and soils in five of the six years studied. As a result, the carbon-storing capacity of this landscape was reduced from 2008 to 2013: 276.25 g of C was lost from each m2 of tundra. This C, previously stored in tundra plants and soils, was released to the atmosphere in the form of CO2. Carbon dioxide is a greenhouse gas that is expected to further accelerate global warming and could cause further thaw of permafrost in the future, so it is critical that the scientific community understand whether the carbon balance at this site is typical of warming tundra landscapes. Dr. Schuur and his research team collaborate with permafrost researchers from around the world to compare results across sites and determine the impact permafrost thaw will have on our future climate.
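To make the sign convention described above concrete, here is a small Python sketch using made-up monthly NEE values, not data from the Eight Mile Lake site: negative values represent net uptake, positive values represent net release, and summing them gives the annual balance.

```python
# Sketch of the NEE sign convention, using made-up monthly values in g C per m^2
# (NOT data from the Eight Mile Lake site). Negative NEE = net uptake (sink);
# positive NEE = net release (source).
monthly_nee = {
    "Jan": 15, "Feb": 14, "Mar": 12, "Apr": 8,
    "May": -5, "Jun": -20, "Jul": -30, "Aug": -20, "Sep": -5,
    "Oct": 10, "Nov": 13, "Dec": 15,
}
growing_season = {"May", "Jun", "Jul", "Aug", "Sep"}

growing_nee = sum(v for m, v in monthly_nee.items() if m in growing_season)
winter_nee = sum(v for m, v in monthly_nee.items() if m not in growing_season)
annual_nee = growing_nee + winter_nee

print(f"Growing-season NEE: {growing_nee} g C/m^2 (negative = sink)")
print(f"Winter NEE:         {winter_nee} g C/m^2 (positive = source)")
print(f"Annual balance:     {annual_nee} g C/m^2 "
      f"-> net {'source' if annual_nee > 0 else 'sink'} for the year")
```

With these illustrative numbers, the winter release slightly exceeds the growing-season uptake, so the year as a whole is a net carbon source, qualitatively like the result reported for five of the six years at the study site.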
Pneumonia is an infection that can take root in one or both of your lungs. The infection causes inflammation of the air sacs in the lungs, causing fluid to build up in them. If an individual develops pneumonia caused by bacteria, one of the earliest symptoms is an increased heart rate, and it is very easy for this to develop into a fever. To learn more, a team of home doctors in Gold Coast talks about circulatory health.

How does pneumonia affect the circulatory system?
The pneumonia infection can very easily spread into the bloodstream. Using the bloodstream as a transmission system, the infection from the lungs can spread to other organs. This is potentially very serious, and the organs can end up badly damaged. It is even possible that this spread of infection could lead to death. When bacteria spread in the bloodstream, the medical term is bacteremia, and this can be fatal if it leads to septic shock. Septic shock reduces the blood flow to the major organs in the victim's body, and it can result in organ damage and eventually organ failure.

Some research has indicated that pneumonia can lead to a greater chance of heart attack during and after the infection, possibly even years into the future. Community-acquired pneumonia affects between 5 and 6 million Americans and is the cause of some 60,000 fatalities, so this is not an issue that sits on the periphery of medical concern. There is still some uncertainty as to how community-acquired pneumonia causes cardiovascular disease, but doctors have identified a clear two-way connection. Streptococcus pneumoniae is the bacterium that causes most cases of bacterial pneumonia, and it is known to be able to attack individual heart cells and kill them. It is therefore important that patients do not make light of pneumonia and treat it as the serious issue it really is.
Vector images are a way of describing a picture in computer graphics. The image, referred to as a vector picture, is described by a set of primitives that define curves, lines, points, and polygons, along with their colors. This is radically different from a raster image.

Why choose vector pictures?
- The information can be expressed in a form that is directly readable by a human being (e.g., standard SVG).
- The information can be expressed in a format that uses less space.
- The picture can be scaled without affecting its resolution.

This way of describing picture data also has the undoubted advantage of higher data compression: in practice, a vector image will occupy less space than a corresponding raster. It is also easier to handle and modify, which makes the vector picture well suited to managing considerable quantities of data.

When a vector image is displayed on a device with a high resolution, it will not lose its definition. A line that runs across the display, if represented with raster graphics, is stored as a series of colored pixels arranged to form the line. If we tried to enlarge a section of this line, we would see the individual pixels that make up the line. If the line were stored in vector mode instead, it would be saved as an equation that starts from one point, given by its coordinates, and ends at another point with its final coordinates. Enlarging a part of this line does not create visual artifacts or reveal the pixels making up the picture, since the line will be displayed at the maximum resolution allowed by the screen. If you scale a vector picture up to a resolution of 1024×768 pixels, the definition of the image will not change. Vector graphics are therefore of top quality. (A short code sketch after this article illustrates the difference between the two representations.)

Vector images have significant uses in the fields of publishing, architecture, engineering, and computer graphics. All three-dimensional graphics applications save their scenes by defining objects as aggregates of mathematical primitives. On the personal computer, the most familiar use is the definition of fonts: the fonts used by personal computers are produced as vectors, allowing the user to change their size freely. Vector graphics clearly have a vast array of applications, and they are extremely useful because they occupy little space.

Pros and cons of vector images
Vector graphics are described mathematically, according to their geometric attributes. To be exact, a vector image is described by a set of primitives that describe points, lines, curves, and polygons. They differ from raster graphics, which are described as a grid of colored pixels.

1. Compared to a bitmap, vector graphics typically need less disk space. They are mostly made up of flat colors or simple gradients, which do not demand a great deal of storage. The less data needed to produce the picture, the smaller the file size, so vector images are given preference over other formats for this kind of artwork.

2. Vector images do not lose quality when they are scaled. In the case of raster pictures, a point is reached where it is obvious that the image is made of pixels, so the quality of vector images is superior to that of other sorts of pictures.

3. They can be modified and stored again later. The best part is that the process of modification is simple as well; all sorts of changes can be handled without difficulty.
The resulting file does not take up much more room after a document is altered, which is another point in their favor.

4. Producing them is simple as well. In reality, vector pictures can be created from drawings without much trouble, using easy, user-friendly programs such as CorelDRAW, Macromedia FreeHand, and Adobe Illustrator.

There are also disadvantages:

1. They are generally not suitable for encoding videos or pictures taken from the "real world" (such as photographs of nature), though some formats support mixed composition. All digital cameras store pictures as bitmaps.

2. The information used to describe them needs to be processed by a reasonably powerful machine, i.e., the computer has to be powerful enough to carry out the instructions required to form the final picture. When the data volume is large, this can slow down the rendering of the picture onto the display.

3. Another drawback is that drawing mistakes become visible as soon as the pictures are enlarged beyond a certain degree. This can affect the quality of the pictures when they are employed in the animation industry.

Obtaining vector images
If you are a designer, or if you simply need images or graphics to use on a project, then you will want to know about vector images that are legal to use at no cost. Back when the internet was only starting, and when every document designed in Microsoft Word was covered in ugly, generic cartoon pictures, there were stock images known as Clip Art. The concept behind Clip Art was to offer images that could be used in documents without having to worry about copyright. That notion is now slightly obsolete. In 2009, the internet is awash with free images licensed under what is referred to as a Creative Commons license.

What is Creative Commons?
Creative Commons is a license that enables original artists to share their work, be it a song, a picture, a movie, or anything else, and lets others use it under the terms provided. Licenses can come with restrictions like "not for commercial use" or "you must credit me as the original author." This is a great thing, since it lets you know where you stand with copyright, which, if you are a designer, is important: you do not want your customers to get into trouble. Vector images are images made in software like Adobe Illustrator or CorelDRAW. They are scalable and are an efficient format for graphics that use a smaller range of colors, such as logos and cartoon illustrations.

Free vectors for the graphically challenged
The cool part is that when you combine vector images with a Creative Commons license, you get a whole pile of legally usable pictures available for your graphic design jobs. Perhaps you would like to design a poster for work and are not sure how to find pictures to use, or you want some pictures for the business website but are uncertain where you stand with copyright. There are thousands of free vector images on Picspree available for you, so don't hesitate to take these pictures and use them to create a creative masterpiece. That is exactly what they were meant for.
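To illustrate the raster-versus-vector contrast described in the article above, here is a small Python sketch; the function names and the tiny SVG string are only illustrative. The vector form of a line is just its two endpoints, while the raster form is a list of pixels that grows with the resolution.

```python
# Simplified contrast between the two representations: a line stored as a
# vector primitive (two endpoints, here emitted as a tiny SVG string) versus
# the same line stored as raster pixels tied to one resolution.

def line_as_vector(x1, y1, x2, y2):
    """Vector form: just the endpoints; resolution-independent."""
    return (f'<svg xmlns="http://www.w3.org/2000/svg">'
            f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" stroke="black"/></svg>')

def line_as_raster(x1, y1, x2, y2):
    """Raster form: an explicit list of pixel coordinates for one resolution."""
    steps = max(abs(x2 - x1), abs(y2 - y1))
    return [(round(x1 + (x2 - x1) * t / steps),
             round(y1 + (y2 - y1) * t / steps)) for t in range(steps + 1)]

print(line_as_vector(0, 0, 100, 50))         # same two endpoints at any scale
print(len(line_as_raster(0, 0, 100, 50)))    # 101 pixels at this resolution
print(len(line_as_raster(0, 0, 1000, 500)))  # 1001 pixels when drawn 10x larger
```

Scaling the vector description only changes the numbers inside the string, while scaling the raster multiplies the amount of data, which is why vector pictures keep their definition and stay small on disk.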
Snoring refers to the snorting or rattling noise made by an individual during breathing associated with sleep. It is produced by the vibration of the soft palate and other soft tissues in the mouth, nose or throat. Snoring can be soft and infrequent, or loud and frequent, varying with the individual. Snoring can be graded based upon specific features, and snoring severity increases with the grade. Type of snoring sound There are three sounds associated with snoring when the power of the sound is examined. The duration of each sound depends on the vibrating tissue. Thus vibration of the soft palate produces the longest sound, followed in length by the epiglottal snore and the base of the tongue. It is important to note that in obstructive sleep apnea (OSA), these sounds are combined with more or less complexity, and occur concurrently. There are four types of energy sounds in a possible snore. They are mapped as a snore map. They consist of a low-frequency single syllable (type 1), duplex sounds with low and middle frequencies (type 2), duplex sounds with low and high frequencies (type 3), and triplex sounds with all three types of frequencies (type 4). These sounds create two different snore patterns, namely simple and complex waveform snores. Momentary closure of the airway produces complex waveform snores, while the vibration of an open airway leads to simple waveform snores. Greater complexity is thus present in snores due to OSA, as compared to a simple snore. However, both simple and complex waveforms are mingled in OSA syndrome. Automated analysis of snores has been attempted and involves classifying the sounds in sleep into three: snoring (‘voiced non-silence’), breathing (‘unvoiced non-silence’) and silence. These are segregated with respect to four features. When combined with noise reduction and feature selection, the technique lends itself to automation by achieving an accuracy of almost 97 percent. Mechanism of snoring The vibration of different parts of the throat produces different types of sound during snoring. When the soft tissue of the nasopharynx is involved, the sound is soft and nasal in quality. Vibration of the soft palate and uvula, in contrast, produces a guttural and loud snore which is characteristically throaty. In most snorers, more than one area is involved in the vibration. Tonsil enlargement can also cause snoring. Snoring occurs at peak intensity during stage 4 sleep, or deep sleep, which usually occurs 90 minutes after the onset of sleep. The position of the sleeper also influences snoring, with the loudest sounds occurring when the individual is lying in the supine position. The sound occurs during the stage of inspiration. Sleep produces muscular relaxation which includes that of the throat and airway muscle. This causes constriction of the airways, which increases the velocity of air movement during expiration, and alters the air pressure in the air passages. This in turn causes the sides of the passages to collapse slightly inwards, which promotes soft tissue vibration because of the inrushing air with the next inspiration. When the airway is partially obstructed, as happens with tonsillar inflammation or colds, the same effect is produced, accounting for the onset of snoring with such conditions. Snoring damages the vascularity of the muscles that are involved, leading to their weakening and further airway narrowing. This means snoring will worsen over time unless treated. 
Snoring is more likely in the presence of:
- increased neck circumference (above 43 cm in some studies)
- alcohol, sedatives, and antidepressants, which produce muscular relaxation and so narrow the airways
- smoking and allergies, which can cause airway inflammation that further narrows the airway and produces snoring
- obstructive sleep apnea, which is a known risk factor for snoring
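The automated snore analysis mentioned earlier segments sleep audio into snoring, breathing, and silence. The published 97%-accurate method is not reproduced here; the following is only a rough Python sketch (assuming NumPy is installed) of the general idea, classifying short frames by their energy and zero-crossing rate with arbitrary thresholds.

```python
# Rough sketch of frame-based segmentation: silence, "breathing"-like (noisy,
# many zero crossings) and "snore"-like (energetic, low-frequency) frames.
# Thresholds and frame length are arbitrary; this is NOT the published method.
import numpy as np

def classify_frames(signal, rate, frame_ms=50,
                    energy_thresh=0.01, zcr_thresh=0.15):
    frame_len = int(rate * frame_ms / 1000)
    labels = []
    for start in range(0, len(signal) - frame_len, frame_len):
        frame = signal[start:start + frame_len]
        energy = float(np.mean(frame ** 2))                         # short-time energy
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2)   # zero-crossing rate
        if energy < energy_thresh:
            labels.append("silence")
        elif zcr > zcr_thresh:
            labels.append("breathing")
        else:
            labels.append("snore")
    return labels

# Tiny synthetic demo: 1 s of near-silence followed by 1 s of a 60 Hz tone.
rate = 8000
t = np.arange(rate) / rate
quiet = 0.001 * np.random.randn(rate)
snore_like = 0.5 * np.sin(2 * np.pi * 60 * t)
print(classify_frames(np.concatenate([quiet, snore_like]), rate)[::10])
```

A real system would add noise reduction and richer features before classification, as the studies cited above did, but the frame-by-frame structure is the same.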
The prophet of the transcontinental railroad did not live to see it built. Theodore Judah was known as "Crazy Judah" because of his single-minded passion for driving a railroad through the Sierra Nevada mountains. His advocacy and enthusiasm for the project in California and in Washington, D.C., made possible America's first transcontinental route. Judah constructed the first railroad in California, helped organize the Central Pacific Railroad Co., surveyed routes across the Sierra Nevada, and served as the railroad's agent in Washington, D.C. His scouting, surveying, lobbying, and fundraising efforts defined the route and prepared the way for the technology that would unite a nation.

Theodore Judah and the American railroad matured together. He was born in 1826 in Bridgeport, Connecticut. In 1830, America had just 23 miles of track, but the railroad business was about to explode. As a boy Judah studied civil engineering. By 18 he was a railroad surveyor, giving himself a practical education in a technology not even two decades old.

A Practical Plan
Engineers were in high demand in the late 1840s, as tracks spread across the countryside like creeping vines. Judah's enthusiasm earned him the nickname "Crazy Judah," but by 1856 he and his men had built the Sacramento Valley Line, the first railroad west of the Missouri River. The following year he published a pamphlet, "A Practical Plan for Building the Pacific Railroad," reviewing engineering problems and painting visions of a nation united by tracks — and commerce — from coast to coast. Such a railroad had been discussed for decades, but the financing and engineering obstacles were formidable.

Maps and Money
Nominated by California's 1859 Pacific Railroad Convention, Judah traveled to Washington for a crash course in lobbying. He returned having argued persuasively for transcontinental train travel. But he realized he would have to define a practical route and find private financial backing. By October 1861, he had both: a route over the Donner Pass in the Sierra Nevada, and a group of California businessmen as partners. Soon after President Abraham Lincoln signed the Pacific Railroad Act in 1862, tensions mounted between Judah and his business associates. He decided to find new partners in New York, but he got sick during the journey and died soon after his arrival on the east coast in late 1863. Judah's partners, known as the Big Four — Collis Huntington, Charles Crocker, Mark Hopkins, and Leland Stanford — would reap the rewards of the project Judah set in motion. When it was completed in 1869, the transcontinental railroad made the nation smaller, fostered trade, and improved frontier life.

Working on the Railroad
Stephen E. Ambrose tells the story of the men who linked the East and West coasts in Nothing Like It in the World: The Men Who Built the Transcontinental Railroad, 1863-1869.
A triangle with unequal sides is known as a scalene triangle. Triangles can be classified into several categories depending upon their properties. In this article, we are particularly interested in attributes such as the perimeter and area of a scalene triangle. There are several formulas available for the measures mentioned above, which we will discuss.

Classification of Triangles
Based on Side Length
- Equilateral Triangle – In such a triangle, all the sides are of equal length. An interesting point to note is that all angles measure 60 degrees.
- Isosceles Triangle – In such a triangle, two sides have the same length.
- Scalene Triangle – In this triangle, all three sides are of different measure, so no two sides and no two angles are congruent. For example, the sail of a sailboat is often in the shape of a scalene triangle.

Based on Angle Measure
- Acute Angled Triangle – A triangle in which all three angles measure less than 90 degrees, i.e., it has only acute angles, is called an acute triangle.
- Right Angled Triangle – A triangle in which one angle measures 90 degrees, i.e., it has a right angle, is called a right triangle.
- Obtuse Angled Triangle – A triangle in which one angle measures more than 90 degrees, i.e., it has an obtuse angle, is called an obtuse triangle.

Area of a Scalene Triangle
The area is defined as the 2D space enclosed within the closed boundary of a figure. Any of the following formulas can be used to find the area, depending upon what is known.

1. Heron's Formula
When all three sides are known, we use this formula. Suppose we have a triangle PCN with sides of measure p, c, and n. Then Heron's formula is given by:
Semi-perimeter s = (p + c + n) / 2
Area of PCN = √(s(s − p)(s − c)(s − n))

2. Height Base Formula
If we know the measure of one side of a triangle and the length of the perpendicular dropped to it from the opposite vertex, this formula can be used to find the area:
Area of a triangle = ½ (base)(height)

3. Law of Cosines
If you know the lengths of two sides and the measure of the angle between them, this formula can be applied to find the length of the third side. Once the length of the third side is known, we can use Heron's formula to find the area of the triangle. Suppose we have a triangle PCN with side lengths p, c, n and angles P, C, N, each angle lying opposite the side of the same letter. Then, according to the law of cosines, the third side is given by:
n² = p² + c² − 2pc·cos(N), so n = √(p² + c² − 2pc·cos(N))

Perimeter of a Scalene Triangle
The perimeter is defined as the total sum of the lengths of the boundaries of a figure. If we have a scalene triangle PCN with side lengths p, c, and n, then the perimeter is given by:
Perimeter = sum of all sides = p + c + n

The scope of a topic like triangles is very vast; hence, it is best to join an online educational platform such as Cuemath to help a child with the studying process. At Cuemath, math experts provide resources such as worksheets, workbooks, interactive games, and puzzles to supplement learning. Kids get access to a holistic development environment while being able to maintain their own pace of working. Hopefully, this article gives you an idea of which area formula can be applied to which question, and I wish you all the best!
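For readers who prefer to check the formulas numerically, here is a short Python sketch of Heron's formula, the law of cosines, and the perimeter; the function names and the 3-4-5 example are only illustrative.

```python
# Quick numerical check of the formulas above: Heron's formula for the area
# from three sides, and the law of cosines to recover a missing side first.
import math

def herons_area(p, c, n):
    """Area of a triangle with side lengths p, c, n (Heron's formula)."""
    s = (p + c + n) / 2                        # semi-perimeter
    return math.sqrt(s * (s - p) * (s - c) * (s - n))

def third_side(p, c, angle_N_deg):
    """Law of cosines: the side opposite angle N, given the two sides p and c."""
    N = math.radians(angle_N_deg)
    return math.sqrt(p**2 + c**2 - 2 * p * c * math.cos(N))

def perimeter(p, c, n):
    return p + c + n

# Example: a 3-4-5 scalene triangle.
print(herons_area(3, 4, 5))   # 6.0
print(perimeter(3, 4, 5))     # 12
print(third_side(3, 4, 90))   # ~5.0 (right angle between the sides of length 3 and 4)
```

The same pattern works for any scalene triangle: find a missing side with the law of cosines if needed, then apply Heron's formula to get the area.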
Every four years we add an extra day to February known as Leap Day. This day is added to make sure that the calendar stays in line with the Earth’s rotation. The associated “mascot” of Leap Year is the frog, whose hopping is easily recognizable. Our plan this week was to teach our students about the history and purpose of Leap Year, as well as teach them a little about the frog. To start off the class, we had the students practice a mindfulness activity called “sitting like a frog.” We asked them to close their eyes and sit very still like a frog on a lily pad. We did this for one minute with no talking, and asked them to simply use their imagination to pretend they were a frog waiting to catch dinner. At the end of the minute they could imagine a bug flying by and catching it because they were so still. This was a quick activity that allowed the class to settle down and focus their attention on something for one full minute. This noticeably helped them listen to the next instructions better and be able to concentrate on the activity. Next we taught the class how to make easy origami frogs. We handed out small green squares and led them step-by-step through the folding. After this, we went through the habitat of a frog, what they eat, and what other animals live around them. Based on their answers, we had them build a habitat for their frog step-by-step. Step 1: Fold the square diagonally to create a triangle and crease the fold Step 2: Take one corner and fold it in, following the bottom line of the triangle. The crease should be halfway between the corner you’re holding and the middle of the triangle. Next with the same corner, you fold it back out halfway between the crease you just made and the middle of the triangle. This should make the corner of the triangle stick out from the side and make one leg of the frog. Repeat for the other side Step 3: Flip the frog over and now you have a simple origami frog. You can lightly press the back down to make it hop. We handed out cutout circles for eyes and cutout red tongues that they could glue onto their frog. Step 4: Next, we gave them all the supplies to make a habitat. They learned that frogs live near ponds, lakes, marshes, etc. so we gave them blue construction paper. We also handed out cut out lily pads, flowers, fish, and bubbles to finish off their frogs habitat. They were able to create their own unique places for their frog to live using these supplies, as well as drawing on their own details. Step 5: They learned that a group of frogs was called an army. We gave them more paper squares to fold some more frogs, so that they each were able to go home with an army of frogs. Origami is a well known Japanese practice that has been seen to enhance mindfulness. It can be done almost anywhere and reinforces the need to concentrate on one thing at a time. This has been shown to allow the mind to stabilize and calm down, an important attribute in life. It also promotes hand-eye coordination, which is also needed for writing, reading, playing an instrument or sport, and drawing. Overall, it is an incredibly beneficial activity that can even help students improve academically or physically while still being fun. We found that our students were incredibly interested in learning about frogs, and in turn, teaching us what they knew. We also noticed that students who grasped the origami frog quicker were willing to go around independently and help their classmates encouraging leadership and unity.
Figure 1. Raptors, representative of those that may cause damage by preying on poultry and other birds, pets, and other animals: When two of the same species are pictured, mature birds on the left and immature birds on right. Pictured left to right.(a) the goshawk (Accipiter gentilis), (b) red-tailed hawk, (Buteo jamaicensis), (c) great horned owl (Bubo virginianus). Hawks and owls are birds of prey and are frequently referred to as raptors— a term that includes the falcons, eagles, vultures, kites, ospreys, northern harriers, and crested caracaras. Food habits vary greatly among the raptors. Hawks and owls are highly specialized predators that take their place at the top of the food chain. Some are responsible for the loss of poultry or small game. In the past, raptors were persecuted through indiscriminate shooting, poisoning, and pole trapping. The derogatory term chicken hawk was used generically to identify raptors, especially hawks, but has fallen out of usage during the past two decades. Recently, many people have developed a more enlightened attitude toward raptors and their place in the environment. People who experience raptor damage problems should immediately seek information and/or assistance. “Frustration killings” occur far too often because landowners are unfamiliar with or unable to control damage with nonlethal control techniques. These killings result in the needless loss of raptors, and they may lead to undesirable legal actions. If trapping or shooting is necessary, permits should be requested and processed as quickly as possible. Always consider the benefits that raptors provide before removing them from an area; their ecological importance, aesthetic value, and contributions as indicators of environmental health may outweigh the economic damage they cause. There are two main groups of hawks: accipiters and buteos. Accipiters are the forest-dwelling hawks. North American species include the northern goshawk (Accipiter gentilis), Cooper’s hawk (Accipiter cooperii), and sharp-shinned hawk (Accipiter striatus). They are characterized by distinctive flight silhouettes—relatively short, rounded wings and a long rudder like tail. Their flight pattern consists of several rapid wing beats, then a short period of gliding flight, followed by more rapid wing beats. Accipiters are rarely seen except during migration because they inhabit forested areas and are more secretive than many of the buteos. The largest and least common, but most troublesome, accipiter is the goshawk (Fig. 1). It is a bold predator that feeds primarily on forest-dwelling rodents, rabbits, and birds. Occasionally, it is attracted by free-ranging poultry or large concentrations of game birds and can cause depredation problems. Its breeding range is limited to Canada, the northern United States, and the montane forests of the western United States. Spectacular autumn invasions of goshawks occur at irregular intervals in the northern states. These are probably the result of wide-spread declines in prey populations throughout the goshawk’s breeding range. Cooper’s hawks will occasionally cause problems with poultry; sharp-shinned hawks are rarely a problem because of their small size. The buteos are known as the broad-winged or soaring hawks. They are the most commonly observed raptors in North America. 
Typical species include the red-tailed hawk (Buteo jamaicensis), red-shouldered hawk (Buteo lineatus), broad-winged hawk (Buteo platypterus), Swainson’s hawk (Buteo swainsoni), rough-legged hawk (Buteo lagopus), and ferruginous hawk (Buteo regalis). All buteos have long, broad wings and relatively short, fan-like tails. These features enable them to soar over open country during their daily travels and seasonal migrations. The red-tailed hawk (Fig. 1) is one of our most common and widely distributed raptors. Redtails can be found over the entire North American continent south of the treeless tundra and in much of Central America. They demonstrate a remarkably wide ecological tolerance for nesting and hunting sites throughout their extensive range. Typical eastern redtails nest in mature forests and woodlots, while in the Southwest they often nest on cliffs or in trees and cacti. Their diet, although extremely varied, usually contains large numbers of rodents and other small mammals. Redtails occasionally take poultry and other livestock, but the benefits they provide in aesthetics, as well as in the killing of rodents may outweigh depredation costs. Other species of buteos rarely cause problems. Owls, unlike hawks, are almost entirely nocturnal. Thus, they are far more difficult to observe, and much less is known about them. They have large heads and large, forward-facing eyes. Their flight is described as noise-less and moth like. There are 19 species of owls in the continental United States. They range in size from the tiny, 5- to 6-inch (12-to 15-cm) elf owl (Micrathene whitneyi) that resides in the arid Southwest, to the large, 24- to 33-inch (60-to 84-cm) great gray owl (Strix nebulosa) that inhabits the dense boreal forests of Alaska, Canada, and the northern United States. The great horned owl (Bubo virginianus, Fig. 1) is probably the most widely distributed raptor in North America. Its range extends over almost all the continent except for the extreme northern regions of the Arctic. These large and powerful birds are considered to be the nocturnal complement of the red-tailed hawk. Great horned owls generally prey on small-to medium-sized birds and mammals and will take poultry and other livestock when the opportunity presents itself. They are responsible for most raptor depredation problems. Scott E. Hygnstrom. Extension Wildlife Damage Specialist. Department of Forestry, Fisheries and Wildlife University of Nebraska. Lincoln, Nebraska 68583-0819 Scott R. Craven. Extension Wildlife Specialist Department of Wildlife Ecology. University of Wisconsin-Madison, Madison, Wisconsin 53706
After word spread that children were unable to get cancer treatments because the National Institutes of Health was shut down due to the failure of Congress to pass a budget or a continuing resolution, the US House of Representatives passed a bill to fund the NIH. After the bill was tabled by Senator Reid, preventing the Senate from voting on the measure, Dana Bash from CNN asked the Senate Majority Leader why. Senator Reid responded, referring to the House, "What right do they have to pick and choose which part of government is going to be funded?"

This question shows a lack of understanding of the separation of powers designed into the US government by the framers of the Constitution. They purposely required that any funding measure start in the House, designed to be a representative body of the people, then be passed in the Senate, representing the states, and then be signed by the President. The founders knew the dangers of concentration of power, having just finished fighting the British for independence from the King, and purposely designed the system to make sure a majority of the states and the people all agreed before anything became law. They also required that funding be allocated each year to provide a mechanism by which measures that could not receive such consent could be eliminated. Therefore, in Congress, each house gets an equal vote, as does the President.

Likewise, in a financially healthy family each member gets a vote when making a budget. More than just a way to control spending, a family budget provides a means of communication between a couple. They should agree on how much to spend on meals out, vacations, children's activities, clothing, cable, gifts, and other expenses.

Living on a budget is often seen as limiting. There is nothing to keep a family from going deep into debt, however, even with a budget. Witness that the US went into debt by $9 T even with a budget between 1776 and 2008. The US went even faster into debt, however, between 2008 and 2013 when, instead of passing a new budget each year, Congress started passing a continuing resolution stating that it would spend the same amount as the year before. Because Congress stopped discussing what to fund and, more importantly, what not to fund, the debt climbed another $8 T in just five years, reaching $17 T now. This does not include the large obligations of Social Security and Medicare. A family that does not keep a budget also tends to find that it goes quickly into debt without being able to determine where the money went.

Having a budget means making conscious decisions about where and how to spend money. One avoids making the rash purchases that are soon forgotten. Oddly enough, rather than feeling limited, most people who budget each month feel that they actually have more money than they did before. This is because they do not spend money on worthless things.

So, just as with Congress, each party must approve a purchase before it goes on a budget. Often one person will develop the initial budget, as is the job of the House, but then the other party gets to make inputs and changes, as does the Senate. After the changes are made, a conference is typically held. Note that this is the current issue in Congress, where the House is asking the Senate for a conference, but Senator Reid will not allow this until a clean continuing resolution is passed by the House (one in which there are no provisions to strip out Obamacare).
This would mean that the Senate gets everything it wants, however, and the House gets nothing. This is like a husband demanding that his wife agree to all of his demands before he will discuss the family budget with her. This is obviously not workable. For a functional relationship, unlike the dysfunctional one seen in Congress, here are the steps to forming a budget:

1. One individual – whoever is more of a planner – develops the first draft. This should list all major purchases expected for the month. Estimates for food, utilities, and the like should be based on experience from previous months. Items such as clothing, entertainment, and activities should be based on needs for the current month. This is where the planning comes in. There should also be a small budget for each party to spend as he/she wants, no questions asked. Things like investments should be based on how much money is available that month and the plan for the year. In months where other expenses are low, more should be dedicated to investments, and vice versa. If the end of the year is approaching and investments are well below the goals, it is time to start cutting other optional expenses and direct the proceeds toward investments.

2. The second individual should then review the budget at a family meeting. Here suggestions are made, things are negotiated, and priorities are set. If both parties do not agree to a particular item, it should not be funded. Remember that each person gets a vote.

3. Once both parties agree, it should become "law." From that point, the budget should be followed. If there is a pressing reason for a change, the couple should get back together and make a conscious decision about where to pull the money from.

When both parties agree, great things can happen. As Congress is learning, compromise cannot come without sitting down together.

Contact me at [email protected], or leave a comment.

Disclaimer: This blog is not meant to give financial planning advice; it gives information on a specific investment strategy and on picking stocks. It is not a solicitation to buy or sell stocks or any security. Financial planning advice should be sought from a certified financial planner, which the author is not. All investments involve risk, and the reader is urged to consider risks carefully and seek the advice of experts if needed before investing.
Spinal Muscular Atrophy

Spinal muscular atrophy is rare enough that you may not have heard of it, or perhaps you've heard of it but have no idea what it is and how it affects the body.

What Is Spinal Muscular Atrophy?
Spinal muscular atrophy (SMA) is a group of inherited genetic muscle-wasting disorders and is classified as a motor neuron disease. This is a rare condition that affects approximately 1 out of every 10,000 live births. The disease impacts the nerve cells that control voluntary muscle movement (also known as motor neurons) and causes these cells to die off. SMA's primary effect is on muscles because the muscles in the body don't receive signals from the nerve cells as they should. Atrophy means the muscles are getting smaller, which is what happens when muscles are not active.

The Affected Muscles
Most nerve cells that control muscles are located in the spinal cord. Motor neurons receive nerve impulses sent from the brain to the spinal cord and transmit them to the muscles through peripheral nerves. The muscles that are impacted the most by SMA are the ones closest to the center of the body, which are referred to as proximal muscles. Proximal muscles are necessary for walking, sitting, moving the head, and crawling. Distal muscles (the ones farther away from the center of the body) can be affected as well, but usually not at the beginning of SMA.

Types of SMA
The classification of SMA varies based on how much survival motor neuron (SMN) protein an individual has. This protein is critical for the health and survival of motor neurons; an insufficient amount of this protein results in muscle weakness. The more SMN protein there is, the longer it takes for SMA symptoms to start, and the milder the disease may be. Those who are diagnosed with SMA will fall into one of five categories based on their highest level of motor function:
- Type 0 (most severe) – This type is defined by decreased fetal movement, joint abnormalities, difficulty swallowing, and respiratory failure.
- Type I (severe) – The most common type of SMA is also known as infantile-onset SMA or Werdnig-Hoffmann disease. This type is generally noticeable in children shortly after birth. Muscle weakness on both sides of the body, lack of motor development, poor muscle tone, and twitching of the tongue are major clinical signs of SMA type I.
- Type II (intermediate) – Type II is also called Dubowitz disease. It is usually found in patients between 6 and 18 months of age. Signs of type II in children are trembling of the fingers, inability to walk even 10 feet by themselves, and possible respiratory complications. Those with SMA type II may lose the ability to sit independently by their mid-teens or later.
- Type III (mild) – Type III SMA is known as Kugelberg-Welander disease and affects the legs more severely than the arms. Patients learn to walk but fall frequently; however, they may be able to walk into their adult years. Young children may have trouble walking up and down stairs.
- Type IV – Type IV is referred to as adult SMA. This type impacts the proximal muscles much sooner than other muscles. The onset of muscle weakness is after age 10, but patients are usually ambulatory until about age 60. The leg muscles are affected before the arms, and the hands typically stay strong enough to use for basic functions of modern life.

What Causes Spinal Muscular Atrophy?
Chromosome 5 SMA (the most common form of SMA) is caused by a deficiency of the motor neuron protein SMN, which is necessary for normal motor neuron function.
SMA is caused by a missing or abnormal survival motor neuron 1 (SMN1) gene on chromosome 5. This gene should produce survival motor neuron protein, but in those with mutated genes, the absence or shortage of this protein causes problems for motor neurons. SMA is an autosomal recessive genetic disease, which means both parents must carry the faulty gene for a child to be affected. About 1 in 40 people are carriers of the faulty gene, but because the mutation is recessive, it is masked by the other, working copy of the gene and causes no symptoms in carriers. Rarer forms of SMA (non-chromosome 5 SMA) are caused by mutations in other genes.

Symptoms of Spinal Muscular Atrophy
The age when symptoms begin roughly aligns with the degree to which motor function is affected: the earlier in life the disease starts, the greater the impact on motor function. The primary symptom of chromosome 5-related SMA is weakness of the voluntary muscles. The muscles most affected are proximal muscles such as the shoulders, hips, thighs, and upper back. If the muscles of the back weaken, spinal curvature can develop. Complications arise when SMA affects breathing and swallowing, resulting in abnormalities in these functions. Other forms of SMA, which are not related to chromosome 5, mostly affect the distal muscles, although distal muscles may eventually become affected in chromosome 5 SMA as well, depending on the severity of the diagnosis. Complications of SMA can also include scoliosis, joint contractures, pneumonia, and metabolic abnormalities.

Treatment of Spinal Muscular Atrophy
Unfortunately, there is no cure for SMA. Management of the condition starts with the diagnosis and classification into one of the SMA categories. Pulmonary management is an option: children with SMA type I can survive beyond 2 years of age when offered tracheostomy or non-invasive respiratory support. An intermittent positive-pressure breathing device has also proven effective. Gastrostomy (placement of a feeding tube) should be considered in the treatment plan, and consulting with your doctor will give you a good diet to follow for your specific needs. In the USA in 2017, Spinraza (nusinersen) became the first FDA-approved drug to treat children and adults with SMA. Current treatment research is focused on ways to increase the body's ability to produce SMN protein and help motor neurons survive hostile conditions. SMA is a serious condition, but researchers are optimistic that there will be better treatment available in the future and, one day, a cure.
Setting: Groups of 2 to 4 participants at tables or on the floor Materials: - paper, pens Poster of the 5 senses: Smell, Touch, Sight, Hearing, Taste 1. Create picture cards of various objects facing down on each table 2. Participants choose 1 card without looking at it before hand. No one else is allowed to see the card chosen. 3. 5 to 10 minutes is allowed for each person to write down or gather his thoughts on how the object will be described without saying the name of the object. At least 2 of the 5 senses must be used. 4. Each person takes his turn describing the object card they are holding. No one is allowed to guess what each object is at this point. 5. When all have created their descriptions, a designated leader of the group gathers the cards without looking at them, scrambles them, and places the cards, pictures up on the table. 6. As a group, you decide which picture went with which description. 7. Reflection - Whose descriptions were the most vivid? What made those descriptions so vivid? Option - Make it harder by having several different types of the same object (different types of hats, different types of shoes)
Mercury is the smallest, densest and least explored planet around the sun. More than half of it is virtually unknown. Insights into this mysterious world of extremes could shed light on how planets were made in our solar system, astronomers say. NASA's MESSENGER probe will be the first spacecraft to image the whole planet, making its initial flyby of Mercury Jan. 14 as part of a long process to settle into orbit. "With MESSENGER, many of Mercury's secrets will now be revealed," said NASA's planetary science division director James Green. A list of some of these is below. Mercury's hidden side The only spacecraft to ever visit the solar system's innermost world — NASA's Mariner 10 — mapped less than 45 percent of Mercury's surface, a heavily cratered landscape. This means more than half the planet is unknown to us, save for relatively poor observations from Earth-based radars. "We can't get cocky about what the other side of Mercury looks like. So far, every solar system body has looked very different from every other one," said Faith Vilas, director of the Multiple Mirror Telescope (MMT) Observatory at Mt. Hopkins, Ariz. "We're expecting some major surprises from it." Ice near the sun? On the closest planet to the sun, where temperatures can reach more than 800 degrees Fahrenheit (425 degrees Celsius), there might surprisingly be ice. Ice is highly reflective to radar, and Earth-based radar suggests deposits of frozen water might be hidden in deep, dark craters at Mercury's poles that have never seen sunlight. This water might have come gassing up from within the planet or from meteorite impacts. MESSENGER will search for hydrogen at the permanently shadowed floors of polar craters. If the spacecraft discovers any, MESSENGER may have found ice amidst an inferno. Is Mercury shrinking? Mercury could be shrinking as its core slowly freezes. Pictures from Mariner 10 revealed the planet's surface appears to have buckled from within, resulting in gigantic cliffs more than a mile high and hundreds of miles long biting into Mercury. MESSENGER will look for any evidence of such crumpling on the world's hidden side and will also study the planet's metal core by analyzing that world's magnetic field. Do a band of small asteroids dubbed "vulcanoids" lie inward of Mercury's orbit, hidden in the glare of the sun? MESSENGER has a chance of spotting these asteroids as it approaches Mercury, although its opportunities are limited. To keep the sun from frying it, MESSENGER hides itself behind a sunshade pointed at the sun at all times, and its scientific instruments are pointed away from the sun. Nevertheless, scientists will use MESSENGER "to chase down any hints there might still be a modern population of vulcanoids," said the MESSENGER mission's principal investigator Sean Solomon. Where does Mercury's atmosphere come from? Mercury's incredibly tenuous atmosphere is unstable, with gases regularly escaping the planet's weak gravity. How Mercury's atmosphere gets constantly replenished is unclear. Researchers suspect the hydrogen and helium in Mercury's atmosphere is continuously brought there by the solar wind, the supersonic stream of charged particles from the sun. Other gases might have evaporated off Mercury's surface, seeped from inside the planet or been brought in by vaporized meteorites. MESSENGER will closely study the planet's atmosphere to pinpoint how it gets generated, Vilas said. Why is Mercury magnetic? 
A completely unexpected discovery Mariner 10 made was that Mercury possessed a magnetic field. Planets theoretically generate magnetic fields only if they spin quickly and possess a molten core. But Mercury takes 59 days to rotate and is so small — just roughly one-third Earth's size — that its core should have cooled off long ago. To solve this mystery, MESSENGER will probe Mercury's magnetic field. There was some thinking that the field might have become inactive, but last year, scientists discovered Mercury seems to have a molten core after all, so the planet might still be actively generating a magnetic field after all. Why all that metal? Mercury is extraordinarily dense, leading researchers to estimate that its iron-rich core potentially makes up about two-thirds of the planet's mass, a startling figure double that of Earth, Venus or Mars. In other words, Mercury's core might take up roughly three-quarters of the world's diameter. One theory explaining this bizarre density is that huge impacts billions of years ago might have stripped Mercury of its original surface, Vilas explained, collisions that also shifted the planet toward the sun to its current location. Another theory suggests Mercury simply formed where it now lies. To see which theory concerning Mercury's origins might be right, MESSENGER's battery of miniaturized scientific instruments will scope out the planet's geology. Understanding how Mercury formed will shed light on how all the planets evolved, Solomon said.
ABCs of Mathematics This alphabetical installment of the Baby University series is the perfect introduction for even the youngest mathematicians! A is for Addition B is for Base C is for Chord From addition to zero, The ABCs of Mathematics is a colorfully simple introduction for babies―and grownups―to a new math concept for every letter of the alphabet. Written by an expert, each page in this mathematical primer features multiple levels of text so the book grows along with your little mathematician.
Module 14 - Related Rates
Lesson 14.3: Two Ships

A related-rate problem that models two ships as they move away from each other is discussed in this lesson. Two ships start at a point O and move away from that point along routes that make a 120° angle. Ship A moves at 14 knots and Ship B moves at 21 knots.

Modeling the Positions of the Ships
Suppose that Ship A is moving along the positive x-axis at 14 knots. Enter the parametric equations for Ship A's position at time t hours. Parametric equations can be used to model the position of an object at time t that is moving at speed v along a line that forms an angle θ with the positive x-axis. Such equations have the form
x = vt·cos θ, y = vt·sin θ
For Ship A, θ = 0°, so x = 14t and y = 0. Enter the equations for Ship B's position, which is moving at 21 knots along a line that forms a 120° angle with the path of Ship A: x = 21t·cos 120°, y = 21t·sin 120°.

Animating the Motion of the Ships
Make sure the Graph Order in the Graph Formats dialog box is still Simultaneous.

Finding the Distance between the Ships
The distance c(t) between the ships can be found by using the distance formula:
c(t) = √((14t − 21t·cos 120°)² + (0 − 21t·sin 120°)²) = √931·t ≈ 30.5t
Use the restriction t ≥ 0 because the problem starts at t = 0.

Finding the Speed at which the Ships are Moving Apart
The speed at which the ships are moving apart can be found by finding the derivative of c(t) with respect to t, or just by observing from the formula for c(t) that it is √931 ≈ 30.5 knots.

14.3.1 Approximately how far apart are the ships after two hours?

Modifying the Problem
Assume that both ships travel in the same direction and at the same speed as before, but Ship A begins its journey 5 nautical miles from point O and Ship B begins 3 nautical miles from point O.

14.3.2 Find the derivative of c(t) for this modified problem.
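For readers who want to check the numbers without a graphing calculator, here is a small Python sketch (not part of the original lesson) that models the two ships parametrically and confirms the separation rate numerically. The variable names and the finite-difference check are our own illustrative choices:

import math

v_a, v_b = 14.0, 21.0            # speeds in knots
theta = math.radians(120)        # angle between the two routes

def pos_a(t):                    # Ship A along the positive x-axis
    return (v_a * t, 0.0)

def pos_b(t):                    # Ship B along a line 120 degrees from Ship A's path
    return (v_b * t * math.cos(theta), v_b * t * math.sin(theta))

def distance(t):
    (xa, ya), (xb, yb) = pos_a(t), pos_b(t)
    return math.hypot(xa - xb, ya - yb)

print(distance(2))                                  # about 61 nautical miles after two hours
print((distance(2.001) - distance(2)) / 0.001)      # about 30.5 knots, matching sqrt(931)

The printed values agree with the closed-form result c(t) = √931·t, which answers question 14.3.1 (roughly 61 nautical miles) and confirms the constant separation speed in the unmodified problem.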
1. Consider a competitive industry consisting of 100 identical firms each with the following cost schedule: Output Total Cost Market demand is given by the following schedule: Price: $360 290 230 180 140 110 80 Quantity Demanded: 400 500 600 700 800 900 1000 a. Draw the supply curve for an individual firm. On a separate graph, draw the demand and supply curves for the industry as a whole. Indicate the equilibrium price and output. Now draw the individual firm's demand curve on your first graph and show the firm's equilibrium, price and output. b. Explain why the equilibria found in part (a) are only short-run equilibria. What will happen to the industry in the long run? Describe the long-run equilibrium in detail. 2. Turnip farmers are complaining that the price of turnips is too low. They want the government to buy turnips for the school lunch program. Senator Smith notes that several years ago they had the same complaint. In response the government started to buy turnips to produce energy, a program that continues today. That made turnip farmers happy for a while, but now they're upset again, and there seems to be more of them than ever. Explain what happened, using a diagram or two. 3. After several years of rapid technological advance, the home insulation industry has settled down. All firms now use the same production method and entry into the industry is so easy that it is considered perfectly competitive. Each firm in the industry has the following cost functions: TC = Q2 + Q + 90 MC = 2Q where Q is the quantity of insulation, measured in pounds. a. What is the supply curve for a single firm? Suppose there are 1,000 firms in the industry. What is the supply curve for the industry as a whole? b. The market demand curve for insulation is: Q = 30,000 - 1,000(P). What are the equilibrium price and quantity in the insulation market? c. How much does each firm produce and how much profit does each firm make? d. [Parts d and e are harder than the other problems you have been assigned.] To encourage conservation, the government offers firms a subsidy of $3 per unit of insulation produced. What happens to price, quantity, and profits in the short run? Explain the changes in the insulation industry that this subsidy will lead to in the long run. Identify the welfare loss induced by the subsidy in the long run and compute its value. e. Suppose instead of a subsidy, the government mounts a publicity campaign to encourage conservation, and as a result, the demand curve shifts to 35,000 - 1,000(P). What will happen in the short run and the long run? 4. The Quantum Electronics Company is about to market a new combination cell phone//PDA/camera/espresso maker. Its market research team predicts the following demand schedule for this revolutionary product: Price ($) 400 380 360 340 320 295 270 240 Quantity 1000 1200 1400 1600 1800 2000 2200 2400 The company's engineers have estimated the costs of production as follows: 1. Development costs: $100,000. 2. Planning and construction of assembly lines: $80,000. 3. Overhead: $80,000. 4. Materials: $100 per set. 5. Labor: $60 per unit (10 hours at $6 per hour). a. Calculate TR, MR, TC, TFC, TVC, and MC for each quantity of output given in the demand schedule. b. What quantity should Quantum produce and what price should be charged if the company wants to maximize profits? What would profits be? c. Suppose that the engineers discover an error in cost estimates. Assembly line costs should be $100,000 rather than $80,000. 
What quantity should Quantum produce and what price should be charged? What would profits be?

d. Suppose instead that Quantum's workers win a new contract calling for a wage of $10 per hour. What quantity should Quantum produce and what price should be charged? What would profits be? (Assembly line costs remain at $80,000.) Assume materials cannot be substituted for labor.

e. Now suppose an excise tax of $40 per unit is imposed. What happens to quantity, price and profits? (Wages and assembly line costs are as in part b.)

5. You are in charge of deciding how to allocate food-concession spaces at a large new airport. There are many spaces available. Two proposals have been made:
a. Auction the spaces in a single block, thus granting the winning bidder a monopoly of food services at the airport.
b. Auction the spaces individually, with the requirement that no firm can buy more than one space.
Under either option, many firms will be interested and will bid. Assume that there are no economies of scale here; i.e., it costs no more for two firms to sell 500 meals each than for one to sell 1,000 meals. Which proposal would you pick to maximize the airport's receipts from the auction? Which would you pick to maximize overall net benefits to society of this activity? Explain.

1. See attached spreadsheet "1." You can see how to graph supply and demand curves. You should be able to construct another chart with both curves. You can also find equilibrium by equating the equations for supply and demand, which are shown on the charts. In the long run, all inputs are variable, and firms can enter and exit the industry.

2. See the attached file. Government subsidies move the supply curve outward over the long term. This causes the price to fall. Prices will fall in the long term as long as turnip growers are free to enter and ...

Spreadsheet and graphs demonstrate how the forces of supply and demand create equilibrium in different circumstances.
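As a purely illustrative check on problem 3, parts (b) and (c), the Python sketch below computes the short-run market equilibrium for the insulation industry using the marginal cost MC = 2Q exactly as stated in the problem (so each firm's supply is Q = P/2) together with the stated total cost and demand curves. The function names are ours, not part of the assignment:

n_firms = 1000

def firm_supply(price):
    return price / 2                         # from P = MC = 2Q

def market_demand(price):
    return 30_000 - 1_000 * price            # given demand curve

# Industry supply is n_firms * (P/2) = 500*P.
# Setting supply equal to demand: 500*P = 30,000 - 1,000*P  =>  P = 20.
eq_price = 30_000 / (n_firms * 0.5 + 1_000)  # 20
eq_quantity = market_demand(eq_price)        # 10,000 pounds

q_per_firm = firm_supply(eq_price)           # 10 pounds per firm
total_cost = q_per_firm**2 + q_per_firm + 90 # TC = Q^2 + Q + 90 = 200
profit = eq_price * q_per_firm - total_cost  # 200 - 200 = 0

print(eq_price, eq_quantity, q_per_firm, profit)

With these assumptions the market clears at a price of $20 and 10,000 pounds, each firm produces 10 pounds, and profit is zero, which is consistent with the long-run, free-entry story in the posted answer to problem 1.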
Oral health is an important aspect of our overall well-being. While brushing your teeth is essential to your daily dental hygiene routine, flossing should not be overlooked. Flossing is a vital practice that can help prevent tooth decay, gum disease, and other oral health issues. Flossing is a process of cleaning between the teeth and removing food particles, plaque, and bacteria from areas that a toothbrush cannot reach. The American Dental Association (ADA) recommends flossing at least once a day to maintain good oral hygiene. However, many people choose not to floss because it adds more time to their morning and nightly routines. This decision can be detrimental to your oral health. Lack of flossing can drastically increase your risk of severe dental disease. Over time, you can experience discolored or loosened teeth. Flossing helps remove plaque between the teeth, which can cause tooth decay. Plaque is a sticky film that forms on the teeth. It is a type of harmful bacteria that feeds on sugars and starches in food. These bacteria produce acid that erodes the tooth enamel, leading to cavities. Flossing helps prevent tooth decay by removing plaque and food particles between teeth and gums. Prevents Gum Disease Gum disease is a bacterial infection that affects the gums and bones supporting the teeth. The initial stage of gum disease is called gingivitis, which causes swelling, redness, and bleeding of the gums. Without treatment, gingivitis can eventually progress, leading to tooth loss. Flossing effectively prevents gum disease as it helps remove the plaque and bacteria that cause the infection. Reduces Bad Breath The bacteria in your mouth can cause bad breath, also known as halitosis. When you don’t floss, the food particles in your mouth can begin to rot, creating a foul smell. Additionally, the plaque in your mouth can break down the enamel and soft tissues in your mouth. Flossing helps remove the food particles and bacteria contributing to bad breath. By flossing regularly, you can keep your breath fresh and improve your oral hygiene. Improves Gum Health Flossing is essential for your gum health. When you floss, you remove the plaque and bacteria from your gum line. With the plaque gone, you reduce the irritation that can cause inflammation. Gum disease can cause red, swollen, and bleeding gums, leading to tooth loss if left untreated. Flossing regularly can help keep your gums healthy. In the long run, flossing saves you money. You may develop tooth decay and gum disease if you do not floss regularly. Unfortunately, these issues can lead to costly dental procedures such as fillings, root canals, and even tooth extractions. Flossing every day can reduce your chances of gum disease and pricey dental bills. Improves Overall Health Not only does flossing benefit your oral health, but it can help your overall health as well. Recent studies have shown a link between gum disease and other health problems, such as heart disease, stroke, and diabetes. However, removing harmful plaque can minimize your risks of these serious health concerns.
The physician and professor Herman Lundborg (1868–1943) was the founder and first head of the Swedish state Institute for Racial Biology in Uppsala, 1922 to 1935. It was the world’s first state institute for racial biology. Since 1910 in Germany there had been plans to institute a kind of publicly financed national office whose assignment would be to register the population. Research was also being carried out in Britain and the United States, though financed by patrons and private funding. To Lundborg and his German researcher friends it was, however, important that a new type of “modern population improvement”, led by scientists, be undertaken by the state. The assignment included, for example, deciding which individuals in the population ought to be sterilised; decisions of that kind should not be made by individuals, neither physicians nor patients; that should be the responsibility of a state institution, in all countries. In Germany the plans were thwarted by the outbreak of World War One, but in neutral Sweden the theories continued to develop, even during the war years. Herman Lundborg had come into contact with the leading names in the German racial hygiene movement in the early 1910s. They were a small yet influential group of individuals who were later to become well-known racial hygiene experts in Nazi Germany; they included Ernst Rüdin, Fritz Lenz, Eugen Fischer and Alfred Ploetz. When Hitler came to power, they were able to implement what had been their shared vision for over 20 years, of “modern population improvement” – a “biological policy”. Lundborg – their friend and expert abroad, professor at a state institute in blond, blue-eyed Scandinavia – lent legitimacy and respectability to Nazi race politics. His Swedish racial research amounted to confirmation from abroad that what was being maintained by right-wing radical and racist circles was actually true. The professor at the state institute in Sweden delivered the same message: the racial theories represented reality – the theories of higher- and lower-standing races and of the threat of racial mixing which were to form the foundation of the emerging Naziism. How Herman Lundborg’s science could be used to confirm Nazi ideology became apparent in 1930, for example when the head of the SS, Heinrich Himmler, referred to Lundborg; it was in conjunction with making the SS racially pure: all SS men, and even their prospective wives, were to be subjected to racial investigation. Himmler referred to a list of race theoreticians, authors in the United States, France and Germany; but Lundborg was the only one on the list (apart from Hans F. K. Günther who had recently been appointed) who was a professor by title and held an official science faculty position at a university. Lundborg was also the only one to have published so many weighty scientific tomes, which added to his list of merits. Moreover, an account is given, for the first time, in the book of Hans F. K. Günther’s years in Scandinavia, 1923 – 1928. He was later appointed professor of race theory, first in Jena, and later in Berlin with his own institute. In Nazi Germany, Günther was known as “Rassengünther” (“Race Günther”) and his famous book on German race theory, Racial Science of the German People (Rassenkunde des deutschen Volkes, first published in 1922, but later revised during his time in Uppsala) was used as a textbook both by the SS and at universities in Nazi Germany. It was widely spread, 430,000 being printed in Germany up and till 1945. 
The influence of the Institute for Racial Biology in Uppsala left its mark when certain concrete methods were brought to Germany from Sweden. Methods and aids, developed by Lundborg in connection with his major race investigations of the 1920s, were to be re-used in the Nazis’ genocide politics of the 1940s. When peoples were being driven from their lands on the Eastern Front, in a comprehensive process of selection and removal, the race experts of the SS Rasse- und Siedlungshauptamt (SS Race and Settlement Main Office) used investigation form cards modelled on Lundborg’s. Even the race theory implemented by the SS as they drove people from their land was in some decisive detail based on Swedish racial biology. On the Lundborg model ‘race-cards’ there were boxes to be ticked: one was “ob”, standing for East-Baltic race. That particular kind of race, East-Baltic race, was something Hans F. K. Günther had learnt to “see”, and he included it in his German race theories during his years in Uppsala. “East-Baltic race” was to be classed as an undesirable type of individual in the future state of Germania. When that box got ticked for many of those that were examined in Czechoslovakia and Poland their chances of survival greatly diminished.
The signing of the Paris Agreement in 2015 was supposed to be a turning point in the fight against climate change, with almost every nation committing to reduce their carbon emissions. But many countries have already fallen behind on the targets they set, and emissions worldwide have continued to rise. Beyond that, the targets were too conservative to successfully limit global temperature rise to 2 degrees Celsius, the goal set out in the agreement—so even if all of the reductions happen, they will merely delay catastrophic climate change, not prevent it. As Julio Friedmann, a top Department of Energy official under President Obama, recently argued, "Winning slowly is the same as losing."

The only way to make up ground is to aggressively pursue an all-of-the-above approach that utilizes every strategy to reduce carbon emissions, or decarbonize. This must include investing heavily in carbon capture, utilization and storage (CCUS)—a suite of technologies that pull carbon dioxide from smokestacks, or even from the air, and convert it into useful materials or store it underground. CCUS technology can zero out the carbon emissions from fossil fuels used in electricity generation and in industries that renewable energy sources cannot serve, as well as remove previous emissions directly from the atmosphere.

Although CCUS has been dismissed in the past as too expensive and unproven, and development and demonstration projects have faced opposition in the United States, recent gains in efficiency and drops in cost have made the technologies far more effective and scalable. Further investment will be critical to enable broader adoption of these recent improvements. Broader adoption of CCUS will be crucial: the Intergovernmental Panel on Climate Change (IPCC), which comprises the world's foremost climate and energy experts, calculated that the cost of limiting warming to 2°C more than doubles if CCUS isn't deployed. Forecasts by the International Energy Agency (IEA) go further: the agency predicts that the reduction in emissions necessary to limit global warming to 2°C is impossible without CCUS; through 2050, the technology must provide at least 13 percent of the emissions reductions needed to limit warming. The transition to clean energy has become an inevitability, rather than a possibility, but that transition's ability to achieve deep decarbonization will falter without CCUS.

BROAD DECARBONIZATION THROUGH CCUS: NOT A PIPE DREAM

Decarbonization through CCUS is possible through three primary paths: retrofitting existing power plants to decarbonize fossil fuel electricity generation, reducing emissions in heavy industries that renewables cannot penetrate, and directly removing carbon from the atmosphere. Power plants can be retrofitted with CCUS technology to capture emissions from existing coal, oil and natural gas electricity generation. Even optimistic projections for the clean energy transition make it clear that fossil fuels will not disappear anytime soon: an IEA scenario for a sustainable future forecasts that fossil fuels will still make up 60 percent of the energy mix by 2040. Though coal plants, the most carbon-intensive form of electricity generation, are closing in the United States, new capacity is growing in the developing world, and these plants are expected to produce power for decades. Cutting emissions from existing fossil fuel plants with CCUS technology will thus be critical to combating climate change.
These retrofits could be made more appealing in a future with a circular carbon economy, in which captured carbon could be resold and recycled for other uses, including producing concrete or plastics. Beyond the electricity sector, CCUS technologies can also help decarbonize other sources of emissions. CCUS can tackle emissions in heavy industry (including the production of cement, refined metals and chemicals), which account for almost a quarter of U.S. carbon emissions. In addition, direct carbon removal technology–which uses chemicals to capture and convert carbon dioxide from the air, rather than from a smokestack–can offset emissions from large emitting industries that cannot readily implement other clean technology, like agriculture. Though this form of carbon capture technology is much less developed than carbon capture from smokestacks, it offers promise as a way to retroactively address carbon pollution. Though these approaches have historically been dismissed as too expensive for widespread adoption, CCUS technologies are becoming increasingly cost-effective. The newest CCUS technologies could drive the price of implementation as low as $20 per ton of carbon by 2025, down from $100 per ton in 2016. In the United States, the company NET Power has begun construction of a new natural gas plant in Houston that it claims has no water cost, will lead to no net emissions and costs no more than a standard natural gas powered plant. And in Iceland, the government has deployed carbon capture technology to capture both emissions from electricity generation and CO2 directly from the air, sending the carbon deep into the earth. A number of startups are also developing promising new approaches to CCUS, including converting captured carbon into fertilizer and employing an enzyme to manage carbon, that could spur a revolution in the technology. SUPPORT FOR CCUS: GROWING BUT NOT FAST ENOUGH Even though CCUS is critical to combating catastrophic climate change, it has faced opposition from many of the most passionate supporters of climate action. Environmental groups and renewable advocates have opposed investing in CCUS, fearing that it would be used to justify further reliance on fossil fuels. But by limiting the scope of investment in decarbonization, the world would miss a major avenue for reducing emissions both in the electricity sector and in a variety of industries. To meet the 2-degree climate target, environmental advocates should support policies like low or zero emissions portfolio standards that are technology-neutral, which would encourage investment in a range of lower-carbon energy sources, including CCUS. CCUS also faces opposition from some conservative voices, especially those who downplay or dismiss the threat of climate change, who see it as a needlessly expensive experiment that reduces power plant efficiency. However, many conservatives have found CCUS to be a more appealing solution to reducing CO2 emissions than restricting electricity production. By creating a larger economy around carbon, CCUS can create jobs and revenue from what was previously only a waste material, and can fuel economic growth. Overall, the tide seems to be shifting. CCUS has managed to attract a remarkably broad coalition of supporters, including climate hawks on the left and fossil fuel supporters on the right. 
Under an administration that has stepped away from many important climate policies, including the Paris Accord and the Clean Power Plan, building bipartisan support for investing in this technology is critical to its future. The federal government has an integral role to play in ensuring CCUS succeeds by supplementing critical funding for research and development. The Trump administration has repeatedly tried to slash energy technology R&D funding, with the Department of Energy’s CCUS R&D budget cut by as much as 76 percent in proposed budgets. As funding for energy technology innovation in the United States has slowed, China is now set to lead the way in funding for CCUS R&D. The United States must protect and expand its R&D funding in order to play a leading role in the energy transition. The government must also improve incentives for CCUS deployment. The FUTURE Act, recently enacted by Congress as part of the February 2018 budget bill and championed by a noticeably bipartisan coalition, is an important step toward making CCUS economical. The bill extends carbon dioxide capture, storage and conversion tax credits and increases the value of those credits. The same bipartisan group of senators has proposed another bill, the USE IT act, which would amplify support for CCUS technology by directly funding research and development and by setting up a prize competition to reward deployment in the private sector, especially of direct air capture technology. These are welcome steps forward in supporting this critical technology. However, the government can and should go further by procuring CCUS technology for federal infrastructure and investing in national pipeline infrastructure to lower the costs associated with transporting CO2 from retrofitted plants. By doing so and increasing R&D funding, it can spur innovation to lower the cost and increase the efficiency of products on the market and thereby incentivize CCUS deployment. The energy transition is accelerating at a rapid pace, but it will fall short of averting catastrophic climate change if it does not employ a wide range of solutions, including broad deployment of carbon capture, utilization and storage. The Paris Agreement was an important step in combating climate change, but without CCUS as part of the solution, the goals established there will be impossible to meet.
The clustered regularly interspaced short palindromic repeats (CRISPR) associated sequences (Cas) system is a prokaryotic acquired immunity against viral and plasmid invasion. The CRISPR Cas9 system is highly conserved throughout bacteria and archaea. Recently, CRISPR/Cas has been utilized to edit endogenous genomes in eukaryotic species. In certain contexts, it has proven invaluable for in vitro and in vivo modeling. Currently, CRISPR genome editing boasts unparalleled efficiency, specificity, and cost compared to other genome editing tools, including transcription activator-like effector nucleases (TALENs) and zinc finger nucleases (ZFNs). This review discusses the background theory of CRISPR and reports novel approaches to genome editing with the CRISPR system. CRISPR as a prokaryotic adaptive immune system CRISPR was originally discovered in bacteria1 and is now known to be present in many other prokaryotic species.2,3 CRISPR systems in bacteria have been categorized into three types, with Type II as the most widely found. The essential components of a Type II CRISPR System located within a bacterial genome include the CRISPR array and a Cas9 nuclease. A third component of the Type II system is the protospacer adjacent motif on the target/foreign DNA. The CRISPR array is composed of clusters of short DNA repeats interspaced with DNA spacer sequences.4 These spacer sequences are the remnants of foreign genetic material from previous invaders and are utilized to identify future invaders. Upon foreign invasion, the spacer sequences are transcribed into pre-crisprRNAs (pre-crRNAs), which are further processed into mature crRNAs. These crRNAs, usually 20 base pairs in length, play a crucial role in the specificity of CRISPR/Cas. Upstream of the CRISPR array in the bacterial genome is the gene coding for transactivating crisprRNA (tracrRNA). tracrRNA provides two essential functions: binding to mature crRNA and providing structural stability as a scaffold within the Cas9 enzyme.5 Post-transcriptional processing allows the tracrRNA and crRNA to fuse together and become embedded within the Cas9 enzyme. Cas9 is a nuclease with two active sites that each cleaves one strand of DNA on its phosphodiester backbone. The embedded crRNA allows Cas9 to recognize and bind to specific protospacer target sequences in foreign DNA from viral infections or horizontal gene transfers. The crRNA and the complement of the protospacer are brought together through Watson-Crick base pairing. Before the Cas9 nuclease cleaves the foreign double-stranded DNA (dsDNA), it must recognize a protospacer adjacent motif (PAM), a trinucleotide sequence. The PAM sequence is usually in the form 5’-NGG-3’ (where N is any nucleotide) and is located directly upstream of the protospacer but not within it. Once the PAM trinucleotide is recognized, Cas9 creates a double-stranded breakage three nucleotides downstream of the PAM in the foreign DNA. The cleaved foreign DNA will not be transcribed properly and will eventually be degraded.5 By evolving to target and degrade a range of foreign DNA and RNA with CRISPR/Cas, bacteria have provided themselves with a remarkably broad immune defense.6 CRISPR Cas9 as an RNA-guided genome editing tool The prokaryotic CRISPR/Cas9 system has been reconstituted in eukaryotic systems to create new possibilities for the editing of endogenous genomes. 
To achieve this seminal transition, virally-derived spacer sequences in bacterial CRISPR arrays are replaced with 20 base pair sequences identical to targeting sequences in eukaryotic genomes. These spacer sequences are transcribed into guide RNA (gRNA), which functions analogously to crRNA by targeting specific eukaryotic DNA sequences of interest. The DNA coding for the tracrRNA is still found upstream of the CRISPR array. The gRNA and tracrRNA are fused together to form a single guide RNA (sgRNA) by adding a hairpin loop to their duplexing site. The complex is then inserted into the Cas9 nuclease. Within Cas9, the tracrRNA (3' end of sgRNA) serves as a scaffold while the gRNA (5' end of sgRNA) functions in targeting the eukaryotic DNA sequence by Watson-Crick base pairing with the complement of the protospacer (Fig. 1). As in bacterial CRISPR/Cas systems, a PAM sequence located immediately upstream of the protospacer must be recognized by the CRISPR/Cas9 complex before double-stranded cleavage occurs.5,7 Once the sequence is recognized, the Cas9 nuclease creates a double-stranded break three nucleotides downstream of the PAM's location in the eukaryotic DNA of interest (Fig. 1). The PAM is the main restriction on the targeting space of Cas9. Since the PAM is required to be immediately upstream of the protospacer, it is theoretically possible to replace the 20 base pair gRNA in order to target other DNA sequences near the PAM.5,7

Once the DNA is cut, the cell's repair mechanisms are leveraged to knock down a gene or to insert a new oligonucleotide into the newly formed gap. The two main pathways of double-stranded DNA lesion repair associated with CRISPR genome editing are non-homologous end joining (NHEJ) and homology directed repair (HDR). NHEJ is mainly involved with gene silencing. It introduces a large number of insertion/deletion mutations, which manifest as premature stop codons that effectively silence the gene of interest. HDR is mainly used for gene editing. By providing a DNA template in the form of a plasmid or a single-stranded oligonucleotide (ssODN), HDR can easily introduce desired mutations in the cleaved DNA.5

The beauty of the CRISPR system is its simplicity. It is composed of a single effector nuclease and a duplex of RNA. The endogenous eukaryotic DNA can be targeted as long as it is in proximity to a PAM. The goal of this system is to induce a mutation, and the CRISPR Cas9 complex will cut at the site repeatedly until a mutation occurs. When a mutation does occur, the site will no longer be recognized by the complex and cleavage will cease.

Optimization and specificity of CRISPR/Cas systems
If CRISPR systems are to be widely adopted in research or clinical applications, concerns regarding off-target effects must be addressed. On average, this system has a target every eight bases in the human genome. Thus, virtually every gRNA has the potential for unwanted off-target activity. Current research emphasizes techniques to improve specificity, including crRNA modification, transfection optimization, and a Cas9 nickase mutation. The gRNA can be modified to minimize its off-target effects while preserving its ability to target sequences of interest. Unspecific gRNA can be optimized by inserting single-base substitutions that enhance its ability to bind to target sequences in a position- and base-dependent manner.
Libraries of mutated genes containing all possible base substitutions along the gRNA have been generated to examine the specificity of gRNA and enzymatic activity of Cas9. It is important to note that if mutations occur near the PAM, Cas9 nucleases do not initiate cleavage. Targeting specificity and enzymatic activity are not affected as strongly by base substitutions on the 5’ end of gRNA. This leads to the conclusion that the main contribution to specificity is found within the first ten bases after the PAM on the 3’ end of gRNA.5 The apparent differential specificity of the Cas9 gRNA guide sequence can be quantified by an open source online tool (http://crispr.mit.edu/). This tool identifies all possible gRNA segments that target a particular DNA sequence. Using a data-driven algorithm, the program scores each viable gRNA segment depending on its predicted specificity in relation to the genome of interest. Depending on the redundancy of the DNA target sequence, scoring and mutating gRNA might not provide sufficient reduction of off-target activity. Increasing concentrations of CRISPR plasmids upon transfection can provide a modest five to seven fold increase in on-target activity, but a much more specific system is desirable for most research and clinical applications. Transforming Cas9 from a nuclease to a nickase enzyme yields the desired specificity.5 Cas9 has two catalytic domains, each of which nicks a DNA strand. By inactivating one of those domains via a D10A mutation, Cas9 is changed from a nuclease to a nickase. Two Cas9 nickases (and their respective gRNAs) are required to nick complementary DNA strands simultaneously. This technique, called multiplexing, mimics a double-stranded break by inducing single-stranded breaks in close proximity to one another. Since single-stranded breaks are repaired with a higher fidelity than double-stranded breaks, off-target effects caused by improper cleavage can be mitigated, leaving the majority of breaks at the sequence of interest. The two nickases should be offset 10—30 base pairs from each other.5 Multiplex nicking offers on-target modifications comparable to the wild type Cas9, while dramatically reducing off-target modifications (1000—1500 fold).5 CRISPR/Cas9 systems have emerged as the newest genome engineering tool and have quickly been applied in in vitro and in vivo research applications. However, before these systems can be used in clinical applications, off-target effects must be controlled. In spite of its current shortcomings, CRISPR has proven invaluable to researchers conducting high-throughput studies of the biological function and relevance of specific genes. CRISPR Cas9 genome editing provides a rapid procedure for the functional study of mutations of interest in vitro and in vivo. Tumor suppressor genes can be knocked out, and oncogenes with specific mutations can be created via NHEJ and HDR, respectively. The novel cell lines and mouse models that have been created by CRISPR technologies have thus far galvanized translational research by enabling more perspectives of studying the genetic foundation of diseases. - Ishino, Y. et al. J Bacteriol. 1987, 169, 5429–5433. - Mojica, F.J. et al. Mol Microbiol. 1995, 17, 85–93. - Masepohl, B. et al. Biophys Acta. 1996, 1307, 26–30. - Mojica, F.J. et al. Mol Microbiol. 2000, 36, 244–246. - Cong, L. et al. Science. 2013, 6121, 819–823. - Horvath, P. et al. Science. 2010, 327, 167. - Ran, F.A. et al. Nat. Protoc. 2013, 8, 2281–2308.
Practice by reading the dialogue with your tutor. After you are done, switch roles and do it again! - K: I’m going to get another cup of coffee. Do you want something else? - D: I’ll take an Americano with a shot of vanilla syrup. Could you also bring me two packets of sugar? - K: You got it! Do you want anything to eat? - D: I want a sandwich, but I don’t like what’s on the menu. Could you ask them if they have other choices? - K: Sure, I will ask them. Two cups of Joe and one sandwich coming right up!
Using “another,” “other,” and “others”
Read the diagram below out loud with your tutor. Then try coming up with a sentence using each of the three words. Other + ones: Other can be placed before the pronoun “ones” when the meaning is clear from the text before it. - We don’t need those books, we need other ones. (= different books) - A: You can borrow my books if you like. B: Thanks, but I need other ones. (= other books) Note: you can say “other one” when it refers to wanting the alternative. - I don’t want this one, I want the other one. Others as a pronoun: Others replaces “other ones” or “other + plural noun”. Only others can be used as a pronoun, not “other”. - I don’t like these postcards. Let’s ask for others. (others = other postcards) - Some of the presidents arrived on Monday. Others arrived the following day. Others – the others: Often “(the) others” refers to “(the) other people”. - He has no interest in helping others. (= in helping other people) - What are the others doing tonight? What is the difference between other and others? “Other” is followed by a noun or a pronoun. “Others” is a pronoun and is NOT followed by a noun. - These shoes are too small. Do you have any other shoes? - These shoes are too small. Do you have any others? (no noun after others)
Common ways to express containers and quantities
Read the diagram below out loud with your tutor. Then practice making a few sentences with some of the items. Fill in the blank with “another”, “other”, or “others.” - Could you tell the others to come to the coffee shop? - I want (1) ____________ glass of water, as I’m done with this one. - (2) __________ people want to order coffee and food. - The (3) __________ asked for coffee to go. - She doesn’t like the (4) _________ types of coffee. Practice placing a to-go order at your favorite restaurant. Be creative! This order is to go. I want a large iced mocha with milk and one packet of sugar. In addition, I want an egg sandwich with cheese. Could I get an extra slice of cheese on the sandwich, please? Fill in the blank with the correct container or quantity. Several answers may be correct. - How many rolls of toilet paper are in the bathroom? - She wants a (5) ________ of tea and I would like a (6) ________ of water. - If you go to the supermarket, I need a (7) ________ of flour, a (8) __________ of milk, and a (9) ____________ of ice cream. - To make a toasted cheese sandwich you need two (10) ____________ of bread and two (11) ___________ of cheese. - My mother asked me to buy a (12) _________ of tape, a (13) ________ of scissors, and (14) ___________ of thread. Come up with 4 sentences using “another,” “other,” and “others.” Try to use some examples of containers and quantities in your sentences. - The other day I bought another tube of toothpaste. - What have you done recently to help others? Did you enjoy how it felt? - Describe the ingredients and quantities you need to make your favorite dish. - Who is your best friend? How do you treat one another?
- What items do you currently have in your kitchen? Describe both the item and the quantity.
Sample answers for selected blanks: (6) glass / cup; (8) quart / half-gallon / gallon / carton
Fruits contain important vitamins and minerals. They can help you stay healthy and should be an essential part of your daily diet.1 We have all been hearing this statement since childhood, and no matter how cliché it sounds, the role that fruits play in our health cannot be ignored. In individuals with hypertension, fruits can help to lower high blood pressure. So, instead of discussing the obvious, this article will focus on a more specific topic – “how can fruits help keep your blood pressure in check?” An understanding of how eating fruits lowers blood pressure may encourage people to develop healthy eating habits.2 To understand this, let us first understand what causes high blood pressure.
What causes high blood pressure?
High blood pressure occurs when blood exerts a higher than normal pressure on the walls of the blood vessels. If left uncontrolled, it can put you at a higher risk of heart attack and stroke.3 The intake of too much salt is one of the major factors that cause hypertension.4 Another biological factor responsible for high blood pressure is the presence of excess fluid in the body. The kidneys play a significant role in filtering the excess fluid out into the urine. To remove the extra fluid from the blood, the kidneys require a delicate balance of potassium and sodium in the blood.5 A variation in either of these nutrients in the blood can disturb the sodium-potassium balance. This imbalance may reduce the ability of the kidneys to filter out excess fluid, which could build up in the blood, resulting in high blood pressure.5 Inadequate consumption of potassium is related to an increased risk of high blood pressure, which can lead to cardiovascular diseases. Studies suggest that an increased intake of potassium in your diet helps to lower blood pressure. One factor responsible for potassium deficiency in individuals with high blood pressure is not eating sufficient fruits.6 Fruits are one of the best naturally available sources of potassium. Eating sufficient fruits can help you get the potassium needed to help correct the sodium-potassium imbalance seen in individuals with high blood pressure, thus lowering blood pressure.7 The DASH diet8, a diet plan for lowering blood pressure, recommends eating 4-5 servings of fruit a day. The following amounts represent one serving: - 1 medium-sized piece of fruit - ½ cup canned or chopped fruit - ½ cup fruit juice - ¼ cup dried fruit The diet also recommends choosing fruits over fruit juice.8 All fruits provide us with potassium, but certain fruits are rich in potassium and can be eaten on a regular basis to help prevent potassium deficiency. We have listed a few potassium-rich fruits below: - Tomatoes, tomato juice - Raisins and dates9 If you want fruits to be effective in lowering your blood pressure, cut down on the amount of salt you eat.2 Limit the intake of packaged and processed foods, which are a major source of sodium.10 Knowledge is in understanding how fruits lower blood pressure, but wisdom is in making a habit of eating fruits regularly to help keep your blood pressure in check. - Better Health Channel. Fruit and vegetables [Internet]. [updated 2011 Sep; cited 2019 Nov 20]. Available from: https://www.betterhealth.vic.gov.au/health/healthyliving/fruit-and-vegetables. - Blood Pressure UK. Potassium rich fruits help to lower blood pressure [Internet]. [cited 2019 Nov 20]. Available from: http://www.bloodpressureuk.org/microsites/salt/Home/Howtoeatmorepotassium/Fruit. - Familydoctor.org. 
High blood pressure [Internet]. [updated 2017 Oct 27; cited 2019 Nov 20]. Available from: https://familydoctor.org/condition/high-blood-pressure/. - Heart Foundation. Managing high blood pressure [Internet]. [cited 2019 Nov 20]. Available from: https://www.heartfoundation.org.nz/wellbeing/managing-risk/managing-high-blood-pressure. - Blood Pressure UK. Why potassium helps to lower blood pressure [Internet]. [cited 2019 Nov 20]. Available from: http://www.bloodpressureuk.org/microsites/salt/Home/Whypotassiumhelps. - Centres for Disease Control and Prevention. The role of potassium and sodium in your diet [Internet]. [updated 2018 Jun 29; cited 2019 Nov 20]. Available from: https://www.cdc.gov/salt/potassium.htm. - Blood Pressure UK. How fruits and vegetables help to lower your blood pressure [Internet]. [cited 2019 Nov 20]. Available from: http://www.bloodpressureuk.org/microsites/AfricanCaribbean/Home/Healthyeating/Fruitandveg. - CardioSmart. Healthy eating: The DASH diet [Internet]. [cited 2019 Nov 20]. Available from: https://www.cardiosmart.org/~/media/Documents/Fact%20Sheets/en/zx1344.ashx. - American Heart Association. How potassium can help control high blood pressure [Internet]. [updated 2016 Oct 31; cited 2019 Nov 20]. Available from: https://www.heart.org/en/health-topics/high-blood-pressure/changes-you-can-make-to-manage-high-blood-pressure/how-potassium-can-help-control-high-blood-pressure. - AARP. Potassium: The secret weapon for lower blood pressure [Internet]. [updated 2018 Feb 27; cited 2019 Nov 20]. Available from: https://www.aarp.org/health/conditions-treatments/info-2018/blood-pressure-potassium-fd.html.
How Do Antibiotics Actually Work?
Antibiotics are substances that kill prokaryotic cells, such as bacteria, while leaving eukaryotic cells (in humans and other organisms) untouched. Antibiotics work in two ways: either by killing microorganisms (being ‘bactericidal’) and immediately removing the threat, or by inhibiting the growth of the microorganisms (being ‘bacteriostatic’), allowing the host’s defence system to fight the threat. Antibiotics that affect a wide range of bacteria are called broad-spectrum antibiotics, e.g. amoxicillin, and antibiotics that affect only a few types of specific bacteria are called narrow-spectrum antibiotics, e.g. penicillin. Each type of antibiotic targets different parts of bacteria. For example, bacteria have cell walls whilst human cells don’t, so some antibiotics prevent the formation of cell walls. This renders the bacteria vulnerable to water flooding inside and bursting, which means that these bacteria cannot replicate and spread the infection. This is known as osmotic lysis (lysis meaning breaking, whilst osmotic refers to the movement of water molecules from an area of higher water concentration to an area of lower water concentration). Another method by which antibiotics target and kill bacteria is through interference with their DNA replication, so that bacteria are unable to replicate further, and through interfering with their protein synthesis, which fundamentally blocks the normal running of their metabolic functions, rendering them dead or unable to replicate. In the latter method, antibiotics make use of the fact that cell structures in bacterial cells are different from cell structures in human cells: whilst humans (and all other eukaryotes) have 80S ribosomes (organelles in the cell that are responsible for protein synthesis), bacterial cells have 70S ribosomes, with a smaller molecular weight and a different shape. These ribosomes are different enough for antibiotics to be able to recognise bacterial ribosomes as separate from eukaryotic ribosomes, and to target them specifically without damaging the host’s own cells. The antibiotics interfere with the ribosomes to render them incapable of carrying out their function, leaving the bacteria to die. Treating a patient with antibiotics causes the microbes to either adapt by mutating their genes or die (providing a ‘selective pressure’). If a strain of a bacterial species acquires resistance to an antibiotic, it will survive the treatment. As the bacterial cell with acquired resistance multiplies, this resistance is passed on to its offspring. The gene for antibiotic resistance is passed on rapidly, not only through vertical transmission (i.e. replication through binary fission), but also through horizontal transmission (where a conjugation tube can be set up between bacteria, even those of different species, and plasmids (circular loops of DNA) are transferred from bacterium to bacterium). In ideal conditions some bacterial cells can divide every 20 minutes; therefore after only 8 hours, in excess of 16 million bacterial cells carrying resistance to that antibiotic could exist. This resistance to many of our drugs means that it is increasingly difficult to treat infections caused by antibiotic-resistant bacterial strains, and so health officials warn that antibiotics should not be used to treat common colds, most sore throats, or the flu, because these infections are caused by viruses, against which antibiotics are useless anyway. 
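The growth figure quoted above is straightforward doubling arithmetic. The following sketch is an idealization only; it assumes a constant 20-minute division time, unlimited nutrients and no cell death, which real cultures never satisfy for long:

    def resistant_cells(hours, doubling_minutes=20, starting_cells=1):
        """Idealized exponential growth: a single resistant cell doubling at a fixed interval."""
        generations = int(hours * 60 // doubling_minutes)
        return starting_cells * 2 ** generations

    for h in (2, 4, 8):
        print(f"after {h} hours: {resistant_cells(h):,} cells")
    # after 8 hours: 16,777,216 cells -- the "in excess of 16 million" figure quoted above

The point of the arithmetic is simply that once a resistant cell exists, exposure to the antibiotic removes its competitors and lets that lineage dominate very quickly.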
When used unnecessarily, antibiotics can lead to the spread of resistant bacteria. A strain of bacteria is often referred to as a superbug if it is resistant to several different antibiotics. Antibiotic resistance has been observed in bacteria such as E. coli and MRSA (methicillin-resistant Staphylococcus aureus). These superbugs represent a threat to public health since they are resistant to the most commonly used antibiotics, and therefore scientists are pushed to discover new antibiotics to remain one step ahead of the bacteria in the fight for survival. Staphylococcus aureus (also known as staph) is a common type of bacteria, often carried on the skin and inside the nostrils and throat, and it can cause mild infections of the skin. MRSA, however, can be more aggressive in its effects and can be fatal, causing life-threatening infections such as blood poisoning. It is also harder to treat, as it is resistant to many of the commonly used antibiotics, often leaving treatment attempts futile. If someone were to be infected with MRSA, there still remain certain drugs that are effective against it, one of them being vancomycin. However, the more it is used, the more vancomycin resistance will be observed. If this drug were to become ineffective, then other, stronger drugs would have to be resorted to, such as daptomycin, harbouring the risk of yet more antibiotic resistance. Currently, cases of VRSA (vancomycin-resistant Staphylococcus aureus), although few, do occur.
Looking To The Future
Whilst scientists are rapidly trying to understand how bacteria gain resistance to antibiotics, and are also looking into how to prevent the bacteria from causing infections or spreading their own genetic information, the main focus for now should be reducing the spread of MRSA, with the most effective measure being hand washing, especially in hospitals, as these are the places where MRSA is most likely to be contracted and spread. Due to overuse and misuse, antibiotics are no longer as effective as they were before, and if we continue to prescribe them frivolously, then we are looking at an era akin to one centuries ago, in which invasive surgical procedures are not possible and simple infections are not only difficult to cure but also more aggressive. The World Health Organisation has rightly called antibiotic resistance one of the biggest threats to human health today, requiring action from all global communities and societies.
Extinct and Endangered Species Activities for Kids
In this set of activities adaptable for grades K-3, parents and educators will find ideas for teaching about extinction and endangered species. These activities are designed to complement the BrainPOP Jr. Extinct and Endangered Species topic page, which includes a movie, quizzes, online games, printable activities, and more.
Classroom Activities for Teaching About Extinct and Endangered Species
Adopt an Endangered Species: Together with the class, choose an endangered species to adopt. You can look on the websites of the Nature Conservancy or the World Wildlife Fund, or even visit your local zoo, to learn about different endangered plants and animals. You can also look at your state's endangered plants and animals list to find out what you can do to help protect species close to home. Have students learn how the animal's habitat became threatened, and discover if that can be changed. Track the progress of the plant or animal and find ways to raise awareness in the whole school or community, such as a poster campaign or a fundraiser. Remind students that living things need safe, clean places in order to live. If possible, organize a clean-up day at a park, beach, or piece of public land. Students can pick up items of litter and later research how long these items would have taken to biodegrade. Let students learn about the damage that everyday items, such as plastic bags, can do to soil, plants, and water. Many communities organize an annual day to improve neighborhoods. Participate in a local event and help children understand the importance of maintaining a safe and clean environment for all living things.
Family and Homeschool Activities for Teaching About Extinct and Endangered Species
If possible, visit your local zoo or aquarium. Explore different animals and discuss a few, pointing out adaptations and behavioral characteristics. Pose questions about human impact. How might ocean animals be affected by fishing nets, oil spills, or water pollution? Many zoos offer tours or sessions that focus on endangered species or conservation efforts. Talk to your child about what he or she can do to protect endangered species. Find out what species of plant and animal are endangered in your community or state. Research together and learn how they became endangered and how you can protect them. Write letters to your representative or other government leaders to urge them to set aside land for reserves, support conservation legislation, and develop ways to promote initiatives that will help endangered species.
Two recent research reports are projecting significant changes in the Arctic and along the northeast coast in the next 100 years due to a changing climate. Research recently published in the journal Nature Geoscience by researchers at UCLA indicates that under a "medium" greenhouse gas emission (primarily CO2 and methane) scenario over the coming 50-100 years, the Arctic Ocean will probably be ice-free by the end of this century. Many recent measurements have shown that the extent of Arctic sea ice in the summer is decreasing even faster than most current climate models show. The research reported by the UCLA group involved studying the projections of sea ice extent from 18 different state-of-the-art climate models and looking at trends. The results indicate that the decrease in summer Arctic sea ice coverage will continue and perhaps accelerate in a warmer climate, and that probably before September 2100 the Arctic Ocean will be essentially ice-free in late summer. According to another recent paper in Nature Geoscience, researchers using sophisticated coupled ocean-atmosphere general circulation models (COAGCM) found that "Human-induced climate change could cause global sea-level rise" due in part to changes in the large-scale ocean circulations in the Atlantic. The study found that with the large-scale circulation slowing further -- as has been happening -- in a warmer climate, the rise just from thermal expansion of the ocean would result in an average sea level rise of about 18 inches in Boston and New York and a bit more than 12 inches in Washington, along the tidal Potomac. Melting of Greenland glaciers would add to this sea level rise. The study found that sea level rises along the west coast would not be as large as along the east coast. It is important to remember that these results are based on computer simulations of the ocean-atmosphere system; these models are getting more and more realistic, giving scientists results in greater and greater detail, but they also involve uncertainties not spelled out in the current study. Finally, a note on the results from an MIT study in Part 5 of my Global Warming/Change series: the MIT Joint Program on the Science and Policy of Global Change shows a 50 percent probability of a global temperature rise of about 10 degrees with "no policy change" by 2100, which is quite a bit higher than their previous study and quite a bit higher than most other studies and projections. Since this is so much higher than other estimates, I looked at the global climate model used as part of the study and found that the treatment of the very important climate element of clouds is adjustable; according to the MIT group's description of the atmospheric dynamics component, "The atmospheric model's climate sensitivity can be changed by varying the cloud feedback." Clouds and aerosols/haze/pollution are still a bit of a "wild card" in the model simulations. How and which (high, middle, low) clouds will change and increase or decrease in a warming climate is an area of research by many scientists. Varying the cloud feedback of atmospheric models will certainly change model projection results, so it would be very helpful to have further discussion from the MIT group of how much they feel their results and projections are due to increasing CO2 and how much are due to varying the cloud feedback.
Cancer is the name given to a large group of diseases, all of which have one thing in common: cells that are growing out of control. Normally, the cells that make up all of the parts of our bodies go through a predictable life cycle -- old cells die, and new cells arise to take their place. Occasionally, this process goes awry, and cells begin to multiply out of control. The end result is a mass of cells, called a tumor. A benign tumor is one that does not spread, or metastasize, to other parts of the body. It is considered noncancerous. A malignant tumor, on the other hand, can spread throughout the body and is considered cancerous. When malignant cells break away from the primary tumor and settle into another part of the body, the resulting new tumor is called either a metastasis or a secondary tumor. There are several major types of cancer: carcinomas form in the cells that cover the skin or line the mouth, throat, lungs and organs; sarcomas are found in the bones, muscles, fibrous tissues and some organs; leukemias are found in the blood, the bone marrow and the spleen; and lymphomas are found in the lymphatic system.
Causes of Cancer
Cancer often takes many years to develop. The process typically begins with some disruption to the DNA of a cell, the genetic code that directs the life of the cell. There can be many reasons for such disruptions, such as diet, tobacco, sun exposure, reproductive history or certain chemicals. Some cells will enter a precancerous phase, known as dysplasia. Some cells will progress further to the state of carcinoma in situ, in which the cancer cells are restricted to a microscopic site, surrounded by a thick covering, and do not pose a great threat. Eventually, unless the body's own immune system takes care of the wayward cells, a cancer will develop. It may take as long as 30 years for a tumor to go through the entire process and become large enough to produce symptoms.
Symptoms of Cancer
Since cancer can arise from such a wide variety of sites and develop with many differing patterns of spread, there are no clear-cut symptoms. Cancer is unlike many more specific diseases such as heart disease or arthritic disease. The precise nature of the symptoms of cancer depends not only on the primary site but also on where exactly the tumor is located within an organ, its rate of development, and whether secondary spread is present. Many primary tumors cause a local swelling or lump if they arise in a visible or accessible part of the body, such as the skin, breast, testicle or oral cavity. A typical swelling due to a cancer is initially painless, though ulceration (skin breakdown) can occur, which may then become painful.
Treatment of Cancer
The aim of cancer treatment is to cure the patient and save life. In cases where complete cure is not possible, treatment aims to control the disease and to keep the patient normal and comfortable as long as possible. The treatment of each patient is designed to suit the individual and depends on the age of the patient and the stage and type of disease. There may be only one treatment or a combination of treatments. The main modalities of treatment are surgery, radiation therapy, chemotherapy, hormone therapy and immunotherapy. Surgery and radiotherapy aim at eradicating the disease at the primary site (site of origin) of the cancer, whereas chemotherapy, hormone therapy and immunotherapy deal with disease which may have spread outside the site of origin. Surgery is the most important part of cancer treatment. 
Surgery attempts to remove cancer cells from the body by cutting away the tumor and any surrounding tissues which may contain cancer cells. It is a simple, safe and effective method when the cancer is small and confined to its site of origin. It is best suited for certain types of cancer, such as breast cancer, head and neck cancers, early cancers of the cervix and lung, many skin cancers, soft tissue cancers and gastrointestinal cancers. Radiotherapy has become the pre-eminent form of cancer treatment since the beginning of this century, and it is now used for about fifty percent of patients. Improvements in radiotherapy equipment, technique and applications have led to an increasing role both in local treatment and in its use as a whole-body treatment, as part of bone marrow transplantation techniques for leukaemia and other malignant diseases. Radiation is a special kind of energy carried by waves or a stream of particles, originating from radioactive substances and delivered by special machines. These x-rays or gamma rays can penetrate the cell and damage its nucleus, which prevents the growth and division of cells. This also affects normal cells, but these cells recover more fully than cancer cells. Chemotherapy uses drugs which interfere with the growth and division of malignant cells. Once the drugs are administered, they circulate throughout the body. It has an advantage over surgery and radiation in treating cancer that is systemic (spread throughout the body). Chemotherapy is very useful in treating cancers such as leukemia, lymphoma and testicular cancer. Chemotherapy can be given as the primary treatment, or following surgery or radiotherapy to prevent the reappearance of cancer. The side effects of chemotherapy include nausea, vomiting, hair loss and fever, which are usually temporary and reversible. Hormone therapy has limited use in cancer treatment, since only a small minority of tumors are hormone sensitive, e.g. breast and prostate cancer. This therapy provides a systemic means of treatment, i.e. to the whole body, but without the side effects of chemotherapy. In summary, it is a misconception that all cancers are incurable. Current methods of treatment are effective for many cancers. A large number of cancer patients are cured, and more patients could be cured if their cancers were detected early and treated promptly.
By Peter Konieczny In 2015 the world celebrated the 800th anniversary of Magna Carta, the charter in which King John agreed to new rules for his government. In the centuries since then, the document has come to embody the idea of the importance of the rule of law and that even the power of monarchs had its limitations. However, one thing usually overlooked in the praise of this event is that Magna Carta was a failure. By 1215 King John was a desperate man. Military failures overseas combined with feeble rule at home were pushing his leading nobles into rebellion. According to the chronicler Roger of Wendover, the king, “when he saw that he was deserted by almost all, so that out of his regal super-abundance of followers he scarcely retained seven knights, was much alarmed lest the barons would attack his castles and reduce them without difficulty.” With his options dwindling, John agreed to peace talks, which were held at Runnymede, along the south bank of the River Thames. The negotiations began on June 10, and five days later an agreement was hammered out. There were 63 clauses that the king would have to abide by, ranging from “to no one deny or delay right or justice,” to “no town or person shall be forced to build bridges over rivers…” It was a sweeping repudiation of royal power, but even as copies of it were being drawn up throughout England, the king made it clear that he had no intention of following it. This article is the Introduction to Medieval Warfare magazine’s issue VII:2 – The First Barons’ War in 1215-17. Before the end of the summer of 1215, the First Barons’ War had begun. The English king had cobbled together his few supporters with mercenaries from overseas, while the various rebellious barons sought to take control of their own castles. The first major action of the war was the siege of Rochester Castle, a strategically important fortress. Our first article, by William E. Welsh, describes the bitter fight for this castle, where both the attackers and the defenders earned praise from the chroniclers. If John was pleased by his success at Rochester, his happiness would be short-lived. The rebellious barons were able to convince Prince Louis, the son and heir apparent of King Philip II of France, to support them and become their King of England. Louis would first send reinforcements and then arrive himself on the island. In the spring of 1216, Londoners were celebrating the prince’s entry into the city, and much of the southeastern part of the country was under his control. The entire war is too large to cover in just one issue, with many of the smaller clashes taking place throughout England. For instance, there were the actions of Willikin of the Weald, a minor nobleman who gathered a band of archers in Kent and Sussex and made ambush-style attacks against the French soldiers. Some scholars believe he might have been the basis of the story of Robin Hood. This issue will focus on the two most important military events of the war – the Siege of Dover and the Battle of Lincoln. The first was a long-lasting affair, in which the remaining royalist supporters held out against Louis. The second happened quickly, thanks to the bold (even rash) actions of William Marshal. To tell the stories of Dover and Lincoln, we are pleased to have Catherine Hanley and Sean McGlynn, both of whom have recently written books about the war. However, if one were to look for a turning-point in the war, it did not take place on the battlefield. 
Instead, it was when the English king took ill as he was moving across England in his desperate bid to counter the advances of Louis. Roger of Wendover explains that John’s army had tried crossing a river but the rushing waters had swamped his baggage train, causing him to lose his belongings, money, and even crown jewels. The following night “he felt such an anguish of mind about his property which was swallowed up by the waters, that he was seized with a violent fever and became ill; his sickness was increased by his pernicious gluttony, for that night he surfeited himself with peaches and drinking new cider, which greatly increased and aggravated the fever in him.” King John would die, not from peaches but probably from dysentery, on October 19, 1216. While some might think this would have been a disaster for his cause, it turned out to be the best thing that could have happened. Many rebel barons no longer saw a reason to continue to fight – John was their enemy and he was now dead. His son, Henry III, would be king, and he was only nine years old. Having a young boy on the throne was considered ideal for the barons, a better option than having a French prince.
The aggregate demand and Keynesian range
Aggregate demand shows the total amount of goods and services that households, firms, governments and foreign buyers desire to purchase at each price level. The aggregate demand curve is downward sloping because there is a negative relationship between the price level and total output demanded: the higher the price level, the lower the output consumed by households; in other words, the lower the price level, the higher the output consumed, and vice versa. The aggregate supply curve shows the relationship between the overall price level and the total quantity of output supplied by all firms in an economy. In macroeconomics, the aggregate supply curve comprises three segments: the Keynesian range (horizontal), the intermediate range (upward sloping) and the classical range (vertical). In the Keynesian range, firms do not make full use of resources, so more workers and capital can be hired and invested in the economy without increasing the price level. In the intermediate range the AS curve is upward sloping; although some resources can still be put to work, the economy faces shortages, so rising output also raises the price level. In the classical range, since the economy has reached full employment, output remains constant even though the price level increases. The short-run aggregate supply curve is upward sloping (the intermediate range). Because the curve is upward sloping, any expansion in output leads to a rise in the price level. A short-run macroeconomic equilibrium occurs where the quantity of output demanded equals the quantity supplied; in other words, it forms where the AD curve intersects the upward-sloping AS curve (P2, Y2). In this case, the AD and AS curves show the quantities demanded and supplied at each price level. The economy is operating near full employment, and the equilibrium depends on both the price level and output; any change in price or output changes the equilibrium, so the multiplier effect is weakened. As firms keep increasing labour, machinery and other vital resources, the economy reaches full employment because firms are making full use of resources to produce output. Hence, the upward-sloping AS curve (short run) becomes vertical (long run). The long-run aggregate supply curve is vertical (the classical range). Because the curve is vertical, a rise in the price level does not change domestic output. A long-run macroeconomic equilibrium occurs where the quantity of output demanded equals the quantity supplied; in other words, it forms where the AD curve intersects the vertical AS curve (P3, Y3). In this case, the AS curve shows the same quantity of output at different price levels. Since the economy is operating at full employment, the equilibrium depends on the price level only; any change in the price level changes the equilibrium, and the multiplier is close to zero. Aggregate demand (AD) shows how much consumers, firms, foreigners and the government are willing to purchase at each possible price level. Aggregate output (Y) demanded and the price level are inversely related. When AD increases, firms will increase their supply and raise the prices of output. In this situation, the prices of goods and services rise but output remains constant, because aggregate output has already reached its maximum and full employment has been reached. It is not possible to increase output, as the economy has reached full capacity; attempts to increase output only push up the cost of output. 
[Figure: vertical long-run aggregate supply curve (AS), with the price level P on the vertical axis and aggregate output Y on the horizontal axis.] As the graph shows, although AD increases and shifts to the right, aggregate output remains at its full-employment level. The price level rises to reach the new equilibrium, which moves from e0 to e1. A change in AD therefore does not affect output, and the multiplier is totally zero. On the other hand, if the long-run aggregate supply (AS) curve shifts to the right, both the price level and output are affected. [Figure: rightward shifts of the AS and AD curves, with the price level P on the vertical axis and aggregate output Y on the horizontal axis.] As this graph shows, when the AS curve shifts to the right while AD shifts in the same direction, output rises to Y1 whereas the price level increases only slightly from P0 to P1, and the equilibrium moves slightly from e0 to e1. Credit creation is the ability of the banking sector to expand the money supply by making loans. There are four assumptions that help us understand credit creation. First, banks hold only one kind of liability (demand deposits) and invest in only one kind of asset (loans). Second, the reserve ratio is fixed. Third, banks hold only the required reserve. Finally, there is no cash withdrawal from the financial system. Although these four assumptions are impossible in the real world, they make credit creation easier to understand. The required reserve ratio is the minimum percentage of deposits that commercial banks must hold as cash reserves with the central bank. To begin, assume an individual deposits RM 1000 into CD Bank and the required reserve ratio is 0.25 (25%). After the individual deposits the money into his account, the credit side of the bank's account shows that CD Bank's liabilities have increased by RM 1000. On the debit side, CD Bank records RM 250 (1000 * 0.25 = 250) as the required reserve, and the remaining balance of RM 750 (1000 - 250 = 750) is excess reserve. This RM 750 of excess reserve can be loaned out. The people who receive the loan (RM 750) deposit the money into another bank, known as a second-generation bank. Again, the second-generation bank records RM 750 on the credit side (liabilities increase). With the required reserve ratio of 25%, the second-generation bank records a required reserve of RM 187.50 (750 * 0.25 = 187.50) on the debit side and a balance of RM 562.50 (750 - 187.50 = 562.50) that can be loaned out. The second-generation bank in turn loans out the remaining balance, and the people who receive the loan deposit the money into third-generation banks. Third-generation banks also follow the 25% reserve ratio and loan out the remaining balance. This process continues until the new deposit becomes zero. In the end, the total new deposits are RM 4000 from an initial RM 1000, and the total credit created is RM 3000. The process can be traced generation by generation (see the worked sketch at the end of this section). A faster way to calculate the total new deposits is to use the money multiplier formula, which is 1 / required reserve ratio; that is, the money multiplier is the reciprocal of the required reserve ratio. In the example above, the required reserve ratio is 0.25, so the money multiplier is 1/0.25 = 4, meaning that every RM 1 increase in reserves causes a RM 4 increase in deposits when there are no withdrawals. An additional RM 1000 in reserves will cause deposits to increase to RM 4000, and credit creation is RM 3000 (4000 - 1000 = 3000). If the required reserve ratio is instead 0.4, then the money multiplier is 1/0.4 = 2.5. 
An additional RM 1000 in reserves will then cause a RM 2500 increase in deposits, and credit creation will be RM 1500. From this we can conclude that the higher the ratio, the lower the credit creation, and the lower the ratio, the higher the credit creation, and vice versa. Presence of borrowers: borrowers play an essential role in the money supply, because the money supply is generated by bank loans; without borrowers, money cannot be created. Fixed reserve ratio: commercial banks are obliged by law to follow the reserve ratio insisted upon by the central bank. This assumption cannot be left out, because it is meant to prevent prospective disruption in the financial system. Banks grant loans against security that they recognize, so the stronger the security available, the higher the money supply. Condition of the economy: borrowers will take out more loans when the economy is doing well; on the other hand, borrowers will not take loans from the bank when the economy is in a dire situation. No cash drain from the financial system: money must remain within the country to prevent a cash drain from the financial system. For example, depositors should deposit their money in local banks and loans should also be made by local banks; loans should not be made out to another country.
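The deposit-expansion example above can also be traced generation by generation in a few lines of code. This is only an illustrative sketch using the same made-up figures (an initial RM 1000 deposit and reserve ratios of 0.25 and 0.40), under the simplifying assumptions listed earlier:

    def credit_creation(initial_deposit, reserve_ratio, tolerance=0.01):
        """Trace deposit expansion under the textbook assumptions above:
        banks keep only the required reserve and every loan is redeposited."""
        total_deposits = 0.0
        new_deposit = initial_deposit
        while new_deposit > tolerance:          # stop once the next deposit is negligible
            total_deposits += new_deposit
            required_reserve = new_deposit * reserve_ratio
            new_deposit -= required_reserve     # the excess reserve is loaned out and redeposited
        credit_created = total_deposits - initial_deposit
        return total_deposits, credit_created

    deposits, credit = credit_creation(1000, 0.25)
    print(round(deposits), round(credit))       # roughly 4000 and 3000, matching 1 / 0.25 = 4
    deposits, credit = credit_creation(1000, 0.40)
    print(round(deposits), round(credit))       # roughly 2500 and 1500, matching 1 / 0.40 = 2.5

Iterating the process and applying the money multiplier formula give the same totals, which is the point of the formula.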
Dennis Aabo Sørensen can feel the difference between a rough and smooth surface, which is impressive when you consider he lost his hand. Advances in prosthetics give hope to amputees around the world that they can continue living normal lives after losing a limb. Even the intricacies of the hand are being worked out with advanced robotics. However, the ability to manipulate objects is only part of the equation. Most prosthetic hands can’t offer any semblance of the sense of touch the patient has lost, but a project at Ecole polytechnique fédérale de Lausanne (EPFL) in Switzerland might change that. That’s where Sørensen’s bionic fingertip was developed. Researchers from EPFL believe this is the first time an amputee has been able to feel the texture of a surface with electronic sensors. Some prosthetic limbs can relay temperature or pressure information to the remaining skin where the device connects, but the bionic fingertip was wired directly into the nerves in Sørensen’s upper arm using thin needles. The fingertip has a soft surface, not unlike the real thing. When Sørensen was wired in, a machine moved the fingertip across a smooth or rough surface. The flexible fingertip deforms, which causes electrical impulses that are interpreted in the signal processor. It then produces electrical spikes and transmits them to Sørensen’s nerves in an attempt to mimic the way a real nerve operates. In a blind test, Sørensen was able to tell the difference between the two surfaces with 96% accuracy. He said following the test that the sensation was very similar to that of a real finger, specifically the index finger of his phantom hand. The study seems to indicate that the sensation of having a missing “phantom” limb actually makes the bionic finger more accurate. Non-amputees who tried the sensor were only able to tell the difference between the rough and smooth surface 77% of the time. EEG readings show that the same part of the brain is activated when these subjects use the bionic finger as when they use their real one. The fact that the brain was able to interpret the signals from an “extra” finger at all is potentially useful. That means future development of the sensor might not require amputees as test subjects. In addition to making more useful prosthetics for amputees, the team sees remote surgical robots as a potential use case. Doctors could actually feel what a robot feels as if they were actually present to do the surgery themselves. Manufacturing robots could also benefit, allowing workers to feel what’s going on without putting their hands near anything dangerous.
A potential well is the region surrounding a local minimum of potential energy. Energy captured in a potential well is unable to convert to another type of energy (kinetic energy in the case of a gravitational potential well) because it is captured in the local minimum of a potential well. Therefore, a body may not proceed to the global minimum of potential energy, as it would naturally tend to due to entropy. Energy may be released from a potential well if sufficient energy is added to the system such that the local maximum is surmounted. In quantum physics, potential energy may escape a potential well without added energy due to the probabilistic characteristics of quantum particles; in these cases a particle may be imagined to tunnel through the walls of a potential well. The graph of a 2D potential energy function is a potential energy surface that can be imagined as the Earth's surface in a landscape of hills and valleys. Then a potential well would be a valley surrounded on all sides with higher terrain, which thus could be filled with water (e.g., be a lake) without any water flowing away toward another, lower minimum (e.g. sea level). In the case of gravity, the region around a mass is a gravitational potential well, unless the density of the mass is so low that tidal forces from other masses are greater than the gravity of the body itself. A potential hill is the opposite of a potential well, and is the region surrounding a local maximum. Quantum confinement can be observed once the diameter of a material is of the same magnitude as the de Broglie wavelength of the electron wave function. When materials are this small, their electronic and optical properties deviate substantially from those of bulk materials. A particle behaves as if it were free when the confining dimension is large compared to the wavelength of the particle. During this state, the bandgap remains at its original energy due to a continuous energy state. However, as the confining dimension decreases and reaches a certain limit, typically at the nanoscale, the energy spectrum becomes discrete. As a result, the bandgap becomes size-dependent. This ultimately results in a blueshift in light emission as the size of the particles decreases. Specifically, the effect describes the phenomenon resulting from electrons and electron holes being squeezed into a dimension that approaches a critical quantum measurement, called the exciton Bohr radius. In current applications, a quantum dot such as a small sphere confines in three dimensions, a quantum wire confines in two dimensions, and a quantum well confines only in one dimension. These are also known as zero-, one- and two-dimensional potential wells, respectively. In these cases they refer to the number of dimensions in which a confined particle can act as a free carrier. Application examples include biotechnology and solar cell technology.
Quantum mechanics view
The electronic and optical properties of materials are affected by size and shape. Well-established technical achievements, including quantum dots, were derived from manipulating size and from investigations that provided theoretical corroboration of the quantum confinement effect. The major part of the theory is that the behaviour of the exciton resembles that of an atom as its surrounding space shrinks. A rather good approximation of an exciton’s behaviour is the 3-D model of a particle in a box. 
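To get a rough numerical feel for this particle-in-a-box picture, the sketch below evaluates the standard infinite-well energy (written out explicitly in the next paragraph) for a few box widths. It is a deliberately crude, one-dimensional illustration that uses the free-electron mass; real nanocrystal calculations use effective masses and three-dimensional confinement, so only the trend, not the numbers, should be taken seriously.

    H = 6.626e-34        # Planck constant, J s
    M_E = 9.109e-31      # free-electron mass, kg (a real estimate would use an effective mass)
    EV = 1.602e-19       # joules per electron-volt

    def box_energy_ev(n, length_nm, mass=M_E):
        """E_n = n^2 h^2 / (8 m L^2) for a particle in a 1-D infinite well of width L."""
        L = length_nm * 1e-9
        return n**2 * H**2 / (8 * mass * L**2) / EV

    for size in (20, 10, 5, 2):
        print(f"L = {size:>2} nm  ->  ground-state energy {box_energy_ev(1, size):.4f} eV")
    # Energies scale as 1/L^2, so shrinking the confining dimension raises the levels
    # (and the effective bandgap) -- the blueshift in emission described above.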
The solution of this problem provides a direct mathematical connection between the energy states and the dimensions of the confining space. Decreasing the volume or the dimensions of the available space increases the energy of the states. Shown in the diagram is the change in electron energy level and bandgap between a nanomaterial and its bulk state. The following equation shows the relationship between energy level and dimension spacing:

    E = (h^2 / 8m) * (nx^2/Lx^2 + ny^2/Ly^2 + nz^2/Lz^2)

where h is the Planck constant, m is the mass of the particle, nx, ny and nz are the quantum numbers and Lx, Ly and Lz are the dimensions of the box. Research results provide an alternative explanation of the shift of properties at the nanoscale. In the bulk phase, the surfaces appear to control some of the macroscopically observed properties. However, in nanoparticles, surface molecules do not obey the expected configuration in space. As a result, surface tension changes tremendously.
Classical mechanics view
The Young-Laplace equation can give a background on the investigation of the scale of forces applied to the surface molecules:

    dP = gamma * (1/R1 + 1/R2), which for a sphere of radius r reduces to dP = 2*gamma/r

where gamma is the surface tension and R1 and R2 are the principal radii of curvature. Under the assumption of spherical shape, and solving the Young-Laplace equation for the new radii (nm), we can estimate the new dP (GPa). The smaller the radius, the greater the pressure. The increase in pressure at the nanoscale results in strong forces toward the interior of the particle. Consequently, the molecular structure of the particle appears to be different from the bulk mode, especially at the surface. These abnormalities at the surface are responsible for changes of inter-atomic interactions and bandgap.
Description and Distribution Lyme disease is an illness caused by a spirochete bacterium (Borrelia burgdorferi). This disease is transmitted to people and animals primarily by the bite of the black-legged tick, Ixodes scapularis. In 1975, Lyme disease was first recognized in the United States in children from Lyme, Connecticut. However, the bacterium that causes Lyme disease was not identified until 1982. Since then, Lyme disease has been reported with increasing frequency. The majority of cases occur along the east coast from Delaware to Massachusetts and the upper midwest in Wisconsin and Minnesota. Lyme disease has now been reported from 43 states, including Michigan. In Michigan, the first official reported human case of Lyme disease was in 1985. Cases have now been reported in both the upper and lower peninsula and in most of Michigan's 83 counties. It is anticipated that the number of cases reported will continue to increase. Transmission and Development The black-legged tick, Ixodes scapularis, is the most common carrier of Lyme disease in the mid-western and eastern states. I. pacificus is known to be the vector in the west. Other species of ticks such as the dog tick or wood tick, the lone-star tick and the rabbit tick, and biting insects such as mosquitoes, deer flies and horse flies have been shown to carry the Lyme disease bacterium. However, their ability to transmit the disease is not known at this time. Studies are continuing in Michigan to determine the extent of the black-legged tick population. The black-legged tick has a rather complex life cycle involving development from an egg to a larva, larva to a nymph, and finally from a nymph to an adult. This process usually takes two years. Larvae and nymphs require blood to proceed to the next development stage, and adult females need blood to lay their eggs. At each of these stages, the tick seeks an animal host for a single blood meal and then drops off the host. In their two years of life, black-legged ticks spend very little time (only about two and a half weeks) on hosts. The rest of the time is spent off the host, developing into the next stage and waiting for another host to come along. In the spring, the eggs hatch into larvae. During the summer, the larvae feed on mice, squirrel, raccoon, rabbit and other animals. In the fall, the larvae mature into nymphs, which then hibernate over winter. In the spring and summer these nymphs become active again, preferring to feed on mice. It is during the time the tick is in the nymphal stage that it is most likely to infect humans. At the end of its life cycle the female tick lays eggs and dies. Clinical Signs and Pathology Lyme disease in humans is usually not a life-threatening illness and one should regard the health risks it does pose with concern rather than alarm. It is most often a mild illness mimicking a summer flu, but serious problems involving the heart, joints and nervous system may develop in some individuals. Lyme disease in humans may progress through three stages, depending upon the individual. In stage 1, people may have any combination of the following signs and symptoms: headache, nausea, fever, spreading rash, aching joints and muscles and fatigue. These signs and symptoms may disappear altogether, or they may reoccur intermittently for several months. A characteristic red rash, called erythema migrans (EM) may appear within 3 to 32 days after a person is bitten by an infected tick. The rash is circular in shape and can attain a diameter of 2 to 20 inches. 
EM is not restricted to the bite site and more than one lesion may occur on the body. Up to 30% of the people who have Lyme disease do not develop EM lesions, making diagnosis more difficult. In stage 2 (weeks to months after initial exposure to the bacterium or after the first symptoms appear), some people may develop complications involving the heart and/or nervous system. Specific disorders may include various degrees of heart block, nervous system abnormalities such as meningitis, encephalitis and facial paralysis (Bell's palsy), and other conditions involving peripheral nerves. Painful joints, tendons, or muscles may also be noted during this stage of the disease. Arthritis is the most commonly recognized long-term sign of Lyme disease (stage 3). From one month to several years after their first symptoms appear, people may experience repeated attacks of arthritis. Dogs, cats, cattle, horses and other domestic animals may also exhibit a variety of signs, including fever and lameness. Wild animals such as deer, raccoon and mice show no signs and apparently suffer no ill effects from the disease. Lyme disease is difficult to diagnose because the disease mimics many other diseases and there is no definitive test for it at this time. A diagnosis should be based on a history of tick bite, the presence of a circular rash, an examination by a physician for other symptoms, and laboratory tests. The most reliable indication of Lyme disease is a large circular rash (erythema migrans). If you develop any of the symptoms or recall being bitten by a tick, discuss your suspicions of Lyme disease with your physician. Treatment and Prevention Prompt diagnosis and treatment with antibiotics can cure the infection and prevent later complications in both humans and domestic animals. Treatment during later stages of the disease often requires more intensive antibiotic therapy. While there is no sure way to completely eliminate the chance of contracting Lyme disease, there are several specific preventative measures one can take: - Wear long pants tucked into boots or socks and wear long-sleeved shirts buttoned at the cuff. - Use tick repellents containing 0.5% permethrin or mosquito repellents containing 30% DEET. - Examine clothing, skin and pets for ticks and remove them promptly. Be aware of Lyme disease, but do not be so concerned that you cannot enjoy the outdoors. The risk of developing the illness is minimal in Michigan and even if infection occurs the disease can be diagnosed and treated with antibiotics. The relationship between deer and the disease is complex. Deer show no symptoms of the disease. Deer may carry small numbers of the spirochete that causes Lyme disease but they are dead-end hosts for the bacterium. Deer cannot infect another animal directly and no deer hunter has acquired the disease from dressing out a deer. Infected ticks that drop from deer present little risk to humans or other animals since the ticks are now at the end of their life cycle and will not feed again. There is no evidence that humans can become infected by eating venison from an infected deer. In addition, the Lyme organism is killed by the high temperatures that would be reached when venison is cooked or smoked. Deer supply the tick that transmits the bacterium with a place to mate and provides a blood meal for the female tick prior to production of eggs. Research has shown that white-tailed deer are important to the reproductive success of the black-legged tick. 
In the absence of deer, this tick will opportunistically feed on other medium sized mammals and humans. As a management tool for Lyme Disease, there is still debate in the scientific community as to whether reducing the number of deer present in an area will effectively or dramatically reduce Lyme Disease "risk". There is very little risk of hunters contracting Lyme disease when pursuing game. This is because hunters are in the woods from October through March when the nymphal stage of the tick is inactive. Even though the adult stage of the tick is active in the fall (when temperatures are above 40º F), the heavier clothing that hunters wear makes it difficult for ticks to find and attach to bare skin. In addition, the risk of picking up ticks from game animals is insignificant compared with that from the environment (meadows, brushland or woods). For questions about wildlife diseases, please contact the Michigan DNR Wildlife Disease Laboratory.
The mantle is the second layer of the Earth. It is the biggest layer, taking up about 84 percent of Earth's volume. In this section you will learn more about how hot the mantle is, what it is made of, and some interesting facts about the mantle.

The mantle is divided into two sections: the asthenosphere, the lower part of the mantle, which behaves like a soft, plastic solid that can slowly flow, and the lithosphere, the top part of the mantle, made of cold, dense, rigid rock.

The average temperature of the mantle is about 3000° Celsius, and it becomes much hotter as you get closer to the inner core. The mantle is composed of silicates of iron and magnesium, sulphides and oxides of silicon and magnesium. It is about 2900 km thick.

Convection currents happen inside the mantle: hot molten rock deep in the mantle is less dense, so it rises, while cooler rock near the top is denser, so it sinks. As the sinking rock heats up it rises once more, keeping the mantle material in a continuous circular motion.
The vestibular apparatus or vestibular system, or balance system, is the sensory system that provides the dominant input about our movement and orientation in space (see equilibrioception). Together with the cochlea, the auditory organ, it is situated in the vestibulum in the inner ear (Figure 1). As our movements consist of rotations and translations, the vestibular system comprises two components: the semicircular canals, which indicate rotational movements; and the otoliths, which indicate linear translations. The vestibular system sends signals primarily to the neural structures that control our eye movements, and to the muscles that keep us upright. The projections to the former provide the anatomical basis of the vestibulo-ocular reflex, which is required for clear vision; and the projections to the muscles that control our posture are necessary to keep us upright.

The canals are cleverly arranged in such a way that each canal on the left side has an almost parallel counterpart on the right side. Each of these three pairs works in a push-pull fashion: when one canal is stimulated, its corresponding partner on the other side is inhibited, and vice versa. This push-pull system allows us to sense all directions of rotation: while the right horizontal canal gets stimulated during head rotations to the right (Fig 2), the left horizontal canal gets stimulated (and thus predominantly signals) by head rotations to the left.

Vestibulo-ocular reflex (VOR)

The vestibular system needs to be fast: if we want clear vision, head movements need to be compensated almost immediately. Otherwise our vision corresponds to a photograph taken with a shaky hand. To achieve clear vision, signals from the semicircular canals are sent as directly as possible to the eye muscles. This direct connection involves only three neurons, and is correspondingly called the three-neuron arc (Fig 3). Using these direct connections, eye movements lag the head movements by less than 10 ms, one of the fastest reflexes in the human body. The automatic generation of eye movements from movements of the head is called the vestibulo-ocular reflex, or VOR for short.

This reflex, combined with the push-pull principle described above, forms the physiological basis of the rapid head impulse test or Halmagyi-Curthoys test: when the function of your right balance system is reduced by a disease or by an accident, quick head movements to the right cannot be sensed properly any more. As a consequence, no compensatory eye movements are generated, and the patient cannot fixate a point in space during this rapid head movement. Another way of testing the VOR response is to attempt to induce nystagmus (compensatory eye movements in the absence of head motion) by pouring cold or warm water into the ear.

The mechanics of the semicircular canals can be described by a damped oscillator. If we designate the deflection of the cupula with \Theta, and the head velocity with \Omega, the cupula deflection is approximately

\Theta(s) \approx \frac{\alpha \, T_1 T_2 \, s}{(T_1 s + 1)(T_2 s + 1)} \, \Omega(s)

where \alpha is a proportionality factor, and s corresponds to the frequency. For humans, the time constants T1 and T2 are approximately 3 ms and 5 s, respectively.
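As a rough numeric check of the transfer function reconstructed above (the formula and the choice of alpha = 1 are my own illustrative assumptions, with T1 = 3 ms and T2 = 5 s taken from the text), a few lines of Python show how flat the response is across typical head-movement frequencies:

```python
# Illustrative sketch: evaluate H(s) = alpha*T1*T2*s / ((T1*s + 1)*(T2*s + 1))
# at a few frequencies. alpha is set to 1 for convenience; T1 and T2 are the
# time constants quoted in the text (used here for illustration only).
import numpy as np

alpha, T1, T2 = 1.0, 0.003, 5.0          # 3 ms and 5 s
freqs_hz = np.array([0.1, 1.0, 10.0])    # typical head-movement range
s = 1j * 2 * np.pi * freqs_hz            # s = j*omega
H = alpha * T1 * T2 * s / ((T1 * s + 1) * (T2 * s + 1))

for f, h in zip(freqs_hz, np.abs(H)):
    print(f"{f:5.1f} Hz -> |H| = {h:.4f}")
# The magnitude stays close to alpha*T1 (about 0.003) across the whole band,
# i.e. cupula deflection is roughly proportional to head velocity there.
```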
As a result, for typical head movements, which cover the frequency range of 0.1 Hz to 10 Hz, the deflection of the cupula is approximately proportional to the head velocity (!). This is very useful, since the velocity of the eyes must be opposite to the velocity of the head in order to have clear vision.

Signals from the vestibular system also project to the cerebellum (where they are used to keep the VOR working, a task usually referred to as learning or adaptation) and to different areas in the cortex. The projections to the cortex are spread out over different areas, and their implications are currently not clearly understood.

While the semicircular canals respond to rotations, the otoliths sense linear accelerations. We have two on each side, one called the utricle, the other the saccule. Figure 4C shows a cross section through an otolith: the otoconia crystals in the otoconia layer (Fig. 4, top layer) rest on a viscous gel layer, and are heavier than their surroundings. Therefore they get displaced during linear acceleration, which in turn deflects the hair cells (Fig. 4, bottom layer) and thus produces a sensory signal. Most of the utricular signals elicit eye movements, while the majority of the saccular signals project to the muscles that control our posture.

While the interpretation of the rotation signals from the semicircular canals is straightforward, the interpretation of otolith signals is more difficult: since gravity is equivalent to a constant linear acceleration, we somehow have to distinguish otolith signals that are caused by linear movements from those that are caused by gravity. We can do that quite well, but the neural mechanisms underlying this separation are not yet fully understood.

Diseases of the vestibular system can take different forms, and usually induce vertigo and instability, often accompanied by nausea. The most common ones are vestibular neuritis, a related condition called labyrinthitis, and BPPV. In addition, the function of the vestibular system can be affected by tumors on the cochleo-vestibular nerve, an infarct in the brain stem or in cortical regions related to the processing of vestibular signals, and cerebellar atrophy. Less severe, but often also with large consequences, is vertigo caused by the intake of large amounts of alcohol.

BPPV, which is short for benign paroxysmal positional vertigo, is probably caused by pieces that have broken off from the otoliths and have slipped into one of the semicircular canals. In most cases it is the posterior canal that is affected. In certain head positions, these particles push on the cupula of the affected canal, which leads to dizziness and vertigo. This problem occurs rather frequently, often after hits to the head or after long bed rest. The tell-tale sign of BPPV is vertigo attacks which repeatedly appear when the head is brought into a specific orientation. In most cases BPPV can be eliminated (for the patient in an almost miraculous way) by lying down, bringing the head into the right orientation, and sitting up quickly.

- SensesWeb, which has been created by Tutis Vilis, contains fantastic animations - at a high level(!) - of all sensory systems, as well as the corresponding PDF files, and additional further links.
- Vestibular Primer (by David Dickman, Ph.D.) A very good, up-to-date introduction to the vestibular system.
- The Vestibular System. Stephen M. Highstein, Richard R. Fay, Arthur N.
Popper (eds). Springer-Verlag (2004). ISBN 0-387-98314-7. Comment: A book for experts, summarizing the state of the art in our understanding of the balance system.
- Vertigo: Its Multisensory Syndromes. Thomas Brandt. Springer-Verlag (2003). ISBN 0-387-40500-3. Comment: For clinicians and other professionals working with dizzy patients.
- Driver Drowsiness: Is something missing? J. Christopher Brill, Peter A. Hancock, Richard D. Gilson. University of Central Florida (2003). Comment: Research on driver or motion-induced sleepiness, aka 'sopite syndrome', links it to the vestibular labyrinths.

This page uses Creative Commons Licensed content from Wikipedia.
IGBT - An Introduction

What is IGBT?
- IGBT is a three-terminal power semiconductor switch used to control electrical energy.
- Both the Power BJT and the Power MOSFET have their own advantages and disadvantages.
- BJTs have lower conduction losses in the on-state condition, but have a longer turn-off time.
- MOSFETs have higher on-state conduction losses but lower turn-on and turn-off times.
- Combining the BJT and MOSFET monolithically leads to a new device called the Insulated Gate Bipolar Transistor.
- The other names of this device are GEMFET (Conductivity Modulated FET), COMFET (Conductivity Modulated Field Effect Transistor), IGT (Insulated Gate Transistor), bipolar mode MOSFET, and bipolar MOS transistor.
- It has superior on-state characteristics, good switching speed and an excellent safe operating area.
- The symbol is shown below. As shown, it has three terminals, namely Emitter, Collector and Gate.
- There is a disagreement in the engineering community over the proper symbol and nomenclature of the IGBT.
- Some prefer to consider the IGBT as basically a BJT with a MOSFET gate input and thus use the modified BJT symbol for the IGBT, as shown above.
- Some prefer to refer to drain and source rather than collector and emitter, as shown below.

- The V-I characteristic curves are drawn for different values of VGS.
- When VGS > VGS(threshold) the IGBT turns on.
- In this figure VGS4 > VGS3 > VGS2 > VGS1.
- By keeping VGS constant, the value of VDS is varied and the corresponding values of ID are noted down.
- As shown, the V-I characteristics of the IGBT are similar to those of a BJT.

- The transfer characteristics of the IGBT and the MOSFET are similar.
- The IGBT is in the off-state if the gate-emitter potential (VGE) is below the threshold voltage (VGE(threshold)).
- For gate voltages greater than the threshold voltage, the transfer curve is linear.
- The maximum drain current is limited by the maximum gate-emitter voltage.

The main advantages of the IGBT are:
- Good power handling capabilities.
- Low forward conduction voltage drop of 2 V to 3 V, which is higher than for a BJT but lower than for a MOSFET of similar rating.
- This voltage increases with temperature. This property makes the device easy to operate in parallel without danger of thermal instability.
- High-speed switching capability.
- Low gate current.
- Relatively simple voltage-controlled gate driver.

Some other important features of the IGBT:
- This power semiconductor device does not have the problem of secondary breakdown.
- So it has a large Safe Operating Area (SOA) and low switching losses.
- Only small snubbers are required.
- Absence of a body diode in the IGBT. (Remember that the Power MOSFET has a parasitic diode.)
- A separate diode must be added in anti-parallel when reverse conduction is required.
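As a small illustration of the transfer characteristic described above (off below the gate-emitter threshold, roughly linear above it), here is a hedged Python sketch; the threshold voltage and transconductance used are made-up example numbers, not data for any real device.

```python
# Idealized IGBT transfer characteristic: collector current vs. gate-emitter
# voltage. v_th and gain are illustrative placeholder values only.
def collector_current(v_ge: float, v_th: float = 4.0, gain: float = 5.0) -> float:
    """Return collector current (A) for a given gate-emitter voltage (V)."""
    if v_ge <= v_th:
        return 0.0                   # device is off below the threshold voltage
    return gain * (v_ge - v_th)      # roughly linear above threshold

for v in (2.0, 4.0, 6.0, 8.0, 10.0):
    print(f"V_GE = {v:4.1f} V -> I_C = {collector_current(v):5.1f} A")
```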
It might be kind of fun or funny depending on who is doing it, but what happens when you do stand on your head? Your blood all runs that direction. That might not be all that bad for healthy people, yet some could get a headache. I personally can't seem to do it. But, for the ones who have a low platelet count in their blood, it could be life threatening. Why, you ask? Here are some facts for you to know about thrombocytopenia! Say what? I know I can't pronounce it either.

- Thrombocytopenia refers to platelet counts in your blood lower than the normal range of 150,000 to 400,000.
- Causes of thrombocytopenia can be classified into 3 groups: diminished production, increased destruction, and splenic sequestration.

What is thrombocytopenia? Thrombocytopenia is a lower than normal number of platelets in the blood. Platelets are one of the cellular components of the blood along with white and red blood cells. Platelets play an important role in clotting and bleeding. Platelets are made in the bone marrow similar to other cells in the blood. The circulating platelets make up about two-thirds of the platelets that are released from the bone marrow. The other one-third is typically stored (sequestered) in the spleen. Platelets, in general, have a brief lifespan in the blood (7 to 10 days), after which they are removed from circulation. The number of platelets in the blood is referred to as the platelet count and is normally between 150,000 and 400,000 per microliter of blood. Platelet counts less than 150,000 are termed thrombocytopenia. A platelet count greater than 400,000 is called thrombocytosis.

Platelets initiate a sequence of reactions that lead to the formation of a blood clot. They circulate in the blood vessels and become activated if there is any bleeding or injury in the body. When activated, the platelets become sticky and adhere to one another and to the blood vessel wall at the site of the injury to slow down and stop the bleeding by plugging up the damaged blood vessel or tissue (hemostasis). A low platelet count in severe cases may result in spontaneous bleeding or may cause a delay in the normal process of clotting. In mild thrombocytopenia, there may be no adverse effects in the clotting or bleeding pathways.

What on earth did I just say? Well, without the platelets in your blood it will just keep flowing. Like a bloody nose that won't quit. Or if you cut yourself you could bleed to death because it will not stop coming out of you.

What are the complications of thrombocytopenia? The complications of thrombocytopenia may be excessive bleeding after a cut or an injury resulting in hemorrhage and major blood loss. However, spontaneous bleeding (without any injury or laceration) due to thrombocytopenia is uncommon, unless the platelet count is less than 10,000. Other complications may be related to any other underlying factors or conditions. For example, autoimmune thrombocytopenia related to lupus may be associated with other complications of lupus. TTP or HUS can have many complications including severe anemia, confusion or other neurologic changes, or kidney failure. HIT, or heparin-induced thrombocytopenia, can have devastating complications related to blood clot formation (thrombosis).

Thus standing on your head is not a good thing if your platelets are low, because all your blood would run to your head and cause swelling. It is not thick enough to stay in other places of your body. It is important that you follow your doctor's orders.
Get your testing done and make sure you know what things mean. We only have one body and to live a long life and be healthy enough to enjoy your freedom and all the money you are making in your blogging business, you need to take care of yourself. Here are some more things to keep in mind and be aware of. Blood counts, changes in Cancer, cancer treatment and other diseases often cause drops in blood count levels. The problems caused by low blood counts depend on which type of blood cell is affected. For more on low red blood cell (hemoglobin or HGB) counts, see the section called “Anemia.” For more on low white blood cell (WBC) counts, see the section called “Infection, increased risk.” For more on low platelet (PLT) counts see the section called “Bleeding or low platelet count.” Bleeding or low platelet count Platelets are cells that help your blood clot, so you stop bleeding. A normal platelet (PLT) count on a blood test is about 150,000 to 450,000. Normal clotting is still possible with a platelet count of 100,000. The danger of serious bleeding is very high when the platelet count goes below 20,000. What to look for - Bleeding from anywhere (such as the mouth, nose, or rectum) - Bloody or dark brown vomit that looks like coffee grounds - Bright red, dark red, or black stools (poop) - Women may have heavy vaginal bleeding during monthly periods - New bruises on the skin - Red pinpoint dots on the skin, usually starting on feet and legs - Bad headaches, dizziness, or blurred vision - Weakness that gets worse - Pain in joints or muscles What the patient can do - Use only an electric razor (not blade) for shaving. - Avoid contact sports (such as wrestling, boxing, or football) and any other activities that might lead to injury. - Protect your skin from cuts, scrapes, and sharp objects. - Use a soft toothbrush. - If your mouth is bleeding, rinse it a few times with ice water. - Talk to your cancer team about whether you should put off flossing your teeth until your platelet counts improve. - Do not blow your nose or cough with great force. - Keep your head level with or above your heart (lie flat or stay upright). - Use a stool softener to avoid constipation and straining during a bowel movement. Do not use enemas or suppositories of any kind. - Do not put anything in your rectum, including suppositories, enemas, thermometers, etc. - Stay away from anti-inflammatory pain medicines, such as aspirin, naproxen, or ibuprofen (Motrin®, Advil®, Naprosyn®, Aleve®, Midol®) and medicines like them unless your cancer team tells you to use them. Check with your pharmacist if you’re not sure whether a medicine is in this class of drugs, or if it contains one of them. - If bleeding starts, stay calm. Sit or lie down and get help. What caregivers can do For nosebleeds, have the patient sit up with head tilted forward, to keep blood from dripping down the back of the throat. Put ice on the nose and pinch the nostrils shut for 5 minutes before releasing them. Ice on the back of the neck may also help. For bleeding from other areas, press on the bleeding area with a clean, dry washcloth or paper towel until bleeding stops. Now that you have had a lesson on Blood and why standing on your head may not be good, I want to ask you how you would share this fact with others. come and share with me here on Facebook.
The future of these plants, called phytoplankton, is important because they exist at the base of the marine food web and represent a large source of food for fish. Also, they affect global climate by using atmospheric carbon dioxide, a greenhouse gas. Phytoplankton depend upon nitrogen and phosphorus to grow and, ultimately, replenish the supply of these nutrients in the ocean. Since the 1930s, scientists have known that the average nitrogen-to-phosphorus (N:P) nutrient ratio of phytoplankton closely mirrors the N:P ratio in the ocean - 15:1 for the plants and 16:1 for the water. Scientists accepted this as a constant called the Redfield ratio, named after the late Harvard University scientist Alfred Redfield. But researchers at the Georgia Institute of Technology and Princeton University designed a mathematical model based on phytoplankton physiology. It shows a broad range of N:P ratios are possible depending on the conditions under which species grow and compete. This research - part of a larger biocomplexity research project led by Professor Simon A. Levin at Princeton -- is published in the May 13 edition of the journal Nature. "The take-home message is that this finding reinforces what some researchers have been saying lately - that N:P is not so fixed," said lead author Christopher Klausmeier, a Georgia Tech assistant professor of biology and former postdoctoral fellow at Princeton. Other authors are Elena Litchman, also of Georgia Tech, and Tanguy Daufresne and Levin of Princeton. "This shows the range of ratios within which we could expect the ocean to change in the future," Klausmeier said. "Right now we have 16:1, but 500 years from now, if we have a different mix of growth conditions, then it might change the overall N:P needs of the phytoplankton community and the ocean." Under two extreme conditions - one with few resources because of increased competition and the other with abundant nutrients - researchers determined the optimal strategies that phytoplankton use to allocate the cellular machinery - namely ribosomes and chloroplasts -- for nutrient uptake. Ribosomes assemble two proteins that take up nitrogen and phosphorus. Chloroplasts gather energy from the sun. "When competing to the very end, then the optimal strategy has a lot of resource acquisition machinery, but not much assembly machinery," Klausmeier explained. "In that case, there aren't many ribosomes, and therefore not much phosphorus. So if you have a small amount of phosphorus, you have a high N:P ratio. This strategy is best for competition to equilibrium. "In the other scenario, where nutrients are very available, you have a lot of ribosomes. Then you have a lot of phosphorus and therefore, a low N:P ratio. This is optimal under exponential growth conditions," Klausmeier added. Given these optimal strategies, researchers were able to determine the N:P needs of species competing at the extremes. "These two scenarios set the endpoints of what happens in reality," Klausmeier explained. "In the real world, it's a mix of conditions." From a literature review earlier in the study, they found that N:P ratios among different species vary from 7:1 to 43:1 - with one oddity requiring a 133:1 ratio. Results from modeling the optimal strategies mirror this range of ratios, Klausmeier said, in contrast with the long-accepted constant ratio of N:P in the ocean. "The 16:1 Redfield ratio has been used too dogmatically by some scientists," Klausmeier said. 
"It has been treated as an optimum ratio, but that's not what Redfield intended. He has been misunderstood and oversimplified. This ratio is an average that is subject to change." As is the case in many other ecological studies, researchers in this study had to confront the natural variability found in nature. "This is a very ecological story," Klausmeier noted. "One thing that frustrates ecology and makes it tough is that there's a lot of natural variability. We want to explain the variability, not just the average number. So this problem turned out to be more complicated because of the variability." Klausmeier's findings have broader implications, as well, because of the roles phytoplankton play in the ocean ecosystem and across the globe. "Phytoplankton do half of the planet's primary production," Klausmeier explained. "They capture energy from the sun and have a big role in biogeochemical cycles -- how elements cycle through the biosphere. Phytoplankton have a main role in the carbon cycle. They need carbon dioxide to grow, so they suck it out of the atmosphere, controlling its presence there. And that ties into global climate." Klausmeier believes his study contributes to a better understanding of global biogeochemical cycles. "It's important for us to understand global climate and how it might change in the future," he added. "And ocean life, such as phytoplankton, is a big player in climate." This study was funded by grants from the National Science Foundation and the Andrew Mellon Foundation for Levin's biocomplexity project. Biocomplexity refers to studies of ecological and evolutionary systems as a whole. Georgia Tech Research News and Research Horizons magazine, along with high-resolution JPEG images, can be found on the Web at http://www. For technical information, contact: 1. Christopher Klausmeier, Georgia Tech, 404-385-4241 or E-mail: [email protected]. 2. Simon A. Levin, Princeton University, 609-258-6880 or E-mail: [email protected].
Year 6 Eagles discussed unity and harmony. They discussed their ideas of what it meant, talking about working together, ensuring there is no war, hatred or fights. We looked at the story of Noah and how in Christianity, the story is told as God asking Noah to unite the people and follow his commandments. As the people didn't, God sent rain for 40 days and 40 nights, resulting in only Noah and his family being saved from the flood. Year 6 then discussed Jainism, an ancient Indian religion, older than Hinduism. It taught unity and harmony and, as with all other religions, it taught forgiveness. A huge emphasis is put on forgiveness and the class discussed why. "It makes you feel better." "To forgive means you keep your friends." In Jainism they used forgiveness circles and the class used this concept to research some forgiveness quotes. It turned out that the children were inspired to write their own! Have a look and see what you think…
This article involves children in firsthand experiences. Basing children’s learning on content that can be experienced firsthand guarantees a measure of meaning. Children are not asked to gain knowledge secondhand, by listening to someone else tell them about a distant place or time. Rather, children are involved in touching, taking apart, tasting, and smelling things in their here-and-now world. By doing so, they are the ones who are receiving information directly and making sense of it. Piaget’s work fully documented that firsthand experiences are necessary if children are to learn, think, and construct knowledge (Piaget & Inhelder, 1969). When children actually handle objects in their environment, they gain knowledge of the physical properties of the world in which they live. As they experiment with a wide variety of objects and materials, they learn that some things are heavy, others light; some are rough or smooth, others sharp or rounded. These concepts cannot be taught through direct instruction, but can only be learned through firsthand experiences. When children engage in firsthand experiences, their minds are as active as their bodies. By handling objects and observing things in their world, children begin to compare them. They classify and sequence objects and things, relating new information to their existing ideas of how the world works, fitting it into their schemes or ideas. When information doesn’t fit their existing ideas, they change these or create new ones. As they do so, they are constructing their own knowledge and storing it as concepts, rules, or principles (Piaget & Inhelder, 1969). Then, too, when children act on their environment, they are figuring out how to do things. They learn how to balance blocks, care for themselves and others, and become part of a democratic group. Through daily, firsthand experiences, children have the opportunity to confirm or change their ideas about how things work and what they can do with the things in their world. These initial, often incomplete and tentative, hypotheses and schemes about their world are the foundation on which all subsequent learning is built. © ______ 2006, Merrill, an imprint of Pearson Education Inc. Used by permission. All rights reserved. The reproduction, duplication, or distribution of this material by any means including but not limited to email and blogs is strictly prohibited without the explicit permission of the publisher.
16-year-old Helena Muffly wrote exactly 100 years ago today: Monday, January 8, 1912: A regular snow storm set in this afternoon. How beautiful the snowflakes looked as they descended to ground. Am now able to extract the cube root without difficulty. Pa came for Jimmie and me this evening. Her middle-aged granddaughter’s comments 100 years later: The teacher must have clarified how to do cube roots. Grandma was struggling with cube roots the previous Friday. As a parent who had strong opinions during the “math wars” of the 1990’s about what should be included in (and, perhaps more importantly, what should be excluded from) the math curriculum, I’m fascinated by early 20th century math text books. In textbooks from a hundred years ago, there was more focus on calculation than there is today but they also contained some cool word problems. Cube roots are a great example of this. Here are some cube root word problems from a 1911 textbook called Kimball’s Commercial Arithmetic: 1. If a cubical block contains 21,952 cubic inches, how many square feet of paper will be required to cover the entire surface? 2. The entire surface of a cubic block is 384 square feet. How many 1-foot cubes can be cut from the block, allowing nothing for waste? 3. A cubical cistern holds 400 bbl. of water. How deep is it? 4. What are the dimensions of a cube that has the same volume as a box 2 ft. 8 in. long, 2 ft. 3 in. wide, and 1 ft. 4 in. deep? The texts also contained lots of “tricks” and principles. 1. The cube of a number cannot have more than three times as many figures as its root, nor but two less. 2. If a number is separated into periods of three figures each beginning at the units’ place, the number of figures in the cube root will be the same as the number of periods. I thought of several easy cube roots (100 is the cube root of 1,000,000. and 5 is the cube root of 125.), and decided that the principles are correct. (Of course they were correct—but somehow I felt better after I thought of a few problems to confirm it.) If you’re a math geek, here are some previous posts that explored the math curriculum and problems from a hundred years ago.
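For readers who want to check the arithmetic, here is a short Python sketch that works problem 1 from the list above and spot-checks the second principle; the only specific figures used are the ones given in the textbook excerpt.

```python
import math

# Problem 1: a cubical block contains 21,952 cubic inches.
side_in = round(21952 ** (1 / 3))         # cube root -> 28 inches
assert side_in ** 3 == 21952
surface_in2 = 6 * side_in ** 2            # six square faces
surface_ft2 = surface_in2 / 144           # 144 square inches per square foot
print(side_in, surface_in2, surface_ft2)  # 28, 4704, ~32.67 sq ft of paper

# Principle 2: the cube root has as many digits as the cube has
# three-figure periods (counted from the units' place).
for n in (5, 28, 100, 473):
    cube = n ** 3
    periods = math.ceil(len(str(cube)) / 3)
    assert periods == len(str(n))
```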
Galileo Galilei was the first astronomer to use a telescope to study the heavens. Galileo made a number of observations that finally helped convince people that the Sun-centered solar system model (the heliocentric model), as proposed by Copernicus, was correct. These arguments can be divided into two kinds: those that proved that the Ptolemaic model was incorrect, and those that undermined the broader philosophy of Aristotelianism that included the Ptolemaic model. We'll first consider some philosophically important observations and then the ones that proved Venus, at least, goes around the Sun and not around Earth.

Sun and Moon

One of the ideas that made Aristotelianism popular with the church during the Middle Ages was that the heavens are perfect. This also meant that they were unchanging, because if they change then either they weren't perfect before or they won't be perfect after the change. Galileo discovered spots on the Sun and also saw that the surface of the Moon was rough. People really tried hard to account for these observations without making the heavens imperfect; one suggestion was that over the mountains of the Moon there was a layer of clear crystal so the final surface would be smooth and perfect!

Galileo saw near Jupiter what he first thought to be stars. When he realized that the stars were actually going around Jupiter, it negated a major argument of the Ptolemaic model. Not only did this mean that the Earth could not be the only center of motion, but it also knocked a hole in another argument. The supporters of the Ptolemaic model argued that if the Earth were moving through space, the Moon would be left behind. Galileo's observations showed that the moons of Jupiter were not being left behind as Jupiter moved.

Phases of Venus

One observation definitely disproved the Ptolemaic model, although it didn't prove that Copernicus was right (as Tycho Brahe pointed out). This was the observation that Venus has phases, much like our Moon does. To the naked eye, Venus always appears as a bright dot in the sky. With a telescope, however, it is fairly easy to see the phases of Venus. Just as the Moon has phases, Venus too has phases based on the planet's position relative to us and the Sun. There was no way for the Ptolemaic model (Earth-centered solar system) to account for these phases. They can only occur as Galileo saw them if Venus is circling the Sun, not the Earth.
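To make the geometry concrete, here is a small, hedged Python sketch (assuming circular, coplanar orbits and a handful of arbitrary orbital positions) that computes the illuminated fraction of Venus as seen from Earth in a Sun-centered layout. It reproduces the full range of phases Galileo observed, which a Venus confined between Earth and the Sun could not show.

```python
# Toy heliocentric model: Sun at the origin, circular, coplanar orbits.
# Orbital radii are rough mean values in astronomical units (AU).
import math

R_EARTH, R_VENUS = 1.0, 0.723
earth = (R_EARTH, 0.0)                    # fix Earth on the x-axis for simplicity

for theta_deg in (0, 45, 90, 135, 180):   # Venus's position along its orbit
    th = math.radians(theta_deg)
    venus = (R_VENUS * math.cos(th), R_VENUS * math.sin(th))
    to_sun = (-venus[0], -venus[1])                         # Venus -> Sun
    to_earth = (earth[0] - venus[0], earth[1] - venus[1])   # Venus -> Earth
    cos_phase = ((to_sun[0] * to_earth[0] + to_sun[1] * to_earth[1])
                 / (math.hypot(*to_sun) * math.hypot(*to_earth)))
    cos_phase = max(-1.0, min(1.0, cos_phase))              # guard against rounding
    illuminated = (1 + cos_phase) / 2                       # fraction of the disk lit
    print(f"theta = {theta_deg:3d} deg -> illuminated fraction = {illuminated:.2f}")

# Near theta = 0 (Venus between Earth and Sun) the disk is almost dark; near
# theta = 180 (far side of the Sun) it is nearly full. Only a Sun-centered
# arrangement produces this full cycle of phases.
```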
One of the responsibilities for us as researchers is to have the courage to challenge accepted "truths" and to seek out new insights. Richard Feynman was a physicist who not only epitomized both of these qualities in his research but also took enormous pleasure in communicating the ideas of physics to students. Feynman won the Nobel Prize for his computational toolkit that we now call Feynman Diagrams. The techniques he developed helped the physics community make sense of Quantum Electrodynamics (QED) after the war, when the entire community was in a state of confusion about how to handle the infinities that appeared all over the place when one tried to make a perturbative expansion in the coupling.

Feynman was the subject of a recent TEDxCaltech conference, fittingly called, "Feynman's Vision: The Next 50 Years." The event was organized in recognition of the 50-year anniversary of Feynman's visionary talk, "There's Plenty of Room at the Bottom," in which he set out a vision for nanoscience that is only now beginning to be realized. It is also 50 years since he gave his revolutionary "Feynman Lectures on Physics," which educated generations of physicists. I had the honor of speaking about Feynman's contributions to computing, from his days at Los Alamos during the war, his Nobel Prize winning computational toolkit (Feynman Diagrams), and his invention of quantum computing. By striving to think differently, he truly changed the world. The following are some highlights from my presentation.

Parallel Computing Without Computers

Feynman worked on the Manhattan Project at Los Alamos in the 1940s with Robert Oppenheimer, Hans Bethe, and Edward Teller. In order to make an atom bomb from the newly-discovered trans-uranic element, Plutonium, it was necessary to generate a spherical compression wave to compress the Plutonium to critical mass for the chain reaction to start. It was, therefore, necessary to calculate how to position explosive charges in a cavity to generate such a compression wave; these calculations were sufficiently complex that they had to be done numerically. The team assigned to perform these calculations was known as the "IBM team," but it should be stressed that this was in the days before computers and the team operated on decks of cards with adding machines, tabulators, sorters, collators, and so on. The problem was that the calculations were taking too long, so Feynman was put in charge of the IBM team.

Feynman immediately discovered that because of the obsession with secrecy at Los Alamos, the team members had no idea of the significance of their calculations or why they were important for the war effort. He went straight to Oppenheimer and asked for permission to brief the team about the importance of their implosion calculations. He also discovered a way to speed up the calculations. By assigning each problem to a different colored deck of cards, the team could work on more than one problem at once. While one deck was using one of the machines for one stage of the calculation, another deck could be using a different machine for a different stage of its calculation. In essence, this is a now-familiar technique of parallel computing—the pipeline parallelism familiar from the Cray vector supercomputers, for example. The result was a total transformation. Instead of completing only three problems in nine months, the team was able to complete nine problems in three months!
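The throughput gain Feynman exploited is easy to see in a toy model. The sketch below is purely illustrative (the stage counts and durations are invented, not the actual Los Alamos figures): with S equal stages, a pipeline finishes N jobs in (S + N - 1) stage-times instead of the S*N stage-times needed when each job runs to completion before the next begins.

```python
# Toy model of pipeline parallelism with equal-length stages.
def sequential_time(n_jobs: int, n_stages: int, stage_time: float) -> float:
    # Each job runs through all stages before the next job starts.
    return n_jobs * n_stages * stage_time

def pipelined_time(n_jobs: int, n_stages: int, stage_time: float) -> float:
    # Once the pipeline is full, one job finishes every stage interval.
    return (n_stages + n_jobs - 1) * stage_time

# Invented numbers for illustration: 3 stages of 1 "month" each, 9 problems.
print(sequential_time(9, 3, 1.0))  # 27.0
print(pipelined_time(9, 3, 1.0))   # 11.0
```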
Of course, this led to a different problem when management reasoned that it should be possible to complete the last calculation needed for the Trinity test in less than a month. To meet this deadline, Feynman and his team had to address the more difficult problem of breaking up a single calculation into pieces that could be performed in parallel. My next story starts in 1948 at the Pocono Conference where all the great figures of physics—Niels Bohr, Paul Dirac, Robert Oppenheimer, Edward Teller, and so on—had assembled to try to understand how to make sense of the infinities in QED. Feynman and Schwinger were the star speakers, but Feynman was unable to make his audience understand how he did his calculations. His interpretation of positrons as negative energy electrons moving backwards in time was just too hard for them to accept. After the conference, Feynman was in despair and later said, "My machines came from too far away." Less than a year later, Feynman had his triumph. At an American Physical Society meeting in New York, Murray Slotnick talked about some calculations he had done with two different meson-nucleon couplings. He had shown that these two couplings indeed gave different answers. After Slotnick's talk, Oppenheimer got up from the audience and said that Slotnick's calculations must be wrong since they violated Case's Theorem. Poor Slotnick had to confess that he had never heard of Case's Theorem and Oppenheimer informed him that he could remedy his ignorance by listening to Professor Case present his theorem the following day. That night, Feynman couldn't sleep so he decided to re-do Slotnick's calculations by using his diagram techniques. The next day at the conference, Feynman sought out Slotnick, told him what he had done, and suggested they compare results. "What do you mean you worked it out last night?" Slotnick responded. "It took me six months!" As the two compared answers, Slotnick asked, "What is that Q in there, that variable Q?" Feynman replied that the Q was the momentum transfer as the electron was deflected by different angles. "Oh," Slotnick replied. "I only have the limiting value as Q approaches zero. For forward scattering." Feynman said, "No problem, we can just set Q equal to zero in my formulas!" Feynman found that he had obtained the same answer as Slotnick. After Case had presented his theorem, Feynman stood up at the back of the audience and said, "Professor Case, I checked Slotnick's calculations last night and I agree with him, so your theorem must be wrong." And then he sat down. That was a thrilling moment for Feynman, like winning the Nobel Prize—which he did much later—because he was now sure that he had achieved something significant. It had taken Slotnick six months to do the case of zero momentum transfer while Feynman had been able to complete the calculation for arbitrary momentum transfer in one evening. The computational toolkit that we now call Feynman Diagrams have now penetrated to almost all areas of physics and his diagrams appear on the blackboards of physicists all around the world. This toolkit is undoubtedly Feynman's greatest gift to physics and the story perfectly illustrates Feynman's preference for concrete, detailed calculation rather than reliance on more abstract theorems. The Physics of Computation At the invitation of his friend Ed Fredkin, Feynman delivered a keynote lecture at "The Physics of Computation" Conference at MIT in 1981. 
Feynman considered the problem of whether it was possible to perform an accurate simulation of Nature on a classical computer. As Nature ultimately obeys the laws of quantum mechanics, the problem reduces to simulating a quantum mechanical system on a classical computer. Because of the nature of quantum objects like electrons, truly quantum mechanical calculations on a classical computer rapidly become impractical for more than a few 10's of electrons. Feynman then proceeded to consider a new type of computer based on quantum mechanics: a quantum computer. He realized that this was a new type of computer: "Not a Turing machine, but a machine of a different kind." Interestingly, Feynman did not go on to explore the different capabilities of quantum computers but simply demonstrated how you could use them to simulate true quantum systems. By his presence at the conference, Feynman stimulated interest both in the physics of computation and in quantum computing. At this conference 30 years later, we heard several talks summarizing progress towards actually building a quantum computer. In the last five years of his life, Feynman gave lectures on computation at Caltech, initially with colleagues Carver Mead and John Hopfield, and for the last three years by himself. I was fortunate enough to be asked by Feynman to write up his "Lectures on Computation." The lectures were a veritable tour de force and were probably a decade ahead of their time. Feynman considered the limits to computation due to mathematics, thermodynamics, noise, silicon engineering, and quantum mechanics. In the lectures, he also gave his view about the field of computer science: He regarded science as the study of natural systems and classified computer science as engineering since it studied man-made systems. Inspiring Later Generations Feynman said that he started out very focused on physics and only broadened his studies later in life. There are several fascinating biographies of Feynman but the one I like best is No Ordinary Genius by Christopher Sykes. This is a wonderful collection of anecdotes, interview, and articles about Feynman and his wide range of interests—from physics, to painting, to bongo drums and the Challenger Enquiry. Feynman was a wonderful inspiration to the entire scientific community and his enjoyment of and enthusiasm for physics is beautifully captured in the TV interview, "The Pleasure of Finding Things Out," produced by Christopher Sykes for the BBC. Feynman is forever a reminder that we must try to think differently in order to innovate and succeed. —Tony Hey, corporate vice president of the External Research Division of Microsoft Research
Biscayne Bay Habitat Restoration

Through volunteer efforts and partnerships with supporting organizations, Tangency will be organizing various clean-ups and a mangrove habitat restoration project in Biscayne Bay. Red mangrove seeds are collected by volunteers, then planted and grown in a nursery for one year until they are ready for transplant to the proper location.

A wide assortment of fish, crustacean, mollusk, and jellyfish species call mangrove forests home. For coastal communities around the globe, these fisheries serve as vital sources of food. For an even greater diversity of fish, these mangrove labyrinths serve as nurseries for countless commercially caught fish species, including many reef-dwelling species. A study conducted on the Mesoamerican reef revealed that as many as 25 times more fish species live in reefs located near mangroves than in areas where mangrove forests have been cut down. The health of these forests is of critical importance now more than ever given the declining health of many reefs.

Timber and plant products

An extremely useful resource for many coastal and indigenous communities worldwide, mangroves provide a plethora of uses. Their wood is resistant to rot and insects, making it suitable for fuel and construction material, and their local ecosystems can be scoured for medicinal plants. Their leaves can serve as fodder for animals, and recent commercial exploits have seen their wood used for wood chips, pulp, and charcoal production.

Mangrove forests form barriers on the coast to prevent the erosion of coastlines by tides and storms. The root systems of mangrove forests are quite dense, trapping sediments and keeping them from washing out to sea. Where mangroves have been cleared out, storm damage from hurricanes and typhoons is often much more intense. By holding back these sediments, mangroves help sustain healthy reefs that would otherwise have been covered and suffocated by them.

It comes as a surprise that, given mangrove forests' diversity of species and their frequent nearness to tourist attractions like coral reefs and sandy beaches, only a few countries have tapped into the tourism potential these ecosystems have to offer. Places like Bonaire and Thailand offer snorkeling expeditions throughout the mangrove forests, giving tourists the opportunity to see a magical array of baby fish, jellyfish, urchins, and crabs making their way through the backdrop of interlaced roots digging deep into the sandy substrate. These forests must be valued intact, as they are, in order for them to provide such ecosystem services.

Beach Clean Ups

Removing plastics, debris, and trash from the shores of beaches and islands, as well as from the cluttered roots of mangroves, helps maintain a healthy ecosystem. Plastics pose a unique problem for the environment because they can take up to thousands of years to degrade and fully decompose. Wildlife can consume potentially toxic debris from the shoreline, or have it wrapped around them, as is the case with various penguins and birds with six-pack rings stuck around their necks.

Kayak Clean Ups

Not all debris is found on the shoreline, or easily accessible from land - kayaks provide a platform to dredge canals, pick up free-floating plastics, and move through mangrove forests. Kayaks also provide a means for volunteers to see many animals that may not be easy to spot from the shore, or on coastal areas close to urban centers - such as starfish in the water, or falcons and hawks in the mangrove canopy.
This engages volunteers and highlights clean ups as not only good for the environment, but a fun way to spend a day!
The Virus Phenomenon

Computer viruses take many forms these days, and seem to be lurking everywhere on the Internet, whether in the form of trojans, infected Web sites, or worms that eat bandwidth and slow computers to a crawl. Some of the same concepts and techniques are used by virus creators these days as were used by those that created the first viruses. The methods of transmission have changed: notably you almost never see floppy drives these days, and when you do the computer doesn't boot from the floppy, unless it is an antique! That's where most of the early viruses hid and spread from: the boot sector of floppy disks. Interestingly, what might have been the first personal computer virus was Elk Cloner, which infected Apple II computers via the boot sector of floppies. That was in 1982.

Viruses Through Time

Over a decade earlier, in 1971, came possibly the first computer virus. It infected computers on the network that was the forerunner of the Internet: ARPANET. This was the Creeper virus. These computers weren't personal computers. But with the spread of personal computers in the 1980s, virus writers found a fertile new field to infect.

1988 saw the Morris worm in what may have been the first intense worm infestation. The Morris worm was an accidentally infectious creation. Worms in recent times are made intentionally to spread as quickly and aggressively as possible. Some have slowed portions of the Internet to a crawl or crashed many thousands of computers.

As program suites such as Microsoft Office became a commodity sold with every new PC, and users of MS Word were common in businesses, homes, and schools, macro virus epidemics became possible. In 1999 the Melissa virus (a Word macro virus) spread using Word's built-in macro language, using it to send e-mails to the user's contacts. Another virus during that time period (in the year 2000) spread very rapidly by using e-mail as a vector. This was the ILOVEYOU virus. Users would soon have to learn that even mail sent from friends couldn't be trusted.

As Internet access became more and more widespread, and the bandwidth of Internet connections increased, the ability of viruses to spread faster than any human disease became the norm. In 2001 the Nimda virus used multiple methods of infection, and was one of the most rapidly spreading viruses ever seen. It also infected more systems than possibly seen before. Total infection counts are always estimates, however.

As Web sites proliferated, they also became more sophisticated and were often used for online commerce. This required databases, and on Microsoft server systems the common database was and is MS SQL Server. SQL injection became one vector for attacking these systems. Desktop computers often held a small, simple version of the SQL database engine, and this too could be infected. Worms such as SQL Slammer in 2003 spread rapidly and were notable for their tiny size.

Recently (in 2008), there was much news and hype about the Conficker worm. It didn't turn out to be as bad as expected regarding its planned attack from infected systems on a particular date, due to the preparations of ISPs. See my articles on the Conficker worm and how to protect against it to learn more.

Learn From the Past

If we don't learn from the past we are doomed to repeat it, so the saying goes. This is definitely true with computer viruses of all types.
For more information, check out my articles on how computer viruses are made, how they spread, what computer worms are, how to protect against computer worms, the differences between worms and viruses, and what trojan horses are.
The lower esophageal sphincter (LES) is in charge of tightening the esophagus after food is passed into the stomach. When the LES is weak, the acid from your stomach washes up into the lower part of your esophagus, causing a burning sensation. This is acid reflux. But does acid reflux have a connection to stomach or esophageal cancer?

The Link Between Acid Reflux and GI Cancer

Acid reflux, especially when it's only a periodic condition, does not increase the risk of esophageal or stomach cancer. However, when acid reflux becomes chronic, it results in the esophagus being regularly exposed to stomach acid, which can increase the risk of esophageal cancer. Chronic acid reflux is called Gastroesophageal Reflux Disease, commonly referred to as GERD.

Does Long-Term Use of Medications for Acid Reflux Increase Cancer Risk?

Proton Pump Inhibitors (PPIs) are commonly used to treat acid reflux, including GERD. They suppress acid production, reducing esophageal irritation and the typical symptoms of GERD, including heartburn and regurgitation. The drugs have been in use for more than two decades. In recent years, some researchers have raised questions about the long-term side effects of PPIs. Specific research studies conducted in the last few years have indicated an increased risk of stomach cancer with prolonged use of PPIs. Cells in the stomach secrete a hormone called gastrin, which stimulates the secretion of stomach acid. PPIs block acid production by shutting down the proton pumps. As a result, the body demands more acid and, consequently, more gastrin is formed. Excessive secretion of gastrin can lead to the formation of gastrointestinal tumors.

Does GERD Increase the Risk of Cancer?

When acid reflux is chronic, meaning it doesn't go away after a short period of time, it's considered Gastroesophageal Reflux Disease, commonly referred to as GERD. The stomach uses hydrochloric acid to help break down food and fight pathogens. The tough lining of the stomach can withstand the strong acid. However, the esophagus doesn't have that same lining. That means repeated exposure to the acid, because of GERD, damages the esophageal tissue. This results in a slightly higher risk of developing esophageal cancer because of the constant irritation of the esophagus.

GERD's Connection to Barrett's Esophagus

When ignored, GERD can advance to a more severe condition called Barrett's esophagus. The damaged tissues become thick, red, and begin to look and act like small-bowel tissue. About 10% of GERD patients develop Barrett's esophagus. People who have had trouble with acid reflux for over five years and experienced increased difficulty in swallowing are at the highest risk of the condition.

Does Barrett's Esophagus Cause Cancer?

Barrett's esophagus puts you at a higher risk of developing adenocarcinoma, the most common form of esophageal cancer. It takes many years for Barrett's esophagus to develop into cancer. Experts recommend that patients with GERD or Barrett's esophagus opt for routine examinations to determine if there are precancerous cells, called dysplasia. These can often be treated before cancer develops.

How is Barrett's Esophagus Diagnosed?

Barrett's esophagus is usually diagnosed through an upper endoscopy. A small piece of tissue is removed from the esophagus, and a biopsy is performed to confirm the diagnosis. The pathology report will also reveal the presence or absence of dysplasia, or in rare cases, cancerous cells in the esophagus.
How to Treat GERD Before it Becomes Barrett's Esophagus

There are several things you can do to help reduce GERD symptoms:
- Maintain a healthy weight.
- Eat slowly and be sure to chew food thoroughly.
- If you know that some foods trigger symptoms, avoid those foods.
- Don't lie down too soon after a meal. If you need to lie back, keep your head elevated.
- Wear looser fitting clothing.

GERD can be treated with a relatively simple surgery to repair the failed lower esophageal sphincter. This can be a good option for a more permanent fix that does not require long-term use of Proton Pump Inhibitors or other medications.

Caring for GI Cancers in Portland and Vancouver

If you have been diagnosed with GI cancer, including stomach or esophageal, as a result of GERD, Barrett's Esophagus, or PPI use, request a consultation with our GI cancer specialists. We understand that every patient is different. Our oncology team focuses on creating personalized treatment plans for all our patients. We also provide palliative care, diagnostic services, and access to relevant clinical research. If you have been diagnosed with cancer, give us a call. Our compassionate team of oncologists will take care of you through every step.
Be ours a religion which, like sunshine, goes everywhere. — Theodore Parker, 19th-century Unitarian minister and abolitionist IN TODAY'S SESSION... children learned about the Unitarian Universalist Principle about being free to search for what is true and right. A play, "Many Paths to God," showed that many people have different beliefs that meet the same spiritual needs. We made a collage of our own personal symbols and played a game with our Unitarian Universalist symbol, the chalice. Children experienced that everyone is free to develop their own beliefs and that the differences each brings are celebrated. EXPLORE THE TOPIC TOGETHER. Ask your child to retell the story "Many Paths to God." Then invite family members to talk about their beliefs. Does everyone in the family have similar beliefs? Do adults in the family have beliefs that differ from the beliefs of their parents? EXTEND THE TOPIC TOGETHER. Children thought of symbols that represent who they are. Explore your home. What symbols are in your home? What do they represent? Do you have symbols from many different religions? Identify the religions that are represented in your home. A Family Adventure. Visit and worship in a denomination of friends or relatives. What symbols do you see? What do the symbols stand for? How do their beliefs seem to differ from yours? Their values? Family Discovery. As a family, choose another religion to study, perhaps one you are not familiar with at all. At a local library or in your congregational library, find age-appropriate books on the religion. A Family Game. Guess the Symbol: Ask each person to find a small object they think symbolizes them. Have everyone secretly bring their symbol to a central place. Then, re-gather and try to guess which family member has chosen each item as a symbol. A Family Ritual. Find a book of prayers or meditations from many different religions. Read a different one each night at the dinner table or at bedtime. Vary the religious traditions as much as possible.
All Puritan communities lived both humbly and piously. But many Puritans were drawn to the mysterious atmosphere of the wilderness and the wild natives that resided amongst them. This mixture of adventure and piety affected Puritan literature greatly. In many of the captivity tales, "savage" Native Americans force Puritans to abandon their homes and follow them in bondage into unknown wilderness. The tales are written to illustrate a moral lesson, wherein the person (narrator) survives his or her ordeal through an unwavering devotion to God. Many narratives follow this formula. One of the most popular stories following this motif is Mary Rowlandson's "A Narrative of the Captivity and Restoration of Mrs. Mary Rowlandson". Her narrative is arguably the most famous captivity account of the English-Indian era. Rowlandson gives a vivid and graphic description of her eleven-week captivity by the Native Americans. Her narrative gives a first-person perspective into the conditions of her captivity. She gives insight into her personal views of the Indians, both before and after her capture. Rowlandson displays a change in her perception of the terms "savage" and "civilized", despite the fact that her overall world view remains the same.

In this narrative, Mary Rowlandson, the wife of a pastor and a mother of three from Lancaster, Massachusetts, recounts the invasion of her town by the Native Americans during King Philip's War. During the eleven-week time span, Rowlandson deals with the death of her youngest child, the loss and separation of her family and friends, and her terrible living conditions, all the while striving to keep her faith in God. She learns to cope with the Native Americans, which causes her attitude towards them to change gradually. She is at first shocked at the lifestyle and actions of the "savages", but suppresses her dependence on them.
The Second Intifada, also known as the Al-Aqsa Intifada, was a significant and prolonged period of conflict and violence that began in September 2000 and continued for several years. This Palestinian uprising marked a dark and tumultuous chapter in the ongoing Israeli-Palestinian conflict. The term “Intifada” itself means “shaking off” in Arabic and signifies a popular and often violent resistance against Israeli occupation. Origins of the Second Intifada The Second Intifada had multiple triggers and underlying causes, including: Collapse of Peace Talks: Frustration mounted as peace negotiations failed to produce a final agreement, particularly during the Camp David Summit in 2000. Visit to Al-Aqsa Mosque: The September 2000 visit by then-opposition leader Ariel Sharon to the Al-Aqsa Mosque compound in Jerusalem, one of the holiest sites in Islam, led to violent clashes between Palestinians and Israeli forces. Economic Hardships: Palestinians in the West Bank and Gaza Strip experienced economic difficulties and high unemployment, contributing to growing unrest. Political Struggles: A leadership vacuum and divisions within the Palestinian Authority, which was established after the Oslo Accords, further fueled discontent. Characteristics of the Second Intifada The Second Intifada was characterized by a range of activities and actions, including: Violence: Unlike the First Intifada, the Second Intifada was marked by more organized and deadly violence, including suicide bombings, shootings, and attacks on Israeli civilians. Israeli Response: Israel responded to the attacks with military operations, incursions, and the construction of a security barrier in the West Bank. Casualties: The Second Intifada resulted in many casualties among both Palestinians and Israelis. International Involvement: International actors, including the United States, the United Nations, and European countries, played roles in trying to mediate and end the violence. Impact and Outcomes The Second Intifada had significant and lasting consequences: Economic and Social Impact: The Palestinian territories faced severe economic and social challenges due to the conflict, affecting daily life and well-being. Security Measures: Israel implemented security measures, such as checkpoints and the West Bank barrier, in response to the violence. Impact on Peace Process: The violence and insecurity derailed peace negotiations and made achieving a lasting peace agreement more challenging. Legacy of Distrust: The Second Intifada deepened mutual distrust and animosity between Israelis and Palestinians. The Second Intifada represents a painful and divisive period in the Israeli-Palestinian conflict. While the violence eventually decreased, the scars and consequences of this uprising continue to shape the region’s dynamics. The conflict remains unresolved, and the quest for a peaceful solution persists, underlining the importance of diplomacy, compromise, and efforts to address the root causes of the conflict. More about the subject on Wikipedia.
In calculus, limits are a very important concept. If you are learning mathematics or seeking a good career in this field, you also need to master limits. Basically, the concept of limits forms a strong basis for differentiation, integration, and geometrical calculations. This is why it has many types that are divided across multiple areas of mathematics. Along with that, there is a smart limit calculator with steps, developed by calculatored.com, to simplify limits. You may not believe it, but this online tool turns seemingly impossible limit calculations into workable results. What Is a Limit? The value that a function approaches as its input approaches a given point is called the limit. For a function f and real number c, the limit is written lim x→c f(x) = L, which says that f(x) approaches the value L as x approaches c. The online limit calculator also makes use of the same definition to calculate limits in a matter of seconds. Properties of Limits: The standard limit laws help in evaluating limits: the limit of a sum is the sum of the limits, the limit of a product is the product of the limits, the limit of a quotient is the quotient of the limits provided the limit of the denominator is not zero, and a constant factor can be moved outside a limit. In the following section of the article, we will work through an example to clarify your concept of limits in more detail. You may also verify the example by using the smart limit calculator. >> Calculate the limit of the following function when x approaches 1. f(x) = (x^2 – 1)/(x – 1) The given function is f(x) = (x^2 – 1)/(x – 1). If we directly put the value x = 1 into the function, we get the indeterminate form 0/0. So we instead factor the numerator: f(x) = (x – 1)(x + 1)/(x – 1). Cancelling the common factor (x – 1) and simplifying the expression, we get f(x) = x + 1. Substituting x = 1 gives the limit 1 + 1 = 2. How Can You Use the Limit Calculator? The limit calculator by calculatored.com is a gem in itself. This online tool has opened a gateway to speedy and accurate limit calculations, and it comes with a complete content guide for using the tool and calculating limits manually. If you want to use the limit solver, follow these steps: - Open the limit calculator on calculatored.com - Enter the function with its lower and upper bounds - Tap calculate and get answers in seconds Limit Relation With Integrals: The idea of the limit of a net broadens the definition of the limit of a sequence and relates it to the limit and direct limit in category theory. In general, there are two sorts of integrals: definite integrals and indefinite integrals. Definite integrals have correctly specified upper and lower limits, whereas indefinite integrals are written without bounds and carry an arbitrary constant of integration. Wrapping It Up: In this blog, we had an overview of limits and the ways you can calculate them. If you have trouble calculating limits, try a limit calculator and say goodbye to your tension.
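To connect the worked example above with the kind of computation a limit solver performs, here is a minimal sketch in Python using the SymPy library. This is only an illustration of the mathematics; it is an assumption on my part and does not describe how the calculatored.com tool is actually implemented.

```python
# A minimal sketch of computing the example limit symbolically with SymPy.
# This is only an illustration; it does not reflect how the online calculator
# is actually implemented.
from sympy import symbols, limit, simplify

x = symbols('x')
f = (x**2 - 1) / (x - 1)

# Direct substitution at x = 1 gives the indeterminate form 0/0,
# but the limit itself is well defined.
print(simplify(f))        # x + 1 after cancelling the common factor
print(limit(f, x, 1))     # 2
```

Running it prints the simplified form x + 1 and the limit value 2, matching the hand calculation above.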
The latest World Meteorological Organisation evidence on CO2 and methane concentrations in the earth’s atmosphere is reported in this BBC article. The 3.3 ppm rise in CO2 in 2016 was unexpectedly rapid, higher than the previous record rise of 2.7 ppm in 1997-98. “Since 1990 there has been a 40% increase in total radiative forcing, that’s the warming effect on our climate of all greenhouse gases. The last time the Earth experienced a comparable concentration of CO2 was three to five million years ago, in the mid-Pliocene era. The climate then was 2-3C warmer, and sea levels were 10-20m higher due to the melting of Greenland and the West Antarctic ice sheets”. These results mean that the assumptions underlying the Paris Accord on Climate Change will need to be reassessed as being too modest in their proposed guidance on reducing emissions. Further concerns are being raised about the unexpected rise in methane concentrations, particularly in the tropics and sub-tropics, which do not seem to be directly connected to emissions originating from human activity. “The WMO report has been issued just a week ahead of the next instalment of UN climate talks, in Bonn. Despite the declaration by President Trump that he intends to take the US out of the deal, negotiators meeting in Germany will be aiming to advance and clarify the rulebook of the Paris agreement”. Unequivocal evidence – from the US climate assessment report – “If America’s leaders don’t start listening to scientists, the whole world is going to pay a truly terrible price.” The scientists’ predictions include: - A global sea level rise of up to 8ft (2.4 metres) cannot be ruled out by the end of the century - Risks of drought and flooding will increase - There will be more frequent wildfires and devastating storms Running to nearly 500 pages, the report concludes that the current period is “now the warmest in the history of modern civilisation”.
Franck and Hertz directed low-energy electrons through a gas enclosed in an electron tube. As the energy of the electrons was slowly increased, a certain critical electron energy was reached at which the electron stream made a change from almost undisturbed passage through the gas to nearly complete stoppage. The gas atoms were able to absorb the energy of the electrons only when it reached a certain critical value, indicating that within the gas atoms themselves the atomic electrons make an abrupt transition to a discrete higher energy level. As long as the bombarding electrons have less than this discrete amount of energy, no transition is possible and no energy is absorbed from the stream of electrons. When they have this precise energy, they lose it all at once in collisions with atomic electrons, which store the energy by being promoted to a higher energy level. The Franck-Hertz experiment gave proof of Niels Bohr’s theory that an atom can absorb internal energy only in precise and definite amounts, or quanta. Their experiments showed that when an electron strikes an atom of mercury vapour, the electron must possess a certain energy (4.9 electronvolts [eV], in this case) in order for that energy to be absorbed by the atom; this level of energy varies for different elements.
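As a quick aside (my own back-of-the-envelope check, not part of the original article): the 4.9 eV mercury transition mentioned above corresponds to the well-known ultraviolet emission line near 254 nm, which follows from E = hc/λ.

```python
# Back-of-the-envelope check: the photon emitted when a mercury atom
# de-excites from the 4.9 eV level has a wavelength of roughly 254 nm (UV).
h = 6.626e-34      # Planck constant, J*s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electronvolt

E = 4.9 * eV                 # excitation energy observed by Franck and Hertz
wavelength = h * c / E       # lambda = h*c / E
print(f"{wavelength * 1e9:.0f} nm")   # ~253 nm
```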
By peering into the haze of Saturn's huge moon Titan, scientists could learn more about how the atmospheres of alien worlds behave. A team of scientists has recently developed a technique to help unscramble the messages encoded in the light signatures of exoplanets located light-years from Earth, but wanted to test it out a little closer to home first. The scientists used data collected from 2006 to 2011 by the Cassini spacecraft in the Saturn system to examine Titan's hazy atmosphere and learn more about how it works. Scientists have known for a while that many exoplanets probably have atmospheres obscured by haze and smog. That haze creates a problem for astronomers wanting to study those far-away worlds. Just as scientists try to understand the properties of stars by analyzing the light they emit, scientists study alien worlds by observing what happens to light as it passes through those planets' atmospheres. Scientists wait until the exoplanet passes, or transits, in front of its star, then gather the light with a telescope and use a prism to separate the light into its individual wavelengths, in the process gathering information about the planet's atmosphere, including its temperature, structure and overall composition. "It turns out that there's a lot you can learn from looking at a sunset," said Tyler Robinson in a NASA statement. Robinson, a postdoctoral research fellow at NASA's Ames Research Center in California, led the team that produced the results. But if the exoplanet's atmosphere is hazy, the measurements gathered might not accurately reflect what the atmosphere is really like. For a while, scientists thought that the haze would affect atmosphere measurements, but modeling the effects in their calculations would be complicated and take a lot of computing power. To make the calculations simpler, astronomers downplayed the haze's effect. Scientists realized, though, that they could learn more about how the haze behaves by studying smoggy worlds in our own solar system; they therefore turned their attention to Titan. "Their analysis provided results that include the complex effects due to hazes, which can now be compared to exoplanet models and observations," according to a NASA statement. Specifically, the analysis revealed that scientists might be able to get reliable information about only the upper reaches of an exoplanet's atmosphere. For Titan, that region corresponds to 90 to 190 miles (150 to 300 kilometers) above the moon's surface. The team also found that haze affected short wavelengths of light more than other wavelengths. Scientists had previously believed that haze affected all wavelengths similarly. The technique devised by Robinson and his team can be applied to measurements taken of any extraterrestrial world, including planets like Mars and Saturn. "Ty's results are proof of concept that you can remotely detect features of molecules on distant, cold worlds," Sarah Ballard, the NASA Carl Sagan Fellow in Astronomy at the University of Washington, said. "I feel a renewed hope that mankind is one step closer to detecting the signatures of faraway life."
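To illustrate the general idea of transit spectroscopy described above (this is a generic sketch, not the method Robinson's team used), the fraction of starlight blocked during a transit scales as the square of the planet's apparent radius divided by the star's radius; haze that is opaque at shorter wavelengths raises that apparent radius and therefore the measured depth. All numbers below are invented purely for illustration.

```python
# Illustrative only: transit depth = (effective planet radius / star radius)^2.
# A haze layer that stays opaque higher up at short wavelengths makes the
# planet look bigger there, which is the effect the Titan study quantified.
# All numbers here are invented for the sake of the example.

R_star = 6.96e8          # radius of a Sun-like star, m
R_planet = 6.37e6        # radius of an Earth-sized planet, m

def transit_depth(extra_atmosphere_m):
    r_eff = R_planet + extra_atmosphere_m
    return (r_eff / R_star) ** 2

for wavelength_nm, haze_height_m in [(400, 3.0e5), (700, 1.5e5), (1000, 0.5e5)]:
    depth = transit_depth(haze_height_m)
    print(f"{wavelength_nm} nm: depth = {depth:.2e}")
```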
In simple terms, the Quantitative Revolution can be defined as the diffusion of statistical techniques in geography to make the subject and its theories more precise. Geography can be regarded as a science concerned with the rational development, and testing, of theories that explain and predict the spatial distribution and location of various characteristics on the surface of the earth. In order to achieve this newly defined objective of geography and to ascertain the real picture of a region, geographers started using and applying quantitative tools and techniques, and quantitative geography emerged, especially during the 1960s. After this revolution, quantitative techniques and general systems theory have been used extensively in geography. It is after the quantitative revolution that geographers started concentrating more on field studies generating primary data, utilizing secondary data, and applying sampling techniques. The old methods of induction, which necessitated much routine plodding, can now be replaced by the testing of hypotheses, laws and models. Many of these theories were borrowed from other branches of the social and physical sciences. For example, location theory was borrowed from economics: Weber's study of industrial location (1909) is an economic theory. Similarly, the crop intensity model of J.H. von Thünen (1826) is an economic theory. Hoover's, Lösch's and Isard's models were also borrowed by geographers from economics. Christaller applied quantitative techniques in his study of the central places of Southern Germany. The interaction model was directly related to Newton's law of gravitation. Advantages of Quantitative Techniques - All these techniques are firmly based on empirical observations and are readily verifiable. - They help in reducing a multitude of observations to a manageable number of factors. - They allow the formulation of structured ideas and theories which can be tested under the assumed conditions. - They help in deriving suitable models to understand the interaction of the evolved factors and their processes within the models and with reference to observed facts. - They help in identifying tendencies and desired trends, laws and theoretical concepts. Disadvantages of Quantitative Techniques - The theories and models based on empirical data do not take into account normative questions like beliefs, emotions, attitudes, desires, hopes and fears and therefore cannot be taken as tools explaining exact geographical realities. - The preachers of quantitative techniques have sacrificed many good qualities of qualitative statements which are even now quite useful. - It has been observed that generalization done with these techniques brings quite exaggerated results. - They demand sophisticated data which are rarely attainable outside the developed world. - Factorial designs depend on the use of costly computer time and considerable financial assistance, which are rarely available to the individual researcher of areal variation. - Data generated and theories formulated are generally tested in developed countries and so cannot become truly representative of all countries.
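Since the article notes that the interaction model was borrowed from Newton's law of gravitation, a minimal sketch of that gravity model of spatial interaction may help; the populations, distance and constant below are hypothetical, chosen only for illustration.

```python
# Gravity model of spatial interaction: flows between places are proportional
# to the product of their populations and inversely proportional to the square
# of the distance separating them (by analogy with Newton's law of gravitation).
# Populations, distances and the constant k are hypothetical.

def interaction(pop_i, pop_j, distance_km, k=1.0):
    return k * pop_i * pop_j / distance_km ** 2

city_a = {"name": "City A", "pop": 500_000}
city_b = {"name": "City B", "pop": 200_000}

print(interaction(city_a["pop"], city_b["pop"], distance_km=150))
```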
In seventh grade, emphasis is on developing skills such as: critical thinking and reading, study skills, independent learning, creative problem solving, and careful listening and speaking. Students develop time and materials management through maintaining organized binders, moving from classroom to classroom, balancing multiple homework assignments, taking timed tests, and working on long-term projects. In their interactions with faculty and their peers, students are expected to conduct themselves with honesty and respect. Seventh graders are encouraged to seek help and take ownership of their learning whenever possible. - Review operations with fractions, decimals, and percents - Become efficient with solving and checking multi-step equations - Solve and graph one-variable inequalities - Explore rates, ratios, percents, and probability - Learn rules of factoring, exponents, and square roots - Graph linear functions; find slope and intercepts - Introduce operations with polynomials - Develop logical reasoning skills - Improve reading skills by using active reading strategies - Develop the writing process: planning and prewriting, drafting, revising, editing, and proofreading - Explore grammar and mechanics, appropriate style, principles of organization, word choice, use of evidence and vivid language, and sentence fluency - Facilitate literal, figurative and abstract thinking - Encourage confidence and clarity in expressing and supporting opinions - Use the scientific method to properly conduct accurately designed experiments - Promote a feeling of responsibility for the natural environment and know how human activity can impact it. - Understand basic cell biology including the kingdoms of life, and the role of each in ecosystems. - Conceptualize basic genetics, genetic engineering, and inheritance through laboratory investigations involving plant husbandry and crosses. - PIP (Personal Interest Project) student-centered research project culminating in research paper written in scientific format, and formal oral presentation to peers. - Investigate and understand the human body systems including: the skeletal system, muscular system, integumentary system, circulatory system, respiratory system, digestive system, excretory system, nervous system, endocrine system, and reproductive system. - Believe that scientific inquiry is a dynamic, enjoyable process and appreciate the importance of being curious in everyday life. 
US history covering the settling of Jamestown through the Civil War Reading critically to discern information that is significant to the focus of study Research and documentation skills honed Analytical writing introduced: skill of comparing/contrasting facts and points of view Note-taking skills expanded: variety of note-taking forms introduced Study, organization and time-management skills are discussed and implemented - discover organizational tools suited to personal learning style Public speaking and presentation skills are a continued focus Debate and reasoning skills are developed, using evidence to back up assertions Class discussions: ability to voice individual perspective, hear others’ perspectives and develop essential questions to propel the level of discussion/inquiry Assessments: variety of forms are designed to allow students to express their understanding in multiple formats Think critically and apply this understanding both verbally and in written form Class is taught mostly in Spanish; students are encouraged to speak Spanish - Grammar concepts covered include: descriptive adjectives and adjective agreement, possessive adjectives, present tense of regular verbs, present tense of stem-changing verbs, verbs with irregular forms, ser and estar, direct object nouns and pronouns - Expand students’ vocabulary - Encourage oral practice of the language through group and paired activities - Develop students’ writing abilities through short writing assignments - Explore Spanish and Hispanic culture The coding curriculum is currently being updated. The following description is provided as a glimpse of the coding skills that our upper school students have been learning. Students will work on a comprehensive website using the skills that they learned and with attention to: appearance, content, functionality, and usability. Students should continue to develop their website throughout the trimester. This website will be presented as part of their final project at the end of the year. Project benchmarks will be due throughout the trimester to help students manage the development of their websites. The programming language Python has become the language of choice for many programmers for a wide variety of projects ranging from creating video games to automating tasks to powering sites like YouTube, Google, Yahoo Maps and many others. During the trimester, students will create games, apps, and dynamic graphics while learning Python. As part of their final project, students will create their own product (application, game, etc.) using Python, which they will then incorporate into their websites. These websites will be due by and presented during their final exam time slot. Grading of this final project will be based on a rubric that will be available to students when they start to build their websites. Students in this class may work at their own pace to complete a series of lessons which will be posted online and accessible to the student on any computer with an Internet connection. The lessons will be largely project-based and will challenge the students to solve problems independently and think analytically. During significant benchmarks throughout the course, students will take tests to demonstrate their knowledge of the specific topic/s that they are learning. Tower’s Arts Block program provides upper school students with opportunities in technology, visual arts and musical arts classes. 
Recent Arts Block trimester offerings include: handbells, photography, woodworking, oil painting, technology, jazz band, marimbas, clay, sculpture, video, chorus, part singing, unaccompanied voice work, multi-cultural works, a cappella and folk groups. - Develop relevant skills connected with the sport played - Learn and enjoy the benefits of physical activity - Develop a high level of sportsmanship and school spirit - Develop leadership skills and an appreciation for the value of teamwork - Seek personal levels of excellence - Experience playing on a competitive team - Identify the correlation between hard work and improvement The drama program offers opportunities for burgeoning actors and crew to spread their theatrical wings. Under the guidance of the drama directors, students are responsible for all aspects of the production; acting, singing, dancing, costume design, set design, stage tech, and booth tech. Our goal is for students to develop confidence in themselves, while challenging themselves to reach in new directions. Our productions are designed to highlight our upper school students' talents in order to create a show that our entire school community can enjoy. Our [oldest] came to Tower in 7th grade. It was a smooth transition–not only was it welcoming as a parent, it was welcoming as a student. We found the academics engaging and challenging. The time Tower spends on study skills, self-advocacy skills and confidence has been remarkable for our son! JUSTIN AND ANDREA DEAN, PARENTS OF JUSTIN '17, MATTHEW '19 AND VICTORIA '25
The peopling of the Americas: Genetic ancestry influences health At one time or another most of us wonder where we came from, where our parents or grandparents and their parents came from. Did our ancestors come from Europe or Asia? As curious as we are about our ancestors, for practical purposes, we need to think about the ancestry of our genes, according to Cecil Lewis, assistant professor of anthropology at the University of Oklahoma. Lewis says our genetic ancestry influences the genetic traits that predispose us to risk or resistance to disease. Lewis studies genetic variation in populations to learn about the peopling of the Americas, but his studies also have an impact on genetic-related disease research. Some 15,000-18,000 years ago, people came from Asia through the Bering Strait and began to fill the American continents. The Americas were the last continents to be populated, so Lewis wants to understand how this process happened. His recent study focuses on South America and asks what part of the subcontinent has the most genetic diversity. A complete understanding of this research depends on a very important population genetic process called the "founder effect." The geographic region with the most genetic diversity is characteristic of the initial or "parent" population. For example, a group of people leave a parent population and become founders of a new daughter population in an uninhabited geographic region. They typically take with them only a small set of the parent population's genetic diversity. This is called a founder effect. The world pattern of founder effects in human populations begins in Africa. The genetic diversity in the Middle East is largely a subset of the genetic diversity in Africa. Similarly, the genetic diversity in Europe and Asia is largely a subset of the genetic diversity in Africa and the Middle East. The genetic diversity of the Americas is largely a subset of that in Asia. As a result, DNA tells a story about human origins, which began in Africa and spread throughout the world. Lewis is interested in the founder effects within the Americas, with a particular focus on South America. At the outset, Lewis expected western South America to have a more diverse population than eastern South America because most anthropologists believe South America to have been peopled from west to east. Unexpectedly, the genetic data from the Lewis study was not consistent with this idea. In this new study, Lewis looked at more than 600 independent genetic markers called short-tandem repeats. These markers were dispersed throughout the human genome. They were initially published by Lewis and his colleagues in a large-scale collaboration; the dataset is the largest survey of Native American genetic diversity today. Surprisingly, genetic analysis of these data estimated more genetic diversity in eastern than western South America. This was not the first time Lewis observed this pattern. Lewis first observed this pattern in 2007 with his post-doctoral advisor using a more limited genetic dataset. The fact that the new genome-wide dataset provided a similar result was surprising; this result questions the most widely accepted scenario for the peopling of South America. While the focus of this study was South America, a similar and interesting pattern of genetic diversity emerged in North America. The pattern suggests another major founder effect in North America, but after the initial founder effect from Asia.
A founder effect provides an opportunity for dramatic changes in the frequencies of genetic traits. Genetic alleles or traits may be rare in a parent population, but because of the founder effect they can become common or even lost in the daughter population. These traits include those that may predispose for a risk or resistance to disease. "We cannot assume that all Native American populations will have similar trait frequencies nor will they have similar expectations for genetic risk or resistance to disease. Rather, the history of founder effects in the Americas, and around the world, contributes to the understanding of how well one local population might reflect a broader community," says Lewis. A paper by Lewis on this subject is currently available on "Early View" in the scientific journal American Journal of Physical Anthropology. Source: University of Oklahoma
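To make the founder effect described above concrete, here is a small toy simulation (my own illustration, not taken from the Lewis study): drawing a handful of founders at random from a large parent population shows how sharply an allele's frequency can shift, sometimes vanishing and sometimes becoming far more common.

```python
# Toy illustration of a founder effect (not from the Lewis study):
# sample a small founding group from a large parent population and compare
# allele frequencies before and after. Numbers are arbitrary.
import random

random.seed(1)

PARENT_SIZE = 10_000
PARENT_FREQ = 0.05          # a variant carried by 5% of the parent population
FOUNDERS = 30               # size of the founding group

parent = [1] * int(PARENT_SIZE * PARENT_FREQ) + [0] * int(PARENT_SIZE * (1 - PARENT_FREQ))
founders = random.sample(parent, FOUNDERS)

print("parent frequency:  ", sum(parent) / len(parent))
print("founder frequency: ", sum(founders) / len(founders))
# Re-running with different seeds shows the variant sometimes vanishing
# entirely and sometimes becoming several times more common.
```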
Telling time first grade: time activities, time centers, and many more fun ideas to teach telling time to the hour and time to the half hour in fun, hands-on ways! Track the weather, then compare how the weather changed over the duration of the unit; this is a neat way to chart the weather during circle time. For practice telling time at home, have students record the time they do a special or unique activity on an analog clock and figure out the duration. Examples could include basketball practice, dance class, watching their favorite TV show or eating a banana. They record the times their three activities start and end on the printable below. Telling time in first grade: have students play in partners, each with their own clock mat and color of dough. They both start at midnight with their dough hands pointing at the 12 on their mat. They take turns rolling dice that have them go forward 1 or 2 hours, or back 1 hour. First grade time lesson plans: use a story and activities to help students gain a better understanding of time, such as a lesson on telling time with The Grouchy Ladybug. For this first grade math worksheet, kids practice telling time by reading digital time and converting it to analog by drawing hands on the face of a clock. Elapsed Time Using Two Clocks worksheets: you may select 1 hour, 30, 20, 15, 10, 5, or 1 minute increments for the starting Clock A, and you may select the elapsed time to change the hours and minutes, just the hours, or just the minutes. Elapsed Time worksheets: create a number line to figure out the answers to elapsed time math problems. The problems on this worksheet require students to regroup minutes (example: 1 hour 80 minutes = 2 hours 20 minutes). Studyladder is an online English literacy and mathematics learning tool with kids' activity games, worksheets and lesson plans for primary and junior high school students. Estimate the duration of time (Mathematics, Grade 1, Time): estimate the most likely duration for an event. First grade interactive math skill builders include a Quia quiz designed for first/second grade practice with time problems using real-life routines and hours of the day. Exploring time in kindergarten: explore how long particular activities take and compare the duration of two activities in a variety of ways. Set the analog clock to the first time you have agreed on (for example, "Get out of bed, 7:00"), then glue the hands on the paper clock to represent 7:00.
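For the elapsed-time worksheets described above, the regrouping idea (for example, 1 hour 80 minutes = 2 hours 20 minutes) can be checked with a short script; the start and end times below are made-up examples of the "record when an activity starts and ends" exercise.

```python
# Check elapsed-time answers from the worksheets. The start/end times are
# invented examples of the "record when an activity starts and ends" exercise.
from datetime import datetime

def elapsed(start, end, fmt="%I:%M %p"):
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    hours, remainder = divmod(int(delta.total_seconds()), 3600)
    minutes = remainder // 60
    return hours, minutes

print(elapsed("3:30 PM", "5:10 PM"))   # (1, 40) -> 1 hour 40 minutes
print(elapsed("7:00 AM", "9:20 AM"))   # (2, 20) -> regrouping check
```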
Noxious weeds or nonnative invasive plants may appear pretty and harmless, but they pose serious environmental threats across the country and in America’s Crossroads, one of six designated regions in the NWTF’s America’s Big Six of Wildlife Conservation plan. So what is an invasive plant species? It is defined as a species that is nonnative (or alien) to the ecosystem under consideration, and whose introduction causes or is likely to cause economic or environmental harm or harm to human health. What makes nonnative invasive plants so successful? For starters, nonnative invasive plants produce large quantities of seed and thrive in disturbed soil. Invasive plant seeds are often distributed by birds, wind or people unknowingly transferring them. Some nonnative invasive species have aggressive root systems that spread long distances from a single plant. Those root systems often grow so densely that they smother the root systems of surrounding vegetation, choking out the competition. And some nonnative plants produce chemicals in their leaves or root systems which inhibit the growth of the native plants around them. WHAT ARE THE IMPACTS OF INVASIVE PLANTS? According to the USDA Forest Service, nonnative invasive species have contributed to the decline of 42 percent of U.S. endangered and threatened species; and for 18 percent of these species, nonnative invasive species are the main cause of their decline. For example, nonnative invasive species compete directly with native species for moisture, sunlight, nutrients and space, leading to a decrease in overall plant diversity. That reduction in diversity and the establishment and spread of nonnative invasive species degrade wildlife habitat by creating single stands of nonnative vegetation and can ultimately reduce water quantity and quality and increase soil erosion. Tree of Heaven Appearance: Tree of Heaven (Ailanthus altissima) is a rapidly growing, typically small tree up to 80 feet in height and 6 feet in diameter. It has large leaf scars on the twigs. Foliage: The leaves are 1-4 feet in length with 10-41 leaflets. Tree of Heaven resembles native sumac and hickory species, but the notched base on each leaflet easily distinguishes it. Flowers: Male and female flowers occur on separate plants, and flowering occurs in early summer when large clusters of yellow flowers develop above the foliage. Fruit: Fruits produced on female plants are tan to reddish, single-winged and can be wind- or water-dispersed. Ecological Threat: Tree of Heaven forms dense thickets that displace native species and can rapidly invade fields, meadows and harvested forests. This invasive tree is extremely tolerant of poor soil conditions and can even grow in cement cracks. It is not shade tolerant, but easily invades disturbed forests or forest edges, causing habitat damage. Introduced as an ornamental, it was widely planted in cities because of its ability to grow in poor conditions. Management and control efforts for this species continue across the U.S. at great economic cost. Appearance: Garlic Mustard (Alliaria petiolata) is an herbaceous, biennial forb. First-year plants are basal rosettes that bolt and flower in the second year. It is recognized by a garlic odor that is present when any part of the plant is crushed. Foliage: First-year rosettes have green, heart-shaped, 1- to 6-inch leaves. Foliage becomes more triangular and strongly toothed as the plant matures. Flowers: Second-year plants produce a 1- to 4-foot flowering stalk.
Each flower has four small, white petals in the early spring. Fruit: Mature seeds are shiny black and produced in erect, slender green pods that turn pale brown when mature. Ecological Threat: Garlic Mustard is an aggressive invader of wooded areas throughout the eastern and central U.S. A high shade tolerance allows this plant to invade quality, mature woodlands, where it can form dense stands. These stands not only shade out native understory vegetation but also produce compounds that inhibit seed germination of other species. It is native to Europe and was first introduced during the 1800s for medicinal and culinary purposes. Appearance: Multiflora Rose (Rosa multiflora) is a multi-stemmed, thorny, perennial shrub that grows up to 15 feet tall. The stems are green to red arching canes that are round in cross section and have stiff, curved thorns. Foliage: Leaves grow on each side of a stem with seven to nine leaflets. Leaflets are oblong, 1-1.5 inches long and have serrated edges. Its fringed stalks usually distinguish it from most other rose species. Flowers: Small, white to pinkish, five-petaled flowers occur abundantly in clusters on the plant in the spring. Fruit: Small, red rose hips that remain on the plant throughout the winter. Birds and other wildlife eat the fruit and disperse the seeds. Ecological Threat: Multiflora Rose forms impenetrable thickets in pastures, fields and forest edges. It restricts human, livestock and wildlife movement and displaces native vegetation. It tolerates a wide range of conditions, allowing it to invade habitats across the country. Native to Asia and first introduced to North America in 1866 as rootstock for ornamental roses. During the mid-1900s, it was widely planted as a “living fence” for livestock control. Appearance: Leafy Spurge (Euphorbia esula) is an erect, perennial, herbaceous plant that grows from 2-3.5 feet tall. The stem is smooth and bluish-green. The plant produces a milky sap if stem is broken or a leaf is removed. Foliage: Leaves are lance shaped, smooth and 1-4 inches. They are arranged alternately along the stem, becoming shorter and more oval-shaped toward the top of the stem. Flowers: Yellow flowers develop in clusters at the apex of the plant in June. Fruit: Fruits are three lobed capsules that explode when mature, propelling brown mottled, egg-shaped seeds up to 15 feet away. Ecological Threat: Large infestations of leafy spurge give the landscape a yellowish tinge due to the yellow bracts. It invades prairies, pastures and other open areas. It is a major pest of national parks and nature preserves in the West. It can completely overtake large areas of land and displace native vegetation. This plant is native to Europe and was introduced accidentally into North America in the early 1800s as a seed contaminate. — Mark Hatfield, NWTF director of conservation administration
When a sound wave comes into contact with an absorbent material, one portion of the energy is reflected, another is absorbed into the material and a third passes through it. The sound absorption coefficient is the ratio of absorbed sound energy to incident sound energy and depends on the frequency. What is an acoustic panel? Acoustic panels correct the sound reverberation inside a closed space. In addition to their remarkable ability to absorb noise pollution, they combine both function and style! Sound absorption refers to when a sound wave touches a soft, malleable or porous material and is completely or partially absorbed. The absorbed sound energy is transformed into heat. If the sound wave touches a hard object and is reflected, this is sound reflection. Sound absorption influences how sound is perceived by users of a room. It controls the ambient noise level, prevents echo and its aftereffects, and increases speech intelligibility. The quality of a room’s sound absorption is determined by how the room is configured and the types of materials in it. Sound absorption prevents reverberation and echo from being produced. Reverberation and echo are by-products of the reflection of sound, at repeated intervals, off of surfaces and objects in the room. The reverberation time of a room is the time required for the sound pressure to decay to one one-thousandth of its initial value. This is equivalent to a 60 dB reduction in sound volume. The echo effect depends on the absorption surface of the room and on the volume, and can vary according to the frequency. Perception of acoustic comfort in a closed space is directly related to a standardized rate of reverberation based on how the room is used. Reverberation time is counted in seconds. The Sabine absorption coefficient is used for absorbed reverberation. In informal situations, speakers can be a distraction to each other as the sound volume is the same throughout the room. To be heard, each person starts to speak louder and louder until everyone is yelling. This is the "Cocktail Effect", the results of which can generate sound levels over 90 dBA, especially in restaurants and daycare centres. This level of sound has been observed and documented to cause irreversible hearing damage to individuals who are regularly exposed to it. The clarity of a voice or of speech is defined by the quality of the words transferred to the listeners. In a reverberating room with distracting background noise, it can be difficult to hear what is said. Loudness refers to the volume at which we perceive a sound. In a reverberant room, the perceived noise is much higher than in a room with sufficient absorption. A sound level decreases as the distance from the source of the sound increases. The design of the room and absorbent materials (shape, furnishings, surface finishes, etc.) influences the speed at which the sound level decreases with distance from the source. AUDITORY PERCEPTION – EAR SENSITIVITY – HEARING DAMAGE Hearing sensitivity varies from one person to the next. It also depends on the ranges of frequency being considered. The ear has been shown to be most sensitive to frequencies between 250 and 4,000 Hz, which is the range in which we find the sounds in our everyday lives and in language. At an equal volume, a very low-pitched sound will seem quieter than a medium-pitched sound.
When an individual begins to experience hearing impairment, the first symptoms are trouble hearing high-pitched sounds and difficulty hearing conversations in noisy environments, followed by difficulty hearing in all circumstances. Prolonged exposure to noise can cause irreversible hearing damage. New legislation has been adopted in recent years to prescribe a maximum sound level for daily exposure. Previously 90 dBA, the limit is now 85 dBA for an eight-hour period. These levels are within the audible frequency range. The prescribed levels for infrasound and ultrasound frequencies have not yet been clearly established. For comparison, the maximum allowable level for certain concerts and clubs is set at 105 dBA, which would not be permitted for longer than 3 minutes in the workplace. Even if a sound level is not sufficient to create hearing damage, there is still a risk of stress induced by noise exposure. This stress can lead to sleep problems, anxiety and even cardiovascular problems. Sound-absorbing products are made from porous materials and have technical characteristics that allow them to absorb sound waves, correcting internal acoustic issues such as reverberation. Panels made from compressed fibreglass, compressed volcanic mineral wool and various acoustic foams are high-performance products for closed spaces of any dimension. They can also be customized to integrate into any décor. They are an effective solution for offices, conference rooms, banks, company lobbies, movie theatres, concert venues and restaurants. Baffles are high-performance suspended panels, each with its own suspension system, used in large spaces such as gymnasiums, arenas, churches, industrial factories, mall food courts and any other medium or large enclosed space. Sound-absorbing materials must be directly exposed to the acoustic radiation. They can be secured or held in place on surfaces or structures, for example in perforated trays or on expanded metal.
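As a rough illustration of the reverberation-time concept discussed above, the Sabine formula estimates RT60 (the time for sound to decay by 60 dB) from the room volume and the total sound absorption; the room dimensions and absorption coefficients in this sketch are hypothetical, not measurements from any real room.

```python
# Sabine estimate of reverberation time (RT60): the time for sound to decay
# by 60 dB. Room dimensions and absorption coefficients are hypothetical.
ROOM = {"length": 10.0, "width": 7.0, "height": 3.0}   # metres

volume = ROOM["length"] * ROOM["width"] * ROOM["height"]

# (surface area in m^2, absorption coefficient at some mid frequency)
surfaces = [
    (70.0, 0.02),   # concrete floor
    (70.0, 0.60),   # acoustic ceiling panels
    (102.0, 0.05),  # painted walls
]

absorption = sum(area * alpha for area, alpha in surfaces)   # m^2 sabins
rt60 = 0.161 * volume / absorption
print(f"RT60 = {rt60:.2f} s")   # roughly 0.7 s for these assumed values
```

Adding more absorbent surface area (for example, replacing the painted walls with panels) shrinks RT60, which is exactly what the acoustic treatments described above are meant to do.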
Help your kids develop their counting skills by giving them these free, printable number worksheets 1-100! With a bunch of interesting number exercises, these fill-in-the-blank worksheets for numbers 1-100 will help your kids practice their counting. These number worksheets contain many kinds of number 1-100 exercises that you can choose for your kids to do. Scroll down to check out the worksheets! In learning numbers, students need media they can use to help them memorize the numbers and their sequence. By working on number worksheets which provide many exercises dealing with number sequences, children will be indirectly assisted in memorizing the sequence of numbers 1 to 100 correctly. There are many other kinds of number worksheets 1-100 that you can save and print to assist your children in learning numbers from 1 to 100. These worksheets were specially designed with adorable pictures that will make your children interested in learning. Moreover, these worksheets will also help your children understand more about numbers, their forms, and their sequence. Simply click on a picture to enlarge and save it. Considering the grade and skill of your children, print these number charts as your children's media for developing their knowledge of numbers! Happy learning, and see you in the next post!
A new technique for analysing satellite images may help scientists detect and count stranded whales from space. Researchers recently tested a new detection method using Very High Resolution (VHR) satellite images to investigate the largest mass stranding of baleen whales yet recorded. The study, published this week in the journal PLoS ONE by scientists from British Antarctic Survey and four Chilean marine institutes, could revolutionise how stranded whales – which are dead in the water or beached – are detected in extremely remote locations. In 2015, over 340 baleen whales were involved in a mass stranding in a remote region of Chilean Patagonia. The stranding was not discovered for several weeks due to the remoteness of the region. Aerial and boat surveys assessed the extent of the mortality, but only several months after discovery. The new research studied satellite images covering thousands of kilometres of coastline, which provided a clearer insight into the extent of the mortality. They could identify the shape, size and colour of the whales, especially when the animals turned pink and orange as they decomposed. “The causes of marine mammal strandings are poorly understood and therefore information gathered helps understand how these events may be influenced by overall health, diet, environmental pollution, regional oceanography, social structures and climate change,” says Dr Jennifer Jackson, a whale biologist at British Antarctic Survey. “As this new technology develops, we hope it will become a useful tool for obtaining real-time information, and allow local authorities to intervene earlier to possibly help with conservation efforts.”
As we’ve reported recently, the likelihood of finding habitable Earth-sized worlds just seems to keep getting better and better. But now the latest calculations from a new paper out this week are almost mind-bending. Using what the authors call a “very careful extrapolation” of the rate of small planets observed around M dwarf stars by the Kepler spacecraft, they estimate there could be upwards of 100 billion Earth-sized worlds in the habitable zones of M dwarf or red dwarf stars in our galaxy. And since the population of these stars themselves is estimated to be around 100 billion in the Milky Way, that’s – on average – an Earth-sized world for every red dwarf star in our galaxy. And since our solar system is surrounded by red dwarfs – very cool, very dim stars not visible to the naked eye (less than a thousandth the brightness of the Sun) – these worlds could be close by, perhaps as close as 7 light-years away. With the help of astronomer Darin Ragozzine, a postdoctoral associate at the University of Florida who works with the Kepler mission (see our Hangout interview with him last year), let’s take a look back at the recent findings that brought about this latest stunning projection. Back in February, we reported on the findings from Courtney Dressing and Dave Charbonneau from the Center for Astrophysics that said about 6% of red dwarf stars could host Earth-size habitable planets. But since then, Dressing and Charbonneau realized they had a bug in their code and they have revised the frequency to 15%, not 6%. That more than doubles the estimates. Then, just this week we reported how Ravi Kopparapu at Penn State University and the Virtual Planetary Lab at University of Washington suggested that the habitable zone around stars should be redefined, based on new, more precise data that puts the habitable zones farther away from the stars than previously thought. Applying the new habitable zone to red dwarfs pushes the fraction of red dwarfs having habitable planets closer to 50%. But now, the new paper submitted to arXiv this week, “The Radius Distribution of Small Planets Around Cool Stars” by Tim Morton and Jonathan Swift (a grad student and postdoc from Caltech’s ExoLab), finds there is an additional correction to the Dressing and Charbonneau numbers. “This is basically due to the fact that there are more small planets than we thought because Kepler isn’t yet sensitive to a large number that take longer to orbit,” Ragozzine told Universe Today. “Accounting for this effect and enhancing the calculation using some nice new statistical techniques, they estimate that the Dressing and Charbonneau numbers are actually too small by a factor of 2. This puts the number at 30% in the old habitable zone, and now up to about 100% in the new habitable zone.” Now, it is important to point out a few things about this. As Morton noted in an email to Universe Today, it’s important to realize that this is not yet a direct measurement of the habitable zone rate, “but it is what I would classify as a very careful extrapolation of the rate of small planets we have observed at shorter periods around M dwarfs.” And as Ragozzine and Morton confirmed for us, all of these numbers are based on Kepler results only, and so far, while there are confirmed planets around M dwarfs, none are yet confirmed in the habitable zone. “They do not use any results from Radial Velocity (HARPS, etc.),” Ragozzine said. “As such, these are all candidates and not planets.
That is, the numbers are based on an assumption that most/all of the Kepler candidates are true planets. There are varying opinions about what the false positive rate would be, especially for this particular subset of stars, but there’s no question that the numbers may go down because some of these candidates turn out to be something else other than HZ Earth-size planets.” Other caveats need to be considered, as well. “Everyone needs to be careful about what ‘100%’ means,” Ragozzine said. “It does not mean that every M dwarf has a HZ Earth-size planet. It means that, on average, there is 1 HZ Earth-size planet for every M dwarf. The difference comes from the fact that these small stars tend to have planets that come in packs of 3-5. If, on average, the number of planets per star is one, and the typical M star has 5 planets, then only 20% of M stars have planetary systems.” The point is subtle but important. For example, if you want to plan new telescope missions to observe these planets, understanding their distribution is critical, Ragozzine said. “I’m very interested in understanding what kinds of planetary systems host these planets as this opens a number of interesting scientific questions. Discerning their frequency and distribution are both valuable.” Additionally, the new definition of the habitable zone from Kopparapu et al. makes a big difference. As Ragozzine points out: “This is really starting to point out that the definition of the HZ is based on mostly theoretical arguments that are hard to rigorously justify. For example, a recent paper came out showing that atmospheric pressure makes a big difference but there’s no way to estimate what the pressure will be on a distant world. (Even in the best cases, we can barely tell that the whole planet isn’t one giant puffy atmosphere.) Work by Kopparapu and others is clearly necessary and, from an astrobiological point of view, we have no choice but to use the best theory and assumptions available. Still, some of us in the field are starting to become really wary of the ‘H-word’ (as Mike Brown calls it), wondering if it is just too speculative. Incidentally, I much, much prefer that these worlds be referred to as potentially habitable, since that’s really what we’re trying to say.” However, Morton told Universe Today that he feels the biggest difference in their work was the careful extrapolation from short period planets to longer periods. “This is why we get occurrence rates for the smaller planets that are twice as large as Dressing or Kopparapu,” he said via email. He also thinks the most interesting thing in their paper is not just the overall occurrence rate or the HZ occurrence rate even, but the fact that, for the first time, they’ve identified some interesting structure in the distribution of exoplanet radii. “For example, we show that it appears that planets of roughly 1 Earth radius are actually the most common size of planet around these cool stars,” Morton said. “This makes some intuitive sense given the rocky bodies in our Solar System—there are two planets about the size of Earth, making it the most common size of small planet in our system too!
Also, we find that there are lots and lots of planets around M dwarfs that are just beyond the detection threshold of current ground-based transiting surveys—this means that as more sensitive instruments and surveys are designed, we will just keep finding more and more of these exciting planets!” But Ragozzine told us that even with all the aforementioned caveats, the exciting thing is that the main gist of these new numbers probably won’t change much. “No one is expecting that the answer will be different by more than a factor of a few – i.e., the true range is almost certainly between 30-300% and very likely between 70-130%,” Ragozzine said. “As the Kepler candidate list improves in quantity (due to new data), purity, and uniformity, the main goal will be to justify these statements and to significantly reduce that range.” Another fun aspect is that this new work is being done by the young generation of astronomers, grad students and postdocs. “I’m sure this group and others will continue producing great things… the exciting scientific results are just beginning!” Ragozzine said.
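The occurrence-rate arithmetic quoted above is easy to lose track of, so here is a small script that simply restates the numbers reported in this article; the five-planets-per-system figure is the illustrative value Ragozzine uses, not a measurement.

```python
# A back-of-the-envelope walk through the occurrence-rate arithmetic quoted
# above (the percentages come from the article; nothing here is new data).
occurrence_old_hz = 0.15                    # Dressing & Charbonneau, after the bug fix
occurrence_old_hz_corrected = occurrence_old_hz * 2   # Morton & Swift factor of 2
print(f"old HZ, corrected: {occurrence_old_hz_corrected:.0%} HZ Earth-size planets per M dwarf")

# Ragozzine's caveat: "100%" means planets per star, not stars with planets.
planets_per_star = 1.0        # on average, one HZ Earth-size planet per M dwarf
planets_per_system = 5        # illustrative: small stars often host 3-5 planets
stars_with_systems = planets_per_star / planets_per_system
print(f"if planets come {planets_per_system} to a system, only "
      f"{stars_with_systems:.0%} of M dwarfs host one of those systems")

m_dwarfs_in_galaxy = 100e9
print(f"expected HZ Earth-size worlds: {planets_per_star * m_dwarfs_in_galaxy:.0e}")
```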
Almost any food or food additive can cause an allergic reaction in children. Common triggers in infants and young children include milk, soy, eggs, peanuts and wheat, while older children can have allergic reactions to nuts and seafood. Food allergy symptoms manifest in a variety of ways depending on the individual, which is why they can be difficult to properly diagnose and treat. A food allergy is an exaggerated immune response to a food and can present with a rash, GI or respiratory symptoms and even anaphylaxis. Non-immune responses to diet include conditions like lactose intolerance, irritable bowel syndrome, and gastroenteritis. Children can often outgrow allergies, depending on the allergen and mechanism. The relationship between symptoms and foods must be assessed, with skin testing where food reactions are present.
Hybridization is the process of combining two complementary single-stranded DNA or RNA molecules and allowing them to form a single double-stranded molecule through base pairing. In a reversal of this process, a double-stranded DNA (or RNA, or DNA/RNA) molecule can be heated to break the base pairing and separate the two strands. Hybridization is a part of many important laboratory techniques such as polymerase chain reaction and Southern blotting. DNA is usually found in the form of a double-stranded molecule. These two strands bind to one another in a complementary fashion by a process called hybridization. Naturally, when DNA is replicated, the new strand hybridizes to the old strand. In the laboratory we can take advantage of hybridization by generating nucleic acid probes, which we can use to screen for the presence or absence of certain DNA or RNA molecules in the cell. (Lawrence C. Brody, Ph.D.)
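As a side illustration (not part of Dr. Brody's explanation), here is a toy sketch of the base-pairing rule that hybridization relies on, checking whether a probe and a target strand are complementary. The sequences are invented for the example.

```python
# Watson-Crick base pairing: A pairs with T, and G pairs with C (for DNA).
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the complementary strand, base for base."""
    return "".join(PAIR[base] for base in strand)

def can_hybridize(strand_a: str, strand_b: str) -> bool:
    """In this toy model, two strands hybridize if one is the reverse
    complement of the other, so every base can pair with its partner."""
    return strand_b == complement(strand_a)[::-1]

probe  = "ATGCCT"
target = "AGGCAT"                     # reverse complement of the probe
print(can_hybridize(probe, target))   # True
```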
Researchers have solved the mystery of the Tully Monster, an oddly configured sea creature with teeth at the end of a narrow, trunk-like extension of its head and eyes that perch on either side of a long, rigid bar. The 300-million-year-old animal—which grew to only a foot long and was a vertebrate, with gills and a stiffened rod (or notochord) that supported its body—is part of the same lineage as the modern lamprey. “I was first intrigued by the mystery of the Tully Monster,” says lead author Victoria McCoy, who conducted her research as a Yale University graduate student and is now at the University of Leicester. “With all of the exceptional fossils, we had a very clear picture of what it looked like, but no clear picture of what it was.” For decades, the Tully Monster has been one of the great fossil enigmas: It was discovered in 1958, first described scientifically in 1966, yet never definitively identified even to the level of phylum (that is, to one of the major groups of animals). Officially known as Tullimonstrum gregarium, it is named after Francis Tully, the amateur fossil hunter who came across it in coal mining pits in northeastern Illinois. Thousands of Tully Monsters eventually were found at the site, embedded in concretions—masses of hard rock that formed around the Tully Monsters as they fossilized. Tully donated many of his specimens to the Field Museum of Natural History, which collaborated with Argonne National Laboratory and the American Museum of Natural History on the study that is published in the journal Nature. “Basically, nobody knew what it was,” says coauthor Derek Briggs, professor of geology and geophysics and curator of invertebrate paleontology at the Yale Peabody Museum of Natural History. “The fossils are not easy to interpret, and they vary quite a bit. Some people thought it might be this bizarre, swimming mollusk. We decided to throw every possible analytical technique at it.” Using the Field Museum’s collection of 2,000 Tully Monster specimens, researchers analyzed the morphology and preservation of various features of the animal. Powerful, new analytical techniques also were brought to bear, such as synchrotron elemental mapping, which illuminates an animal’s physical features by mapping the chemistry within a fossil. The researchers concluded that the Tully Monster had gills and a notochord, which functioned as a rudimentary spinal cord. Neither feature had been identified in the animal previously. “It’s so different from its modern relatives that we don’t know much about how it lived,” McCoy says. “It has big eyes and lots of teeth, so it was probably a predator.” Some key questions about Tully Monsters remain unanswered. No one knows when the animal first appeared on Earth or when it went extinct. Its existence in the fossil record is confined to the Illinois mining site, dating back 300 million years. “We only have this little window,” Briggs says. Source: Yale University
We often think of DNA as that simple, beautiful twisting double helix so popularized by modern culture. While this view isn’t exactly incorrect, it’s very limited. In real life, DNA is an immensely complex molecular structure, and only now are scientists beginning to understand the myriad of shapes it can take on. Using cryo-electron tomography, researchers are learning more about how DNA functions while in biological use, and specifically the many shapes and forms it can take. Cryo-electron tomography refers to the process of freezing biological samples. This microscopy technique makes it easier for researchers to observe tiny and frequently changing biological samples. DNA is a tightly coiled molecular structure, containing an estimated 3 billion base pairs. If fully unwound, DNA would stretch three feet (yes, three feet) in length. This massive structure, however, needs to fit inside the tiny nucleus of cells, where the DNA is stored when not being used. The only way to fit that much into a space so tiny is to wind it, very, very tightly. Of course, our cells need to access the data in DNA from time to time, and cannot do so while it’s all wound up. In order to work with it in a variety of biological processes, our cells need to modify the shape of the DNA itself. When cells need to use DNA, it is pulled out of the nucleus, unwound, and then worked with. Our understanding of DNA once it is unwound has generally been very limited. Up until now, trying to anticipate and work with this myriad of shapes has been difficult, but scientists are now using 3-D imaging software and cryo-electron tomography to learn more about the twisting and turning shapes of DNA. Even with the massive power of modern computers, researchers aren’t yet working with the full length of DNA. Instead, researchers are working with small DNA “circles”, bending and twisting them to see how the DNA reacts and changes shape. These tiny circles were about 10 million times shorter than an actual DNA strand, but by the standards of modern genetics, these structures themselves are quite massive and difficult to work with. A brief overview of DNA The double-helix structure of DNA was described by James Watson and Francis Crick in 1953. Needless to say, this discovery marked one of the biggest achievements of the 20th century. DNA stands for deoxyribonucleic acid, which is one of the three macromolecules believed to be required for life. (The other two are proteins and carbohydrates.) The science is immensely complex, but essentially, DNA provides instructions to the cells, telling them what to do. Everything from the structure of our bones to the color of our eyes is determined by the instructions handed out by DNA. The only part of our body that doesn’t seem to function on the DNA found in the nucleus of our cells is the mitochondria, which actually have their own set of DNA instructions. For this reason, some believe that mitochondria originally evolved separately from the rest of the animal cell.
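For a rough sense of the length figures above, here is a small sanity check. The 0.34 nm rise per base pair is a standard textbook value and is my assumption, not a number given in the article.

```python
# Rough sanity check of the "three feet" and "ten million times shorter" figures.
RISE_PER_BP_NM = 0.34                 # assumed helix rise per base pair, nm
base_pairs = 3e9                      # ~3 billion bp in the human genome

total_m = base_pairs * RISE_PER_BP_NM * 1e-9
print(f"stretched-out length: {total_m:.2f} m (~{total_m * 3.28:.1f} feet)")

circle_m = total_m / 1e7              # "about 10 million times shorter"
print(f"one DNA mini-circle:  {circle_m * 1e9:.0f} nm, "
      f"roughly {circle_m / (RISE_PER_BP_NM * 1e-9):.0f} base pairs")
```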
Kenya’s coffee history dates back more than 100 years. Coffee was first planted in Kenya’s coast province in 1885. The coffee plants were brought by French missionaries from the Indian Ocean island of Reunion. Around the same time, the scramble for Africa was taking shape through the Berlin Conference of 1885. Africa was divided into territories of influence by the European powers. The British Government founded the East African Protectorate in 1895 and soon after opened Kenya’s fertile highlands to British colonial settlers. Early coffee production was a preserve of the colonial settler community. The high demand for coffee in European markets made it Kenya’s main export crop by the 1920s. The right of African farmers to grow coffee would, however, have to wait until the 1950s. The agitation for self-determination and economic freedom led to the Land and Freedom uprising in 1952. As a response to the uprising, the colonial government initiated a land tenure reform plan that offered African farmers a system of farming whose production would provide economic incentives. This land reform is usually referred to as the Swynnerton Plan. In 1956, African farmers were allowed to grow coffee but they were restricted to a maximum of 100 coffee trees. In the 1960s, the restrictions were removed and Africans could plant an unlimited number of coffee trees. This marked the explosion of Kenya’s small-scale and medium-scale coffee producers as more Africans got into coffee production or increased their production. The small-scale farmers formed co-operatives to take advantage of economies of scale and solidify collective bargaining power. Today there are over 600,000 smallholder producers organized into about 550 co-operatives. The smallholder farmers account for 75% of the land growing coffee and over half of production.
There are many, many sports out in the world that involve sharply hitting a ball with something. Baseball, tennis, golf, cricket, polo, you name it. After being hit, a ball describes a trajectory determined by the gravity of the earth and the interaction of the ball with the atmosphere. This can be exploited in many sports, since different trajectories can be useful in different situations. Golf and baseball especially are built on tweaking air flow around the ball in order to make it do precisely what the athlete wants. Tennis too, which has been drawing a lot of interest with the dramatic underdog victories in the French Open. In the absence of an atmosphere (or local gravitational variations and things of that nature) it’s generally said that a projectile will follow a parabola. This is true in a constant gravitational field, but the earth is a sphere of finite size and thus does not in fact have a uniform field. It’s very close to uniform near the surface and so a parabola is an excellent approximation. But formally it’s not quite right. Atmosphere aside, a thrown projectile is in a very extended orbit about the center of the earth, and thus the trajectory is a section of an ellipse. So here’s a challenge problem that’s not so hard, but requires a little bit of mathematical fluency. It’s a Fermi problem, so estimates rather than exact numbers are what we’re looking for. Assume a baseball is hit into the outfield with a typical angle and initial speed. What is the order-of-magnitude difference in range between the trajectory approximated as a parabola and the trajectory approximated as an ellipse? As a hint that you don’t need, the difference is very, very small.
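One way to check your estimate afterwards is to integrate both cases numerically and compare. This is only a rough sketch: the 45 m/s hit at 40 degrees, the neglect of air resistance and Earth's rotation, and the choice to measure range as the horizontal distance when the ball returns to launch height are all assumptions of mine, not part of the problem statement.

```python
import numpy as np

GM  = 3.986004418e14   # Earth's gravitational parameter [m^3/s^2]
R_E = 6.371e6          # mean Earth radius [m]
g0  = GM / R_E**2      # surface gravity [m/s^2]

v0, theta, dt = 45.0, np.radians(40.0), 1e-4   # assumed hit, small timestep

def accel(x, y, central):
    if not central:
        return 0.0, -g0                      # uniform field -> parabola
    rx, ry = x, y + R_E                      # vector from Earth's center
    r = np.hypot(rx, ry)
    return -GM * rx / r**3, -GM * ry / r**3  # inverse-square field -> ellipse

def horizontal_range(central):
    x, y = 0.0, 0.0
    vx, vy = v0 * np.cos(theta), v0 * np.sin(theta)
    ax, ay = accel(x, y, central)
    while True:
        # velocity-Verlet step (second order, so integration error stays
        # well below the tiny effect we are trying to resolve)
        vx += 0.5 * ax * dt; vy += 0.5 * ay * dt
        x_prev, y_prev = x, y
        x += vx * dt; y += vy * dt
        ax, ay = accel(x, y, central)
        vx += 0.5 * ax * dt; vy += 0.5 * ay * dt
        if y < 0.0 and y_prev >= 0.0 and vy < 0.0:
            frac = y_prev / (y_prev - y)     # interpolate the y = 0 crossing
            return x_prev + frac * (x - x_prev)

d_parab   = horizontal_range(central=False)
d_ellipse = horizontal_range(central=True)
print(f"parabola : {d_parab:10.4f} m")
print(f"ellipse  : {d_ellipse:10.4f} m")
print(f"difference ~ {abs(d_ellipse - d_parab) * 1e3:.2f} mm")
```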
The earliest educational software simply transferred print material from the page to the monitor. Since then, the Internet and other digital media have brought students an ever-expanding, low-cost knowledge base and the opportunity to interact with minds around the globe—while running the risk of shortening their attention spans, isolating them from interpersonal contact, and subjecting them to information overload. The New Science of Learning: Cognition, Computers and Collaboration in Education deftly explores the multiple relationships found among these critical elements in students’ increasingly complex and multi-paced educational experience. Starting with instructors’ insights into the cognitive effects of digital media—a diverse range of viewpoints with little consensus—this cutting-edge resource acknowledges the double-edged potential inherent in computer-based education and its role in shaping students’ thinking capabilities. Accordingly, the emphasis is on strategies that maximize the strengths and compensate for the negative aspects of digital learning, including:
- Group cognition as a foundation for learning
- Metacognitive control of learning and remembering
- Higher education course development using open education resources
- Designing a technology-oriented teacher professional development model
- Supporting student collaboration with digital video tools
- Teaching and learning through social annotation practices
The New Science of Learning: Cognition, Computers and Collaboration in Education brings emerging challenges and innovative ideas into sharp focus for researchers in educational psychology, instructional design, education technologies, and the learning sciences.
What is it? Water is a molecule called H2O that contains two atoms of hydrogen and one atom of oxygen. It’s a transparent, odorless liquid that you can find in lakes, rivers and oceans. It falls from the sky as rain or snow. Where does it come from? Fresh water is the result of the Earth’s water or hydrologic cycle. Basically, the sun’s heat causes surface water to evaporate. It rises in the atmosphere, then cools and condenses to form clouds. When enough water vapor condenses, it falls back to the surface again as rain, sleet, or snow. The process repeats itself in a never-ending cycle. The water we consume and use every day comes from two main sources: groundwater and surface water. When rainwater or melting snow seeps into the ground, it collects in underground pockets called aquifers, which store the groundwater and form the water table, another name for the highest level of water that an aquifer can hold. Water levels can reach the water table or fall well below it depending on such factors as rainfall, drought, or the rate at which the water is being used. Groundwater usually comes from aquifers through a drilled well or natural spring. Surface water flows through or collects in streams, rivers, lakes, reservoirs and oceans — not underground like groundwater. Surface water can be beautiful, even pristine-looking, but most of it isn’t directly fit for drinking. Fully 97% of the Earth’s water is found in the oceans and can’t be used for drinking because of its salt content. The other 3% of water is fresh, and most of that is locked up in ice or glaciers. How much do you use? A typical American uses 80-100 gallons of water every day. If that sounds like a lot, consider that the total includes not just drinking water, but also the water used for washing, watering lawns, and waste disposal. In fact, people actually drink less than 1% of the water coming into their homes. The rest goes for other purposes. Unless you have your own well, you likely have to pay something for the water you use. A typical U.S. household pays about $1.50 per 1,000 gallons, or $0.0015 per gallon. For a family of four using 100 gallons per person each day, that adds up to about $18 per month. Bottled water has a higher price tag, although it may be preferred for businesses or homes that want a low-maintenance source of quality drinking water. According to the Beverage Marketing Corp., the wholesale cost of domestic, non-sparkling bottled drinking water was $1.21 per gallon in 2011. Drinking water sold in 20-ounce bottles may cost more than $6 per gallon. Also, many homeowners have to pay for sewage (water that leaves the home). In the U.S., the average cost for sewage is about $84 a month. How does it get to your home or business? Typically, pipes bring the water supply from a facility that treats the water to your home or business. A well-built and maintained distribution system of pipes helps ensure its quality. Another way to provide drinking water to a home or business is to install a water cooler or have bottled water delivered. Raw and untreated water is obtained from an underground aquifer (usually through wells) or from a surface water source, such as a lake or river. It is pumped, or flows, to a treatment facility. Once there, the water is pre-treated to remove debris such as leaves and silt. 
Then, a sequence of treatment processes — including filtration and disinfection with chemicals or physical processes — eliminates disease-causing microorganisms. When the treatment is complete, water flows out into the community through a network of pipes and pumps that are commonly referred to as the distribution system. Approximately 85% of the U.S. population receives its water from community water systems. Community water systems are required to meet the standards set by the U.S. Environmental Protection Agency (EPA) under the authority of the Safe Drinking Water Act (SDWA). A well is a strategically placed access point drilled into an aquifer, combined with a pump to withdraw the water and a basic filtering or screening system. Approximately 15% of the US population relies on individually owned sources of drinking water, such as wells, cisterns, and springs. The majority of household wells are found in rural areas. Water quality from household wells is the responsibility of the homeowner. Bottled water is popular. Studies suggest that half of all Americans drink bottled water from time to time, and about a third consume it regularly. As with tap water, the source of bottled water is usually a municipal water system or a natural spring, and from there it may go through additional purification. As a packaged product, bottled water is regulated under the guidelines of the U.S. Food and Drug Administration (FDA). To find out more, check out www.bottledwater.org.
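The cost figures quoted above combine into a quick household estimate. This sketch uses the article's example numbers; the half-gallon of drinking water per person per day is an assumption of mine.

```python
# Monthly cost estimate using the figures quoted in the article.
rate_per_gallon = 1.50 / 1000           # $1.50 per 1,000 gallons of tap water
people, gallons_per_person_per_day = 4, 100
days = 30

water_bill = people * gallons_per_person_per_day * days * rate_per_gallon
print(f"tap water for the household: ~${water_bill:.2f} per month")    # ~$18

bottled_per_gallon = 1.21               # 2011 wholesale, non-sparkling bottled water
drinking_gallons = people * 0.5 * days  # assume ~half a gallon per person per day
print(f"same drinking water bottled: ~${drinking_gallons * bottled_per_gallon:.2f} per month")
```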
Maize plants that have been inoculated with bacteria naturally present in the soil show improved resistance against a pathogenic fungus and a considerable reduction in the number of attacks by a herbivorous moth. It is the first time that such a double effect has been shown in maize. The results of these studies carried out at the University of Neuchâtel under the supervision of Brigitte Mauch-Mani have been presented at the PR-IR 11 congress on plant defences. A pathogenic fungus responsible for a disease called anthracnose has been causing serious damage to cereal crops on the North American continent, especially during the 1970s. "Colletotrichum graminicola is not limited to crops. It also affects golf courses, parks and private gardens where the grass changes to an unappealing brown colour," states Dirk Balmer, a PhD student in the molecular and cellular biology laboratory and co-author of the research. "Luckily, the problem in Europe is not as serious since the cultivated varieties on this continent are naturally less sensitive to this disease." In order to test a way to improve the resistance of maize to this pathogen, Chantal Planchamp, the other principal co-author of this study and a PhD student in the same group, inoculated the plants with a soil bacterium of the genus Pseudomonas. This microorganism is known to easily colonise maize seeds and roots without damaging the plant. The bacterium acts somewhat like a vaccine by halting the pathogenic fungus's proliferation. Furthermore, it seems that the presence of the bacterium helps to limit the herbivorous moth's attacks. The origin of this double-action protection of maize still needs to be clarified, particularly through the analysis of the mechanisms activated by the plant, such as the production of hormones or specific metabolites. In parallel to the experiments of Chantal Planchamp, Dirk Balmer wanted to know how the plant reacts to a C. graminicola infection. It turned out that an infection at the root zone triggers important resistance reactions that spread all the way to the leaves. An inverse effect (leaf to root) has also been shown, but it is not as pronounced. "It's the first time that a study shows the effects of a systemic resistance caused by this infection. This was possible because the fungus attacks both the roots and leaves of maize," adds Dirk Balmer.
An Act of the US Congress concerning slavery. Following the Mexican–American War, the Compromise of 1850 had allowed squatters in New Mexico and Utah to decide by referendum whether they would enter the Union as “free” or “slave” states. This was contrary to the earlier Missouri Compromise. The Act of 1854 declared that in Kansas and Nebraska a decision on slavery would also be allowed, by holding a referendum. Tensions erupted between pro- and anti-slavery groups, which in Kansas led to violence (1855–57). Those who deplored the Act formed a new political organization, the Republican Party, pledged to oppose slavery in the Territories. Kansas was to be admitted as a free state in 1861, and Nebraska in 1867.
Flights of Fancy: Leonardo's Flying Machines by Gemma White Have you ever wondered how a bird flies? Have you ever run around mimicking a bird, flapping your arms, willing yourself to take off? Leonardo did. Well, we're not sure that he actually ran around, although that is fun to imagine. But it was his love for birds that provided a major source of inspiration for his endeavor to master the art of human flight. He thought that to understand how a bird flies would provide the key that would enable man to take to the air. So what did Leonardo learn from birds? They Make It Look So Easy Leonardo's study of birds was extensive and obsessive. Notebooks that he kept over many years show pages and pages of observations and drawings, detailing every aspect of a bird's flight. He made detailed anatomical drawings of a bird's body and wings. He observed the subtle movements of the wings in flight, the reaction of the wings against different wind conditions, and how a bird can drop from the sky and dive to the ground or remain stationary in the wind by using fine movements of its wings and tail. He also had the great insight to understand how a bird can achieve motion by exerting more pressure against the air than the air exerts against its body. Leonardo also performed many experiments. He constructed models to test the effect of shifting the bird's center of gravity. He built replicas of birds' and bats' wings, experimenting with different materials to test what would be the best material for a full-scale flying machine. Throughout all these vigorous investigations, Leonardo believed that “the bird is an instrument operating through mathematical laws, which instrument is in the capacity of man to be able to make with all the motions.” So he thought that by understanding the mathematics underlying a bird's movement, man could then mimic it and take to the air. Oops…a Power Problem! However, there was an important issue that Leonardo failed to address: How can man power a flying machine? Remember, this is in an age before the first engine, so all designs had to be powered by human strength. Leonardo observed that birds in flight had great reserves of strength. Birds could carry their own weight in prey—an eagle carrying a hare between its talons—and they also had enough power to double or triple their normal rate of speed in order to escape from a pursuer or dive to catch lunch. But Leonardo claimed that man had comparable reserves of strength in his legs, enough power to keep him in the air. Using all his acquired knowledge, Leonardo set to the task of designing a flying machine. He thought it was possible to re-create specifically the way a bird flies, and even on a larger scale. Around 1485, he drew detailed plans for a human-powered ornithopter—an aircraft with flapping wings. This design had a wooden framework, with the pilot's seat located inside a shell-shaped vessel. From this vessel, the pilot would operate two large batlike wings, which had pulleys connected to stirrups that he moved with his feet. This original design had the pilot lying down, but subsequent drawings showed him sitting or standing upright. However, the key to all these designs was the use of the legs to provide maximum muscle power. The plane's tail would provide flight and landing stability. Actually, Leonardo never built any of his designs. So…would they have been successful? Probably not! What We Know Now In order to fly, we must generate enough lift (upward motion). 
This is achieved by creating enough thrust or power (forward motion) to overcome the weight of the machine and the drag (friction) of the air—and this is where Leonardo came unstuck. Lift is generated by air flowing over an airfoil, be it a wing or a propeller. The shape of the airfoil causes air to flow over its two sides differently. The shape of the upper side of a wing causes air to flow over it at a lower pressure than it does across its underside. This pressure difference causes the wing to rise. Engines on planes push the wings forward so more air flows across them, creating more lift. Even though Leonardo realized that power was essential to success, the machines he designed would have been too heavy for man himself to generate enough thrust to power. If he had based his design on a hawk's wing, he might have been able to build a glider, but he couldn't have propelled any design, no matter what the wing shape. In fact, we had to wait another 400 years for our first powered flight—until 1903, when the Wright brothers successfully constructed an engine-powered plane that made human flying possible. - What kind of power source did Leonardo indicate would power his flying machines? [anno: Leonardo expected humans to provide the power source for his flying machines. In particular, Leonardo thought the leg muscles of a person could power the machines.] - Since the time of Leonardo, people have learned how to manufacture materials that are lightweight and strong. Using materials available today, what kinds of improvements might be made on Leonardo's flying machines? Do you think this would enable one of his designs to work? Choose Leonardo's helicopter, glider, or parachute design. Write a short paragraph describing what kinds of materials you would use to build this machine. Will the machine work with the new materials? Include details about why the machine would or would not work if it were made with modern materials. [anno: Answers will vary. Students may suggest building machines with titanium or aluminum metals and using nylon for wings and parachutes. The helicopter and glider might work in a limited range if the aerial screw sail for the helicopter were large enough and the wings on the glider were large enough, but the limiting factor would still be fatigue and the weight of the person powering the machine. The parachute has been designed to work successfully with a nylon chute.]
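For readers who want to see why human power falls short, here is a rough modern estimate using the standard lift equation. Every number in it (mass, airspeed, lift coefficient, lift-to-drag ratio) is an illustrative assumption of mine rather than anything from Leonardo's notebooks or the article.

```python
# Lift must balance weight: L = 0.5 * rho * v^2 * S * CL = m * g
rho = 1.225          # air density at sea level, kg/m^3
CL  = 1.0            # optimistic lift coefficient for a simple wing (assumed)
m   = 100.0          # pilot plus wooden machine, kg (assumed)
g   = 9.81
v   = 10.0           # airspeed a human-powered craft might sustain, m/s (assumed)

S = (m * g) / (0.5 * rho * v**2 * CL)
print(f"wing area needed just to stay aloft: {S:.1f} m^2")

# Crude power check: at an assumed lift-to-drag ratio of about 10,
# the thrust needed is roughly weight / 10, and power = thrust * speed.
power = (m * g / 10) * v
print(f"power to cruise: ~{power:.0f} W, versus ~200-300 W a fit human can sustain")
```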
HELP SOLVE GLOBAL WARMING Every one of us impacts the earth and its resources—and we all have a role to play to help slow the effects of global warming and protect the planet for future generations. Reducing and offsetting carbon emissions (or greenhouse gas emissions) is one of the most powerful ways we can make a difference and do our part to help stop global warming. According to the World Wildlife Fund’s Living Planet Report 2012, we all need to act now to reduce carbon emissions that contribute to global warming, pollute the air and threaten vital eco-systems. What is Global Warming? Global warming is the increase of Earth’s average surface temperature due to the effect of greenhouse gases, such as carbon dioxide emissions from burning fossil fuels or from deforestation, which trap heat that would otherwise escape from Earth. This is a type of greenhouse effect. Where do carbon emissions come from? Carbon emissions are caused every time we burn fossil fuels such as natural gas, oil, and coal to produce energy. Turn on your washing machine and you are using energy, which creates carbon emissions. Wash in hot water and you’ll create even more carbon pollution. Nearly every action we take during an average day—from talking on our mobile phones to driving cars and taking public transit—creates carbon emissions. In fact, reading just one email creates an estimated 4 grams of carbon pollution! How are carbon emissions measured? The amount of carbon pollution you create is referred to as your carbon footprint. Generally, carbon is measured in tonnes or pounds, so your carbon footprint represents an estimate of how many tonnes/pounds of carbon emissions you create in a given period, usually a year. The average person creates 48,488 pounds of carbon every year: that’s a lot. And every tonne of carbon created accelerates the impacts of climate change. What about other greenhouse gases? Other greenhouse gases like methane and nitrous oxide are produced most often from agricultural activity and raising livestock. Manufacturing creates greenhouse gases, as do construction and deforestation. Trees play a critical role in absorbing carbon dioxide. Fewer forests mean larger amounts of GHGs entering the atmosphere – and increased speed and severity of global warming. Even discarding organic food scraps, like apple cores and spoiled foods in landfills, creates greenhouse gas emissions. While it sounds pretty bleak…there is a positive side to the story. What can I do to reduce my carbon footprint? The good news is that there are simple, effective ways to reduce and even neutralize your carbon footprint to lead a greener, cleaner lifestyle that will go a long way to help save the planet. - It is important to understand how our everyday activities create pollution and contribute to our carbon footprint. You’re off to a great start already, just by learning more about this. - Once you know how much you contribute to the emission problem, you can be proactive to find ways to become more energy efficient in your everyday life. It won’t take long to get yourself on the road to reducing your carbon footprint. - Regardless of your personal efforts to reduce your carbon footprint, it is virtually impossible to eliminate it on your own. That’s where JustGreen LifeStyle comes in. JustGreen LifeStyle has a portfolio of certified carbon offset projects. These projects include renewable energy production and/or carbon emission reduction projects that reduce the amount of emissions released into the atmosphere. 
You can purchase carbon credits, otherwise known as carbon offsets from JustGreen to neutralize your carbon footprint, while supporting these environmentally friendly projects at the same time. This process helps to balance out your carbon emissions towards carbon neutrality.
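For a sense of scale, the footprint figures mentioned above convert as follows; the emails-per-day figure is an invented example, and the per-email gram estimate is the article's.

```python
# Unit conversions for the footprint figures quoted above.
LBS_PER_METRIC_TONNE = 2204.62

avg_person_lbs_per_year = 48_488
tonnes = avg_person_lbs_per_year / LBS_PER_METRIC_TONNE
print(f"average footprint: {tonnes:.1f} tonnes of CO2 per year")     # ~22 tonnes

grams_per_email = 4
emails_per_day = 50                     # illustrative assumption
email_tonnes = grams_per_email * emails_per_day * 365 / 1e6
print(f"email habit alone: ~{email_tonnes:.3f} tonnes per year")
```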
3D Printing Lithium Ion Micro-Batteries Leland C. Smith, Bruce Dunn & Janet I. Hur | Aug 23, 2018 This article was originally published by Science Trends. A great example of how materials science can transform technology and society is Moore’s Law. Writing a half-century ago, Gordon Moore predicted that continuous improvements in semiconductor manufacturing would lead to a doubling of computing power roughly every two years. This prediction played out with astonishing accuracy and extraordinary results. Consider that the processor in a modern cell phone is about 100 million times faster than the computers available to Apollo-era astronauts. (1) Moore’s law and related trends have touched nearly every electronic component including memory, cameras, and various sensors. It is not only the computational power of modern electronics that has had such dramatic effects on how we live, but also the mobility of our devices. And since the dawn of electronics, mobility has required one thing – batteries. (2) As Moore’s Law was taking off, mobile computers and cell phones in the 1970s and 80s languished with bulky and underpowered batteries. It was not until the early 1990s that commercial lithium-ion batteries became available. This new battery technology, along with the relentless churn of improved processors, ultimately allowed the laptop computer and cell phone to become ubiquitous and indispensable pieces of modern life. There should be no confusion about whether battery technology is governed by Moore’s Law – it is not. (3) While the introduction of lithium-ion in the 1990s did produce significant improvements, a doubling of top-line battery performance (e.g. energy density) could occur on the order of decades, not years. However, there are still numerous features of battery technology that can be improved to give meaningful benefits to society in the near future. Safety, price, and sustainability are a few important and active areas of research. Another limitation of existing lithium-ion battery technology is the poor performance of small batteries. Compared to a large battery cell destined for an electric vehicle (EV), the smallest batteries on the market (about the size of a postage stamp) deliver only 5% as much energy on a volumetric basis. This problem is becoming more pressing with the continued push for even smaller electronics in fields like health care, wearables, and the internet of things. Much like Gordon Gekko’s cell phone, today’s miniaturized electronics are waiting for a better battery. The good news is, unlike doubling the range of an EV battery, improving small batteries does not require dramatically changing the underlying electrochemistry. Lithium-ion batteries, small and large, transport the same ions (lithium) through the same host materials (metal-oxides at the cathode/graphite, lithium metal or silicon at the anode). At its core, the problem of poor small battery performance is a question of manufacturing. Namely, how do we precisely control the geometry and assembly of battery materials on small scales? The commoditized large format lithium-ion batteries that we use every day are built in a surprisingly simple way. Much like a sandwich, they have three layers (anode, electrolyte, and cathode) that are assembled by applying thin layers of material over large areas using the manufacturing equivalent of a butter knife. The layers are then rolled up and placed inside a package, either a can or pouch. 
Unfortunately, these manufacturing techniques that work so well on a large scale become unwieldy as battery size decreases. Simple manipulations like spreading, cutting and rolling become increasingly difficult for batteries approaching 1 cm2 in footprint. The result is that small batteries contain less active material as a proportion of their total mass, so they contain less energy. Also, the total surface area between the anode and cathode tends to decrease resulting in a decreased flux of lithium-ions, which gives less power. The field of three-dimensional (3D) batteries aims to overcome these challenges by manipulating electrode geometry at the micron scale. As opposed to the flat sheets of material found in the sandwich-style batteries, 3D battery electrodes typically consist of hundreds or thousands of features with the dimensions near that of a human hair. Over two decades of 3D battery research, a primary objective was to discover a material that can conformally coat these tiny electrodes. In this work, researchers repurposed a common photoresist material as a 3D battery electrolyte. This discovery allowed them to develop a battery manufacturing scheme that resembles the way integrated circuits are built. By demonstrating a scalable method for forming sub-cm2 3D batteries with energy density approaching that of larger (e.g. cell phone) batteries, the research suggests that rapid advancement in small battery performance is possible. The manufacturing technology that put the power of a supercomputer in the palm of your hand seems to have something in store for batteries as well. These findings are described in the article entitled High Areal Energy Density 3D Lithium-Ion Microbatteries, recently published in the journal Joule. This work was conducted by Janet I. Hur, Leland C. Smith, and Bruce Dunn from the University of California, Los Angeles.
Bats exhibit all the characteristics of mammals: they have hair and give birth to live young who are fed on milk from mammary glands. However, bats take to the skies as the only true flying mammals. Learn the basic biology of bats as well as their physical adaptations for survival.
Grades: K - 12
Setting: Indoor or outdoor classroom
Contact: Governor Mike Huckabee Delta Rivers Nature Center, Pine Bluff; Education Program Coordinator, 870-534-0011
Duration: 45 minutes - 1 hour
Suggested Number of Participants: 10 - 30
- Learn the basic biology of bats.
- Learn the physical adaptations of bats for survival.
- Discuss the usefulness of bats in checking insect populations.
*See glossary for definitions
Nearly 1,000 species of bats exist, with 42 species in the United States and 18 of those in Arkansas. They can vary in size from just over two grams to more than two pounds. Though not found in the Americas, the flying fox bats have wingspans of up to six feet.
- Discuss the characteristics that define mammals.
- Using a diagram or picture, point out some of the unique features of bats.
- Bats belong to the mammalian order Chiroptera, which means “hand wing.” The bones in a bat’s wings are actually the same as those in the human arm and hand. In the bat, though, the finger bones have been elongated and connected by a double membrane of skin to form the wing.
- Female bats reproduce once a year. Breeding is usually in autumn, but the sperm is held until spring for fertilization. Gestation lasts a few weeks and bats usually give birth to one offspring per year, though some species have three to four at a time. Baby bats develop rapidly and begin flying two to five weeks after birth.
- Discuss some of the behavioral specializations of bats such as echolocation, hibernation or migration.
- Bats are nocturnal and hunt mostly at night. While they have relatively good eyesight, most depend on their highly developed echolocation to capture prey in the dark. The creatures are able to “see” objects in front of them by emitting up to 200 high-frequency pulses of sound per second and listening for the rebound. Bats can detect objects as small as a piece of thread in their path. Once the animal has homed in on its prey (mostly insects), the bat will either catch it in its mouth or scoop up several insects using its wings and tail.
- Most bat species spend winters in caves and move to trees or buildings in the summer. Some bats, however, spend their lives entirely in trees, moving to hollow trees in cold weather. Some bat species migrate hundreds of miles to and from winter roosts and generally return to the same cave throughout their lifetimes.
- Answer any questions.
- Why are bats important to the environment?
- What is echolocation, and how do bats use it?
- What might happen if a bat were disturbed during hibernation? 
Bat – a flying mammal of the order Chiroptera that has modified forelimbs serving as wings and is covered with a membranous skin extending to the hind limbs
Chiroptera – the order in which bats are classified
Echolocation – a system used by dolphins, bats and other animals to locate objects by emitting high-pitched sounds that reflect off the object and return to the animal's ears or other sensory receptors
Hibernation – an inactive state resembling deep sleep in which certain animals living in cold climates pass the winter; body temperature is lowered and breathing and heart rates slow down; protects the animal from cold and reduces the need for food during the season when food is scarce
Mammal – any of a class of higher vertebrates, including man, that produce milk for their young, have fur or hair, are warm-blooded and, with the exception of the egg-laying monotremes, bear young alive
Migration – the regular, periodic movement of an animal population from one area to another, usually because of a change in temperature, food supply or the amount of daylight; often undertaken for breeding; mammals, insects, fish and birds migrate
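For older students, the echolocation section above can be extended into a quick calculation: an echo travels out and back, so the distance to the target is half the speed of sound times the delay. A small sketch (the speed of sound is a standard figure and not part of the lesson plan):

```python
# Echolocation arithmetic: the echo travels to the target and back.
SPEED_OF_SOUND = 343.0    # m/s in air at about 20 C

def target_distance(echo_delay_s: float) -> float:
    return SPEED_OF_SOUND * echo_delay_s / 2

for delay_ms in (1, 5, 20):
    print(f"echo after {delay_ms:2d} ms -> insect about "
          f"{target_distance(delay_ms / 1000):.2f} m away")
```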
An agent, such as radiation or a chemical substance, which causes genetic mutation. - ‘One possible explanation for these exceptional findings might be contamination of the ethanol with a mutagen.’ - ‘Chemicals that cause changes in DNA sequence, or mutations, are called mutagens.’ - ‘Chemical mutagens and ionizing radiation have long been used as plant mutagens in forward genetic studies.’ - ‘Detection of low frequency mutations following exposure to mutagens or during the early stages of cancer development is instrumental for risk assessment and molecular diagnosis.’ - ‘A host of carcinogens spew forth, along with poisons, mutagens, and mind-altering drugs.’ - ‘The purpose of this study was to determine if cooked meat containing only moderate concentrations of the known food mutagens would be detectably mutagenic.’ - ‘Cell-culture procedures, chemical mutagens, and radiation all have been applied in what people now refer to as traditional, or conventional, plant breeding for the better part of a century.’ - ‘The so-called Ames test is based on the fact that most carcinogens are mutagens (substances that damage DNA).’ - ‘Among the mutagens that have been used to induce mutations, chemical mutagens administered in various ways have become especially popular.’ - ‘Exposure to ionizing radiation or environmental mutagens and carcinogens may lead to genomic instability.’ - ‘Significantly, their findings were supported by both in vivo and in vitro experiments using reference mutagens.’ - ‘The integrity and stability of the genetic material is continuously being threatened by endogenous and exogenous factors such as chemical mutagens and radiation.’ - ‘Unlike most chemical mutagens, which tend to cause point mutations, rays tend to produce larger aberrations such as chromosome deficiencies or rearrangements.’ - ‘According to experimental data, some pesticides and organic solvents are considered potential chemical mutagens.’ - ‘We show here that these strains can be used to determine very easily the mutagenic specificity of various mutagens.’ - ‘Poisons, mutagens, and carcinogens might be created in harmful concentrations.’ - ‘But we can provoke a quicker second hit by treating the animals with a chemical mutagen or a carcinogen.’ - ‘These mutagens cause point mutations, because they change the genetic code at one point, so changing a protein's amino acid sequence.’ - ‘For genetics and breeding, it is fundamentally important to know the germline mutation rate induced by a mutagen.’ - ‘Firstly, risks to the descendants of trial participants because of the inadvertent modification of germ cells are not identical to those for chemical mutagens.’ 1940s: from mutation + -gen.
Find the Letter Q: Five Little Ducks Five little ducks went out to play, but when their mom called them with a "quack, quack, quack, quack," only four came back! Maybe your child can solve this duck mystery. But first, let's count the Q's in the nursery rhyme! This worksheet offers practice recognizing and naming capital and lowercase letters of the alphabet and exposes kids to nursery rhymes and poetry. Kids completing this worksheet also practice counting.
This book was written to help mathematics students and those in the physical sciences learn modern mathematical techniques for setting up and analyzing problems. The mathematics used is rigorous, but not overwhelming, while the authors carefully model physical situations, emphasizing feedback among a beginning model, physical experiments, mathematical predictions, and the subsequent refinement and reevaluation of the physical model itself. Chapter 1 begins with a discussion of various physical problems and equations that play a central role in applications. The following chapters take up the theory of partial differential equations, including detailed discussions of uniqueness, existence, and continuous dependence questions, as well as techniques for constructing solutions. Specifically, Chapters 2 through 6 deal with problems in one spatial dimension. Chapter 7 is a detailed introduction to the theory of integral equations; then Chapters 8 through 12 treat problems in more spatial variables. Each chapter begins with a discussion of problems that can be treated by elementary means, such as separation of variables or integral transforms, and which lead to explicit, analytical representations of solutions. The minimal mathematical prerequisites for a good grasp of the material in this book are a course in advanced calculus, or an advanced course in science or engineering, and a basic exposure to matrix methods. Students of mathematics, physics, engineering, and other disciplines will find here an excellent guide to mathematical problem-solving techniques with a broad range of applications. For this edition the authors have provided a new section of Solutions and Hints to selected Problems. Suggestions for further reading complete the text.
Chicken pox, also spelled chickenpox, is a common childhood disease caused by the varicella-zoster virus (VZV), also known as human herpesvirus 3 (HHV-3). It is characterized by a fever followed by itchy raw pox or open sores. It is rarely fatal: if it does cause death, it is usually from varicella pneumonia, which occurs more frequently in pregnant women. Chicken pox has a two-week incubation period and is highly contagious by air transmission two days before symptoms appear. Therefore chicken pox spreads quickly through schools and other places of close contact. As the disease is more severe if contracted by an adult, parents have been known to ensure that their children became infected before adulthood. Later in life, virus remaining in the nerves can develop into the painful disease shingles. A chicken pox vaccine is now available, and is required in many states for children to be admitted into elementary school. In addition, effective medications (e.g., acyclovir) are available to treat chicken pox in healthy and immunocompromised persons.
The Roman Republic is the term used to refer to the second era of Roman history, between the kingship and the empire. The origins of the Republic are dubious and legendary at best, and utterly fictional at worst. However, the Romans had a concrete sense of the foundation date of their city, and referred to it frequently in histories. From Roman reckoning, historians place the pivotal events of the beginning of the Roman Republic at 509 B.C. Historians also debate the time of the fall of Rome, based on their private conceptions of what Roman republicanism truly meant. In a sense, the end of the Republic was inevitable after the rise of Marius, but the true end of the Republic is often placed at either the death of Julius Caesar when Republican government first truly ceased, the Battle of Actium (Augustus' victory over Mark Antony), or the Constitutional Settlement of Augustus, which recognized that ruler as princeps, or "first citizen," with plenary power over Roman holdings. Most historians agree to use the date of the Constitutional Settlement as the date when all possibility of the restoration of the Republic finally ended, bringing the Republic to its true close, in 27 B.C. Etruscan Influence on Rome Rome itself was founded in the eighth century B.C. Legend said that Romulus had founded the city after killing Remus. But in actuality, it was founded by shepherds and farmers. The name means "river city" and indicates it was located on the Tiber River. In northern Italy, the Etruscans, a people from Turkey, ruled. They conquered Rome and installed kings there. A simple farming village became a civilized trading settlement. The Etruscans introduced the arch, the aqueduct, building statues of pagan gods, and brought order and a sense of pride and uniqueness to the Romans. In short, they advanced Rome. The Overthrow of the Tarquins One of the Etruscan kings of Rome was Tarquin the Proud. In 509 B.C., some wealthy landowners ousted him and made Rome a republic, or government where people elected their leaders. A senate was founded for the patricians (wealthy nobles and landowners) to vote in while a comitia was founded for the plebeians (middle class and poor) to represent themselves in. A constitution was written. In the republic, the patricians had more power. In the family, the father, or paterfamilias, had ultimate control over his wife and children. The Gauls and Other Foundation Myths Early in Republican history, the city was besieged by the Gauls, led by Brennus, and ransomed. This was the first incursion upon the soil of the city, and the last time enemies passed the sacred pomerium (border line) of the city until 410 A.D. The account is romanticized in Livy's Roman History, where he describes that but for the efforts of the scorned hero Camillus, the Gauls would have utterly razed the city. Instead, the ragtag remnants of the Roman Army held the Capitoline Hill - one of the Seven Hills of Rome - against the Gauls in stalemate, ceding the rest of the city to the advancing armies. After this stalemate had lasted for some time, the Gauls offered to leave for a sum of gold. The Romans quickly accepted, and began to carry out their gold, and weigh out the appropriate price on a Gallic balance scale. The Romans soon discovered that the Gauls had deliberately tinkered with the scales so as to weigh the gold lighter than it actually was, to trick the Romans into paying more. 
As the Romans protested, and prepared to rescind the treaty, Brennus is said to have yelled "Vae Victis!", or, "Woe to the Conquered!", upon which he threw his sword onto the scales as an additional counterweight to the Roman gold. The Romans, their land savaged and their gold extorted from them, set to rebuilding the city. The Gauls were never to return. However, the rebuilding of the city proved difficult. Livy tells that the Romans almost considered abandoning the city of Rome, and moving wholesale to Alba Longa, the mythical nearby city of Aeneas, to begin their lives anew there. However, after an impassioned speech by Camillus, the Romans instead chose to rebuild Rome, at a breakneck pace. Livy blames this hurried reconstruction for the disorganization and haphazard layout of the Roman streets, when compared with the ordered city plans of provincial capitals and colonies. Much of this account was likely sensationalized, to glorify the early roots of Rome in correlation with the image programme of Augustus, Livy's direct patron. The Conquest of Italy Early Rome was a tiny settlement surrounded by hostile kingdoms. The Romans formed alliances with neighboring cities. When they defeated an enemy, they ruled it so they would be safe. As there were always new enemies, the Romans kept conquering. In 272 B.C., all of mainland Italy was ruled by the Roman Republic. The Struggle of the Orders: the Gracchi The Punic Wars: Nascent Roman Imperialism Meanwhile, Sicily and Sardinia were ruled by Carthage, a wealthy and powerful trading metropolis in North Africa. Disdaining commerce, the Romans at first did not mind the Carthaginians ruling the western Mediterranean. But after conquering Italy, the Romans feared Carthage might attack from Sicily. Plus, during a civil war in Sicily Rome and Carthage aided opposing groups. In 264 B.C. a war began. It was the first of the Punic Wars, so called because punic was the Latin word for Carthage. After developing a navy, the Romans won. The Jugurthine War, Marius, and the Beginning of the End The Long Fall Sulla and Pompey: "More Worship the Rising Sun than the Setting" The First Triumvirate The First Triumvirate was an alliance between Julius Caesar, Pompey, and Marcus Crassus formed in 60 BC. The purpose of the triumvirate was for the three members to gain political power through their mutual support for each other. While the triumvirate did allow for the three to gain much political power, it ultimately fell apart due to the ambitions of each member. The Fall of the Republic In January of 49 BC, Julius Caesar crossed the Rubicon River in Northern Italy and proclaimed "alea iacta est" or "the die is cast." This was an act of treason against the Roman Republic, and so Caesar began his civil war against Pompey. Caesar went on to crush the legions of Pompey, and then used his legions to force the Senate to declare him Dictator for life. In 44 BC, in a conspiracy led by Brutus, Julius Caesar was assassinated by a number of senators. His assassination outraged the public and led to the conditions for the republic to be replaced by an empire. Caesar left his wealth to his adopted son, Octavian (later renamed Augustus Caesar). Octavian allied himself with Mark Antony and Lepidus. The three of them formed the Second Triumvirate and effectively ruled Rome. The ambitions of each member tore the alliance apart, forcing Lepidus into exile and leading to Mark Antony's suicide, leaving Octavian as the sole ruler of Rome. 
It took several years for Octavian to set up his government, but he ultimately ended the Roman Republic by 27 BC, transforming the Roman Republic into the Roman Empire. From this period was derived the very word republic, plus the words senate, patricians, constitution, and plebes (plebeians). When the United States Constitution was written in 1787, the nation was modeled after Rome and became a republic. A Senate was formed, and checks and balances like those of the ancient republic kept each branch of government from growing too powerful.
Lithium nitride, Li3N, is a dark-coloured substance; it is formed at the ordinary temperature on exposing metallic lithium to the air. Calcium nitride, Ca3N2, is a greyish-yellow substance; and magnesium nitride, Mg3N2, a yellow powder. Combination takes place readily with great evolution of heat when a mixture of dry lime with magnesium powder is heated to dull redness in a current of nitrogen; this affords a convenient method of separating nitrogen from the indifferent gases of the atmosphere, and preparing the latter in a state of purity. Boron nitride, BN, is a white amorphous powder; it can also be produced by heating to redness a mixture of boron oxide with ammonium chloride, until excess of the chloride has volatilised. Besides these compounds, which may be regarded as the analogues of the oxides, a series of nitrides is known, which correspond in formula with hydrazoic acid, HN3. The starting-point for these compounds is sodamide, NaNH2 (see below). This compound is heated to 300° in a series of small flasks in a current of nitrous oxide, when the following reaction takes place: 2NaNH2 + N2O = NaN3 + NaOH + NH3. The change which takes place is more obvious when the reaction is conceived to occur in two stages: NaNH2 + N2O = NaN3 + H2O; and NaNH2 + H2O = NaOH + NH3. The product of the reaction is dissolved in water, acidified with dilute sulphuric acid, and distilled: NaN3.Aq + H2SO4.Aq = HN3.Aq + NaHSO4.Aq. The distillate, which is a dilute solution of hydrazoic acid, has a peculiar odour, and if its vapour be inhaled, fainting may result; it is necessary to take precautions to distil it in a good draught. The solution has an acid reaction; salts may be prepared by neutralisation with the hydroxides or carbonates of the metals. The ions, N3-, are colourless, and the salts of colourless ions are themselves white. Those of lithium, sodium, potassium, magnesium, calcium, strontium, barium, and zinc are crystalline; their formulas are M'N3 and M''(N3)2 respectively. Silver hydrazoate, AgN3, closely resembles the chloride in appearance and in insolubility; it is, however, dangerously easy to explode, and should be prepared dry only in minute quantity, and treated with the utmost precaution. Titration with a deci-normal solution of silver nitrate affords a convenient method of determining the strength of a solution of hydrazoic acid, or of analysing the hydrazoates; it is easy to recognise the point when all hydrazoic acid has been removed as the insoluble silver salt.
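The closing titration lends itself to a worked example. A minimal sketch of the arithmetic, assuming the 1:1 reaction of silver nitrate with hydrazoic acid and invented volumes for illustration:

```python
# HN3 + AgNO3 -> AgN3 (insoluble) + HNO3, a 1:1 reaction, so the moles of
# HN3 in the sample equal the moles of AgNO3 delivered at the end point.
c_agno3  = 0.1        # mol/L, the "deci-normal" silver nitrate
v_agno3  = 18.6e-3    # L of titrant used (assumed burette reading)
v_sample = 25.0e-3    # L of hydrazoic acid solution titrated (assumed)

moles_hn3 = c_agno3 * v_agno3
c_hn3 = moles_hn3 / v_sample
print(f"HN3 concentration:  {c_hn3:.4f} mol/L")
print(f"HN3 mass in sample: {moles_hn3 * 43.03:.4f} g")   # molar mass of HN3 ~43 g/mol
```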
Analyzing and interpreting data Slide presentation showing how to scale an axis and make a dot plot. - 6, 7, 8, 9+ - Life Science - Growth, Development and Reproduction of Organisms - Describing variability, graphing data - Dot plot, Frequency plot Scale, proportion, and quantity Recognize a statistical question as one that anticipates variability in the data related to the question and accounts for it in the answers. Understand that a set of data collected to answer a statistical question has a distribution which can be described by its center, spread, and overall shape. Display numerical data in plots on a number line, including dot plots, histograms, and box plots. Using mathematics and computational thinking Obtaining, evaluating, and communicating information
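As a companion to the slide content, here is a minimal sketch (not part of the original resource) of how such a dot plot could be drawn with matplotlib; the plant-height measurements, labels, and title are invented for illustration.

```python
# Illustrative dot plot: each data value is shown as a stack of dots above a
# scaled number line, the display the slides describe. Data are made up.
import matplotlib.pyplot as plt
from collections import Counter

plant_heights_cm = [12, 14, 14, 15, 15, 15, 16, 16, 17, 19, 19, 20]

counts = Counter(plant_heights_cm)
xs, ys = [], []
for value, count in counts.items():
    for level in range(1, count + 1):   # stack repeated values vertically
        xs.append(value)
        ys.append(level)

plt.scatter(xs, ys)
plt.yticks([])                          # the vertical axis is only for stacking
plt.xticks(range(min(plant_heights_cm), max(plant_heights_cm) + 1))  # scale the axis
plt.xlabel("Plant height (cm)")
plt.title("Dot plot of plant heights")
plt.show()
```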
Artificial nitrogen products are often petrochemical in origin. The application of artificial nitrogen, to grassland and to other agricultural and horticultural crops, has been a method of ‘boosting’ plant growth and productivity, in modern times. The benefits in terms of weight of product are undeniable. Equally undeniable is the concept that quantity is not necessarily quality. Traditional methods of cropping restrict productivity in order to enhance quality, in many spheres of food and commodity production. There must be a reason for this. Artificial nitrogen treatment of crops does several things to our food and our environment: - It raises levels of potentially toxic nitrates, nitrites, amides, amines and other non-protein nitrogenous (NPN) compounds in the plant. - It raises sugars (non-structural carbohydrates – NSC) in the plant. - It decreases fibre in the plant. These three factors can increase the risk of laminitis in horses, ponies and donkeys. - It raises Potassium and lowers especially Magnesium (this is very dangerous for ruminants that eat grass treated in this way, making them prone to ‘grass staggers'). - It lowers the levels of trace minerals in the ground, thus impoverishing the soil. The crops are also short in these valuable nutrients. - It raises the level of nitrogen in drainage water (ditch water), thus increasing harmful algal overgrowth in rivers and streams. - It raises nitrates etc. in our drinking water. - It damages vital soil microflora. - It reduces plant diversity (biodiversity) in pasture land. - It is a factor in decreasing habitat quality for many insects and therefore birds such as skylarks and lapwings, who depend upon such species and who have now virtually disappeared from our pasture land and meadows. These effects are highly undesirable for horses that eat grass or hay from ground treated in this way. These adverse effects may be sufficient to negate the benefit of herbal medicine, homeopathy etc. These effects are also undesirable to humans who consume food (fruit, vegetables and cereals) grown in this way. Poor nutrition results and health inevitably suffers. Because these effects are so serious to horses, we recommend that horses should not be given access to grass, grass products and grass conservation products that have been treated in this way. Traditional methods of returning nutrients to the soil are advocated. For this reason, the AVMC offers advice on pasture management to clients. More traditional methods and, especially, organic methods, are infinitely preferable, provide better nutrition and do not adversely affect the quantity of grass ‘keep' for horses. Soil health, plant health, animal health and human health are inextricably linked.
Teaching sentence variety This section of teaching for progression in writing introduces a set of teaching approaches, with both reading and writing as their contexts, to help build, develop and secure pupils' understanding and use of sentence variety in their writing. Pupils will develop an increasingly mature understanding of a wide range of sentence structures and lengths, and will use them to create variety in their own writing. This is expressed in more detail in the substrand of the Framework. Vary sentence length and structure in order to provide appropriate detail, make clear the relationship between ideas, and create effects according to task, purpose and reader. Draw on their knowledge of a wide variety of sentence lengths and structures, including complex sentences, and apply it to their own writing to clarify ideas and create a range of effects according to task, purpose and reader. Deploy appropriately in their own writing the range of sentence structures used by writers to enhance and emphasise meaning, aid cohesion and create a wide range of effects according to task, purpose and reader. Select from the wide range of sentence structures used by writers, and shape, craft and adapt them in their own writing for particular effect with clear consideration given to the variety of audiences, tasks and purposes. Shape, craft and adapt sentence structures, selecting from the wide repertoire of styles and types deployed by writers, and apply them accurately, creatively and appropriately to achieve impact and effect. Shape sentences in apt and accurate ways that demonstrate either economy of expression, elaborate development, or both, in order to create original and sophisticated effects and impact. What aspects could be taught? To become fully effective as writers, pupils need to know: - correct subject terminology – subordinate clauses, finite and non-finite verbs, and so on - how to vary sentence structures, using a variety of simple, compound and complex sentences - how to extend their use and control of complex sentences by recognising, exploring and using subordinate clauses in a variety of positions within the sentence - how to use connectives with attention to meaning, not just as writing prompts - the interrelationship between sentence structure and punctuation, and how punctuation is used to clarify and enhance meaning - the categories of sentences – statements, questions, commands and exclamations - how to avoid over-coordination in sentences - how to recognise that simple sentences are not always short or unsophisticated and are a vital tool for manipulating the reader.
By Olta Totoni
Western democracies are essential reference points for countries and states whose democratic institutions are still young. Countries like the United Kingdom and the United States of America, with their consolidated democracies, stand apart, and they also intertwine the old with the modern: the United States of America is a nation of immigrants, a mosaic in which every piece represents a particular state, while the United Kingdom is a monarchy in which the monarch gives royal assent to economic and political decisions. The influence of the United Kingdom has been apparent throughout the history of the United States. From the colonial period until now the situation has changed; they are two separate countries that collaborate freely with each other and offer equal opportunities to their citizens.
Why is the United States of America considered a point of reference for western democracies? This federal state offers equal opportunity and is characterized by freedom. Americans learn about their political system at an early age: children learn how to vote, how to elect committees, how to run a campaign, and so on. The voting process, the notion of responsibility, and the ideas of minority and majority are familiar to all American citizens. At the heart of American democracy is transparency, and it is manifested in several ways. First, American citizens have the right to take part in public meetings and to express their thoughts freely; they have freedom of speech. Second, American citizens have the right to express their point of view by sending letters or e-mails, and freedom-of-information law has helped them use public documents to better understand current situations, historical facts and so forth. The face of America is changing quickly. What in a united Europe may be seen as theory is, in the United States, considered practice. Power in the United States is carefully organized so that no branch can gain ground and misuse its role (checks and balances).
On the other hand, why is the United Kingdom considered a point of reference for western democracies? The United Kingdom is a parliamentary democracy and a constitutional monarchy. The Sovereign is the head of state. There is no single written constitution, in contrast with the United States, where the constitution is the "supreme law of the land". In her role as monarch, the Queen is the head of the executive power and plays an integral role in legislation. Parliament has deep roots in the history of the United Kingdom. Medieval kings had to provide money for public and private expenditure, and if extra resources were needed, the sovereign sought help from the barons in meetings held several times a year. During the thirteenth century the English kings did not have enough income for their expenditure, so they summoned the barons and representatives of the villages and towns in order to levy additional taxes. This council included representatives gathered on the basis of their titles, and it was later divided into two parts, the House of Commons and the House of Lords. The titles of the Crown derive partly from statute and partly from descent.
Titles have been inherited from one generation to the next, beginning with the sons of the Sovereign who inherit the throne. When a daughter reigns she becomes queen and has the same rights as a king.
What is the role of the monarchy in the United Kingdom? It is very important. Over the years the British people have debated whether to keep the monarchy, but they have consistently chosen to retain it as a form of government. The role of the Queen is also very important: she has significant duties connected with the House of Commons and the House of Lords, and her role is preserved through tradition and inheritance. Public access to the work of the two Houses is an essential factor. Their proceedings are broadcast on television, on the radio and on the internet in recorded form, and the official reports are published daily on the relevant websites. This shows how developed these democracies are. The model the United States offers is quite different, because the government of the United States is one "of the people, by the people, for the people ...".
The United Kingdom and the United States of America are two democracies that, despite sharing values and principles, show visible differences, and these are not only geographic. First, the United States of America is a presidential democracy, whereas the United Kingdom is led by a Prime Minister. Throughout its history the United States has been governed by many different presidents, a number of them former military men, elected through the free will of the people and the most democratic of principles. The United Kingdom follows a tradition of kings and queens who, through wars and battles, have created a strong nation with deep roots and an important position in the international arena. The two countries maintain diplomatic, cultural, economic and political relations with each other. Second, the governmental structure of the United Kingdom incorporates many traditions, whereas the American system is more flexible. For example, the term of the president of the United States is fixed at four years, while in the United Kingdom the Prime Minister can call a general election at any time within five years. Third, the United States is the largest economy in the world and is locked in an economic contest with China over who will be the next superpower; the United Kingdom is the fifth-largest economy in the world but remains one of the most industrialized countries. Fourth, if the President resigns, the Vice President takes office, whereas in the United Kingdom the Queen can appoint the person she considers appropriate for the position; Winston Churchill, for example, was appointed by the monarch during the Second World War. These states offer some of the best models of democracy in the world. Despite their different systems of government (the United States a federal state and the UK a monarchy), these two democracies are a good example of the freedom and rights of the people.
Olta Totoni is a British and American Studies expert and researcher at the University of Tirana, Albania.
Food Waste, Methane and Climate Change
By Climate Central
As Thanksgiving is a time to be grateful and to celebrate, we can sometimes overdo it. Frequently, the food from Thanksgiving dinner doesn't all get eaten, and while that food often makes for good leftovers, some inevitably gets thrown away. The USDA estimates 35 percent of turkey meat cooked at Thanksgiving gets wasted. Food waste isn't limited to Thanksgiving. Amazingly, up to 40 percent of all food produced in the U.S. intended for consumption is not eaten, which equates to about 20 pounds of food per person each month. Food that gets thrown out ends up in landfills, where it gradually rots and releases methane, a strong greenhouse gas. Globally, if food waste were its own country, it would be the third-largest greenhouse gas emitter, behind China and the U.S. The decay of food waste in landfills is not the only source of greenhouse gases: the resources needed to produce the food also have a carbon footprint. Globally, producing the food that is ultimately wasted accounts for about 3.3 billion tons of CO2. To cut back on this impact, check whether you can donate some of your leftovers to local shelters, send leftovers home with your guests, or freeze them; frozen leftovers can keep for up to a year.
Declaration of Independence (1776)
On July 4, 1776, the Continental Congress adopted Thomas Jefferson's amended draft of the Declaration of Independence from Great Britain. The initial effort by the framers of the American system of government was not to separate from Great Britain but to be accepted by Great Britain as an equal partner, to be given the rights and responsibilities of full British citizenship. Early efforts to secure such acceptance met with contempt and refusal. By 1776 the Founding Fathers were forced to make a choice: subservience or revolution. Concluding that independence was their only option, they felt, as men of the Enlightenment, in the Age of Reason, that they were bound by honor to declare to the world the reasons for their radical act. The Continental Congress thus selected a committee of five men to draft a statement calling for independence. That committee (consisting of Thomas Jefferson, John Adams, Benjamin Franklin, Robert R. Livingston, and Roger Sherman) surprisingly chose the...
The Declaration of Independence (National Archives and Records Administration)
Wounds heal much faster when scientists block a specific protein in the skin, a new study has found. Although it's not clear exactly how the treatment works, the research could have broad significance not only for skin injuries such as deep cuts and burns, but also for internal injuries ranging from spinal cord lesions to corneal abrasions. Most scientists believe that wounds need to trigger a powerful inflammatory response from the immune system in order to heal. Developmental biologist David Becker at University College London wondered about the role that tiny junctions between cells play in this immune response. Proteins called connexins populate these gap junctions. Becker and his colleagues in London and at the University of Auckland in New Zealand focused on connexin 43, which congregates at an injured site within hours; it's thought to help blood vessels dilate, boosting the number of white blood cells that can travel to the site. These cells launch the body's inflammatory response to injury. To test whether connexin 43 was a big player in wound repair, Becker's group formulated a special gel containing short stretches of RNA. This "antisense" RNA was the genetic mirror image of the connexin 43 gene and designed to blunt its activity. The researchers applied the gel to skin wounds on the backs of mice. To their surprise, having less connexin 43 seemed like a good thing: The treated animals healed 50% faster than control mice. The researchers also found that, intriguingly, the gel cut by 20% the number of white blood cells that zoomed up to a wound, the team reports in the 30 September issue of Current Biology. This suggests that inflammation doesn't necessarily speed wound healing. And once the gel diffused through skin and into the bloodstream, the researchers found, the antisense quickly broke down, suggesting that it wouldn't impact other tissues. "It is certainly new," says Harvard University cell biologist Daniel Goodenough. Although some scientists have experimented with connexins and wounds, none has shown that disrupting one speeds up healing, he adds. Becker's team is now applying the gel to other body tissues; they've already found that injecting it into spinal cord and brain lesions, or into an injured cornea, also facilitates the normal healing process.
Understand and apply theorems about circles 2. Identify and describe relationships among inscribed angles, radii, and chords. Include the relationship between central, inscribed, and circumscribed angles; inscribed angles on a diameter are right angles; the radius of a circle is perpendicular to the tangent where the radius intersects the circle. Note: We used the Math Nspired activity Circles – Angles and Arcs as a guide for this lesson. We started off our lesson on Angles and Arcs just a little differently than before. I always have students explore – to pay attention to what is changing and what is staying the same in a geometric figure. But I am trying to learn from many of you out there, and so I asked students to estimate first. We say that ∠ABC is inscribed in the circle. What is its measure? It turns out that just over 50% of the students correctly estimated the measure. I showed the students how others answered, but I didn’t show them the correct answer. Next we played with our dynamic geometry software to find out what students noticed about inscribed angles and central angles that intercept the same arc. What happens when a right angle is inscribed in a circle? What is the relationship between the central angle and its intercepted arc? What is the relationship between the inscribed angle and its intercepted arc? What is the relationship for inscribed angles and central angles that intercept the same arc? Students made observations. Before we formalized their observations, I sent another Quick Poll. We were up to just over 75% correct. After formalizing the observations together, students worked through a few exercises in their groups. Our next major exploration was to determine the relationship for opposite angles in a cyclic quadrilateral. Students explored by themselves, and then I made someone the Live Presenter. What did you notice? B.K. had noticed that the opposite angles had a sum of 180˚. Can we make sense of why that happens? What’s true about the opposite angles of a cyclic quadrilateral? -They’re inscribed angles. -Their intercepted arcs make the whole circle. Why is that significant? The intercepted arcs add to 360˚. The two inscribed angles are half that sum, 180˚. The opposite angles of a cyclic quadrilateral are supplementary. And a Quick Poll to assess student understanding: And the results. 65% have it correct. What happened to those who got 95˚? What happened to those who got 190˚? And so the journey continues … as we are learning to answer the question being asked, and not just giving the first calculation or even the second calculation that we get when we make sense of the given information.
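For anyone who also likes to check these relationships numerically, here is a small sketch (not part of the original lesson or the TI-Nspire activity) that verifies both observations with points placed on a unit circle; the angle positions chosen are arbitrary.

```python
# Quick numerical check of the two relationships explored above: an inscribed angle
# is half the central angle on the same arc, and opposite angles of a cyclic
# quadrilateral sum to 180 degrees. Points sit on a unit circle at made-up angles.
import math

def point(theta_deg):
    t = math.radians(theta_deg)
    return (math.cos(t), math.sin(t))

def angle_at(vertex, p, q):
    """Angle (in degrees) at `vertex` formed by the rays toward p and q."""
    v1 = (p[0] - vertex[0], p[1] - vertex[1])
    v2 = (q[0] - vertex[0], q[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

A, C = point(20), point(120)      # endpoints of the intercepted arc
B = point(250)                    # vertex of the inscribed angle, on the major arc
center = (0.0, 0.0)

inscribed = angle_at(B, A, C)
central = angle_at(center, A, C)
print(f"inscribed = {inscribed:.1f}, central = {central:.1f}")  # central is twice inscribed

# Cyclic quadrilateral PQRS: opposite angles are supplementary
P, Q, R, S = point(10), point(80), point(200), point(300)
print(f"{angle_at(Q, P, R) + angle_at(S, R, P):.1f}")           # ~180.0
```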
During their primary school education, English-speaking pupils have already started to study Spanish as a foreign language, and they are introduced to the study of different subjects from the Spanish curriculum, which they continue during Key Stage 3 (Years 7 to 9). It is compulsory for all our students to learn Spanish.
All students take GCSE Spanish. There is always a group of Year 11 students who prepare for this examination, but there is also the option of taking it at the end of Year 10 for those pupils who have been in the school longer. Spanish follows the AQA specification. The course comprises four different skill areas: Listening, Speaking, Reading and Writing, each of them worth 25% of the final grade. It is up to the member of staff in charge of each group whether to enter the students for the Higher or Foundation Tier, but it is unusual for them to take Foundation. However, this option is always available for those joining the school during Year 11 or the Sixth Form.
This course will prepare our students to understand and communicate in spoken and written Spanish in a variety of styles. A large part of the course is the in-depth study of the culture and political situation of Spanish-speaking countries. Spanish follows the Edexcel specification. To obtain the A Level qualification students must pass four different units. Year 12 students usually prepare for Units 1 and 2 (Advanced Subsidiary GCE) and then complete the full A Level GCE in Spanish with Units 3 and 4.
Social Sciences bring together all disciplines whose object of study is linked to the activities and behaviour of human beings. They analyse material and symbolic aspects of society. These areas have become an essential part of education in any country. During the cycle of secondary education, students study the following content:
YEAR 8: Physical Geography, Human Geography, Prehistory, Ancient History.
YEAR 9: Medieval History, Modern History.
YEAR 10: Physical Geography, Human Geography.
YEAR 11: Contemporary World History, History of Spain.
The Department of Social Sciences runs ongoing non-formal education activities through which students acquire new knowledge. Visits to sites, monuments and museums, and professional activities such as an archaeology workshop, are of essential importance for practical learning within the Social Sciences.
Monkey Business – Rounding to Nearest Ten I’ve been working on a comprehensive Notebook lesson called Monkey Business that helps students learn and review basic procedures for rounding to the nearest 10 and the nearest 100. In this lesson, I created a fun review activity called Banana Rama where the goal is to collect 10 bananas as fast as possible by touching on the correct answer to a rounding problem. I thought it would be useful for teachers if I put the Banana Rama activity into a separate Notebook file and place it in the Free Resource section of The Notebook Gallery website. On the last page of the file, I also included instructions for how to create new versions of the activity that use different numbers so teachers can continue to reinforce rounding to 10 without using the same numbers over and over again. The complete Monkey Business lesson will be available on The Notebook Gallery resource site in the next day or two. The 80+ page lesson introduces the concept of rounding, provides a fun method for remembering the rules for rounding, and reinforces the concept of rounding through examples and assessment activities. Sharing Is Caring! The Notebook Gallery is not associated with Smart Technologies.
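The rounding rule the activity reinforces can also be written out as a short function. The sketch below is not part of the Notebook lesson; the function name and test values are purely illustrative.

```python
# A small helper showing the rule the activity practices: look at the digit to the
# right of the target place and round up when it is 5 or more, down otherwise.
# (Note this differs from Python's built-in round(), which uses banker's rounding.)

def round_to(n: int, base: int) -> int:
    """Round n to the nearest multiple of base (e.g. base=10 or base=100)."""
    remainder = n % base
    if remainder >= base / 2:
        return n + (base - remainder)   # round up
    return n - remainder                # round down

print(round_to(47, 10))     # 50
print(round_to(43, 10))     # 40
print(round_to(45, 10))     # 50  (a 5 rounds up)
print(round_to(1449, 100))  # 1400
```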
Conjunctions are words which join together two words, two phrases, or two sentences. You can combine entire sentences using conjunctions to make them more compact. In the lesson below you will learn about conjunctions and the kinds of conjunctions.
- Ali and Raza are good writers. (It is a short way of saying, Ali is a good writer and Raza is a good writer.)
- Ali and Wali are good boxers. (It is a short way of saying, Ali is a good boxer and Wali is a good boxer.)
Conjunctions must be carefully distinguished from relative pronouns, relative adverbs and prepositions, which are also connecting words:
- This is the house that Ali bought. (Relative pronoun)
- This is the place where he was murdered. (Relative adverb)
- He sat beside Milad. (Preposition)
- Take this and give that. (Conjunction)
Kinds of Conjunctions
There are three kinds of conjunctions, which join different kinds of grammatical structures:
1. Coordinating conjunctions
2. Subordinating conjunctions
3. Correlative conjunctions
Each of the three kinds serves a unique purpose.
1. Coordinating Conjunctions: Coordinating conjunctions connect two words or groups of words with similar values. They may connect two words, two phrases, two independent clauses or two dependent clauses. In each of the following sentences the coordinating conjunction "and" connects equal words or groups of words:
- John and Reggie stayed up all night practicing their guitars. (Connects two words)
- They sent the items over the river and through the woods. (Connects two phrases)
- Several managers sat with their backs to us, and I could almost hear them snickering at us lowly workers. (Connects two clauses)
There are only seven coordinating conjunctions in the English language, and they are often remembered by using the acronym "FANBOYS": for, and, nor, but, or, yet and so.
- You can study hard for the exam, or you can fail.
- That is not what I meant to say, nor should you interpret my statement that way.
- John plays basketball well, yet his favorite sport is hockey.
Coordinating conjunctions are divided into four kinds according to their function in a sentence.
A) Cumulative / Copulative Conjunctions: They add one statement to another.
- He came here, and I left there.
B) Adversative Conjunctions: They express opposition or contrast between two statements.
- He was slow, but he was sure.
C) Disjunctive / Alternative Conjunctions: They express a choice between two alternatives.
- She must clean, or she must leave.
D) Illative Conjunctions: They express an inference.
- All precautions must have been neglected, for the plague spread rapidly.
2. Subordinating Conjunctions: Subordinating conjunctions connect two groups of words by making one into a subordinate clause. The subordinate clause acts as one large adverb, answering the questions "when" or "why" about the main clause, or imposing conditions or opposition on it. Here are some examples of subordinating conjunctions turning a clause into an adverbial subordinate clause in different ways:
- I can go shopping after I finish studying for my exam. (when)
- Because the knight was young, he decided to take a walk. (why)
- I'll give you a dime if you give me a dollar. (condition)
- Although he never figured out why, Hanna winked on her way out the door. (opposition)
Note: The subordinating conjunction does not always come between the two clauses it connects. Often, it comes at the beginning of the first clause.
3. Correlative Conjunctions: Correlative conjunctions are always used in pairs. They are similar to coordinating conjunctions because they join sentence elements that are similar in importance. The following are some examples of correlative conjunctions:
- Both John and Max made the football team this year. (Both, and)
- Neither John nor Max made the football team this year. (Neither, nor)
- Not only did John make the football team, but he also became one of the strongest players. (Not only, but also)
- Either Mom or Dad will pick you up. (Either, or)
When none of the elements in a compound is a metal, no atoms in the compound have an ionization energy low enough for electron loss to be likely. In such a case, covalence prevails. As a general rule, covalent bonds are formed between elements lying toward the right in the periodic table (i.e., the nonmetals). Molecules of identical atoms, such as H2 and buckminsterfullerene (C60), are also held together by covalent bonds.
Lewis formulation of a covalent bond
In Lewis terms a covalent bond is a shared electron pair. The bond between a hydrogen atom and a chlorine atom in hydrogen chloride is formulated as follows: In a Lewis structure of a covalent compound, the shared electron pair between the hydrogen and chlorine atoms is represented by a line. The electron pair is called a bonding pair; the three other pairs of electrons on the chlorine atom are called lone pairs and play no direct role in holding the two atoms together. Each atom in the hydrogen chloride molecule attains a closed-shell octet of electrons by sharing and hence achieves a maximum lowering of energy. In general, an incomplete shell means that some attracting power of a nucleus may be wasted, and adding electrons beyond a closed shell would entail the energetic disadvantage of beginning the next shell of the atom concerned. Lewis's octet rule is again applicable and is seen to represent the extreme means of achieving lower energy rather than being a goal in itself.
A covalent bond forms if the bonded atoms have a lower total energy than the widely separated atoms. The simplest interpretation of the decrease in energy that occurs when electrons are shared is that both electrons lie between two attracting centres (the nuclei of the two atoms linked by the bond) and hence lie lower in energy than when they experience the attraction of a single centre. This explanation, however, requires considerable modification to capture the full truth about bonding, and it will be discussed further below when bonding is considered in terms of quantum mechanics. Lewis structures of more complex molecules can be constructed quite simply by extending the process that has been described for hydrogen chloride. First, the valence electrons that are available for bonding are counted (2 × 1 + 6 = 8 in H2O, for example, and 4 + 4 × 7 = 32 in carbon tetrachloride, CCl4), and the chemical symbols for the elements are placed in the arrangement that reflects which are neighbours: Next, one bonding pair is added between each linked pair of atoms: The remaining electrons are then added to the atoms in such a way that each atom has a share in an octet of electrons (this is the octet-rule part of the procedure): Finally, each bonding pair is represented by a dash: (Note that Lewis structures do not necessarily show the actual shape of the molecule, only the topological pattern of their bonds.)
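The electron-counting step described above is easy to mirror in a few lines of code. The sketch below is illustrative only: it hard-codes valence-electron counts for the handful of elements mentioned in this passage and takes formulas as (element, count) pairs rather than parsing formula strings.

```python
# Count the valence electrons available for a Lewis structure, the first step
# described in the text. Only the elements mentioned in this passage are included.

VALENCE = {"H": 1, "B": 3, "C": 4, "N": 5, "O": 6, "F": 7, "Cl": 7, "S": 6}

def valence_electrons(formula):
    """Total valence electrons for a list of (element, count) pairs."""
    return sum(VALENCE[element] * count for element, count in formula)

print(valence_electrons([("H", 2), ("O", 1)]))    # H2O  -> 8  (2*1 + 6)
print(valence_electrons([("C", 1), ("Cl", 4)]))   # CCl4 -> 32 (4 + 4*7)
print(valence_electrons([("S", 1), ("F", 6)]))    # SF6  -> 48 (6 + 6*7)
print(valence_electrons([("B", 2), ("H", 6)]))    # B2H6 -> 12 (2*3 + 6*1)
```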
In some older formulations of Lewis structures, a distinction was made between bonds formed by electrons that have been supplied by both atoms (as in H−Cl, where one shared electron can be regarded as supplied by the hydrogen atom and the other by the chlorine atom) and covalent bonds formed when both electrons can be regarded as supplied by one atom, as in the formation of OH− from O2− and H+. Such a bond was called a coordinate covalent bond or a dative bond and symbolized O → H−. However, the difficulties encountered in the attempt to keep track of the origin of bonding electrons and the suggestion that a coordinate covalent bond differs somehow from a covalent bond (it does not) have led to this usage falling into disfavour.
Advanced aspects of Lewis structures
The Lewis structures illustrated so far have been selected for their simplicity. A number of elaborations are given below. First, an atom may complete its octet by sharing more than one pair of electrons with a bonded neighbour. Two shared pairs of electrons, represented by a double dash (=), form a double bond. Double bonds are found in numerous compounds, including carbon dioxide: Three shared pairs of electrons are represented by a triple dash (≡) and form a triple bond. Triple bonds are found in, for example, carbon monoxide, nitrogen molecules, and acetylene, shown respectively as: A double bond is stronger than a single bond, and a triple bond is stronger than a double bond. However, a double bond is not necessarily twice as strong as a single bond, nor is a triple bond necessarily three times as strong. Quadruple bonds, which contain four shared pairs of electrons, are rare but have been identified in some compounds in which two metal atoms are bonded directly together. There is sometimes an ambiguity in the location of double bonds. This ambiguity is illustrated by the Lewis structure for ozone (O3). The following are two possible structures: In such cases, the actual Lewis structure is regarded as a blend of these contributions and is written: The blending together of these structures is actually a quantum mechanical phenomenon called resonance, which will be considered in more detail below. At this stage, resonance can be regarded as a blending process that spreads double-bond character evenly over the atoms that participate in it. In ozone, for instance, each oxygen-oxygen bond is rendered equivalent by resonance, and each one has a mixture of single-bond and double-bond character (as indicated by its length and strength). Lewis structures and the octet rule jointly offer a succinct indication of the type of bonding that occurs in molecules and show the pattern of single and multiple bonds between the atoms. There are many compounds, however, that do not conform to the octet rule. The most common exceptions to the octet rule are the so-called hypervalent compounds. These are species in which there are more atoms attached to a central atom than can be accommodated by an octet of electrons. An example is sulfur hexafluoride, SF6, for which writing a Lewis structure with six S−F bonds requires that at least 12 electrons be present around the sulfur atom: (Only the bonding electrons are shown here.) In Lewis terms, hypervalence requires the expansion of the octet to 10, 12, and even in some cases 16 electrons. Hypervalent compounds are very common and in general are no less stable than compounds that conform to the octet rule.
The existence of hypervalent compounds would appear to deal a severe blow to the validity of the octet rule and Lewis’s approach to covalent bonding if the expansion of the octet could not be rationalized or its occurrence predicted. Fortunately, it can be rationalized, and the occurrence of hypervalence can be anticipated. In simple terms, experience has shown that hypervalence is rare in periods 1 and 2 of the periodic table (through neon) but is common in and after period 3. Thus, the octet rule can be used with confidence for carbon, nitrogen, oxygen, and fluorine, but hypervalence must be anticipated thereafter. The conventional explanation of this distinction takes note of the fact that, in period-3 elements, the valence shell has n = 3, and this is the first shell in which d orbitals are available. (As noted above, these orbitals are occupied after the 4s orbitals have been filled and account for the occurrence of the transition metals in period 4.) It is therefore argued that atoms of this and subsequent periods can utilize the empty d orbitals to accommodate electrons beyond an octet and hence permit the formation of hypervalent species. In chemistry, however, it is important not to allow mere correlations to masquerade as explanations. Although it is true that d orbitals are energetically accessible in elements that display hypervalence, it does not follow that they are responsible for it. Indeed, quantum mechanical theories of the chemical bond do not need to invoke d-orbital involvement. These theories suggest that hypervalence is probably no more than a consequence of the greater radii of the atoms of period-3 elements compared with those of period 2, with the result that a central atom can pack more atoms around itself. Thus, hypervalence is more a steric (geometric) problem than an outcome of d-orbital availability. How six atoms can be bonded to a central atom by fewer than six pairs of electrons is discussed below. Less common than hypervalent compounds, but by no means rare, are species in which an atom does not achieve an octet of electrons. Such compounds are called incomplete-octet compounds. An example is the compound boron trifluoride, BF3, which is used as an industrial catalyst. The boron (B) atom supplies three valence electrons, and a representation of the compound’s structure is: The boron atom has a share in only six valence electrons. It is possible to write Lewis structures that do satisfy the octet rule. However, whereas in the incomplete octet structure the fluorine atoms have three lone pairs, in these resonance structures one fluorine atom has only two lone pairs, so it has partly surrendered an electron to the boron atom. This is energetically disadvantageous for such an electronegative element as fluorine (which is in fact the most electronegative element), and the three octet structures turn out to have a higher energy than the incomplete-octet structure. The latter is therefore a better representation of the actual structure of the molecule. Indeed, it is exactly because the BF3 molecule has an incomplete-octet structure that it is so widely employed as a catalyst, for it can use the vacancies in the valence shell of the boron atom to form bonds to other atoms and thereby facilitate certain chemical reactions. Another type of exception to the Lewis approach to bonding is the existence of compounds that possess too few electrons for a Lewis structure to be written. Such compounds are called electron-deficient compounds. 
A prime example of an electron-deficient compound is diborane, B2H6. This compound requires at least seven bonds to link its eight atoms together, but it has only 2 × 3 + 6 × 1 = 12 valence electrons, which is enough to form only six covalent bonds. Once again, it appears that, as in hypervalent compounds, the existence of electron-deficient compounds signifies that a pair of electrons can bond together more than two atoms. The discussion of the quantum mechanical theory of bonding below shows that this is indeed the case. A number of exceptions to Lewis’s theory of bonding have been catalogued here. It has further deficiencies. For example, the theory is not quantitative and gives no clue to how the strengths of bonds or their lengths can be assessed. In the form in which it has been presented, it also fails to suggest the shapes of molecules. Furthermore, the theory offers no justification for regarding an electron pair as the central feature of a covalent bond. Indeed, there are species that possess bonds that rely on the presence of a single electron. (The one-electron transient species H2+ is an example.) Nevertheless, in spite of these difficulties, Lewis’s approach to bonding has proved exceptionally useful. It predicts when the octet rule is likely to be valid and when hypervalence can be anticipated, and the occurrence of multiple bonds and the presence of lone pairs of electrons correlate with the chemical properties of a wide variety of species. Lewis’s approach is still widely used as a rule of thumb for assessing the structures and properties of covalent species, and modern quantum mechanical theories echo its general content. The following sections discuss how the limitations of Lewis’s approach can be overcome, first by extending the theory to account for molecular shapes and then by developing more thorough quantum mechanical theories of the chemical bond.
Updated: August 8, 2017
Crown gall can be recognized readily by the formation of tumors or galls on tree roots and crowns. Occasionally, the galls can be seen above ground on trunks or branches. Young galls are light in color and become dark and hard with age. When galls are numerous, or if they are located on major roots or the crown, they might disrupt the flow of water and nutrients. These trees show reduced growth, an unhealthy appearance, and possibly nutritional deficiency symptoms. The bacteria causing crown gall are distributed widely in numerous soils and can attack many different kinds of plants. Soil might become contaminated if planted with infected nursery stock. Bacteria entering the plant must do so through a wound. Wounds commonly are made during digging and tree-planting operations, by tillage equipment, and by injury from root-feeding insects and nematodes. Secondary galls can develop a considerable distance from the initial infection. These can be formed in the absence of the crown gall bacteria, apparently due to a tumor-inducing substance produced at the site of the original infection. Avoid planting infected nursery stock or wounding trees at the time of planting.