On March 12, 1947, President Harry S. Truman rolled out his policy of Cold War containment in an address known today as the Truman Doctrine. Limited to countries under threat from Communism, the Truman Doctrine enabled military, economic, and political assistance. The policy was key to the early Cold War between the United States and the Soviet Union, as it specifically targeted Greece and Turkey as countries in which to forestall communism. Some argue that the Truman Doctrine failed, as both countries were subsequently led by right-wing regimes in the years following his speech. Nevertheless, the Truman Doctrine is seen today as a declaration of the Cold War. Indeed, his speech outlined the goals of U.S. Cold War foreign policy, highlighting how the United States needed to provide military and economic assistance to protect nations from communist aggression. Hand in hand with the Truman Doctrine was the Marshall Plan, which was signed into law on April 3, 1948. In a speech only three months after Truman’s address, George C. Marshall introduced the idea of economic support to Europe. Between 1948 and 1951, over $13 billion helped finance the economic recovery of Europe. One of the countries to receive the financial help was France. In 1948, the European Recovery and American Aid report, which was the preliminary study for the Marshall Plan, cited France as being at risk of a Communist takeover, describing the country, “…as the Communists well know, [as] the key to Western Europe. The issue is whether the French economy can be wrecked to the point where chaos and civil strife will permit the imposition of a Communist dictatorship or whether democracy can survive and be reinforced…” The stakes were perceived to be very high, as the French government included a substantial number of Communists. In 1945, parliament consisted of 586 seats, of which 365, or sixty-two percent, were held by the Parti Communiste Français (PCF). In fact, the greatest threat to French President Charles de Gaulle was the PCF, whose membership across France stood at half a million people and which was the largest French left-wing party in a number of national elections. In planning foreign policy, American diplomats sought to create symbols of freedom and democracy that would help remind European citizens of the redemptive power of democracy. George F. Kennan, the Foreign Service Officer who developed the policy of containment, was informed by his colleagues that the “symbols of nationalism in France and Italy and in Germany are essentially bankrupt and in danger of being captured by reactionary and neo-fascist political elements which we do not wish to support…” The Department of State launched several cultural initiatives, including the major art exhibition Advancing American Art (1946). This show was meant to tour Eastern Europe, the Caribbean and Latin America – and in October of 1946 it was split into two touring groups. The Eastern Hemisphere exhibition traveled from New York to Paris, and then to Prague, Czechoslovakia. The other section opened in Havana, Cuba in late 1946 and then traveled to Port-au-Prince, Haiti. Due to concerns about “un-Americanism,” however, the tour was stopped before the exhibition could show in Hungary, Poland, or Venezuela. Can you think of any major American monuments or works of art in Europe or elsewhere that may speak to the fears and ideas of the early Cold War?
"Insights into Math Concepts" is the math line developed by Conceptual Learning Materials. Rather than focusing on calculations, exercises solidify and extend concepts through series of short, thought-provoking exercises designed to build math intuition and number sense. Designed by a Montessori teacher and curriculum coordinator, exercises have a simple, uncluttered format. Most students find the steady progression through the various exercises rewarding and motivating. Inviting success and confidence, the material soon nudges them into concepts normally considered quite challenging. Concept links: Early Math Place Value Number Lines Sequencing Numeration Problem Solving Mixed Skills Money Time Geometry Fractions Decimals Percent Algebra/Pre-algebra Support Materials Discounted Sets Blackline of a Manipulative
In April, male lesser prairie-chickens compete for access to females via a lekking system. Males gather in a display ground known as a ‘lek’ and perform elaborate courtship displays before the female selects her mate (2). These displays involve inflating the red throat sacs, raising the neck and tail feathers and making short jumps into the air (2). Following mating, the male takes no further part in the care of his offspring. Females lay a clutch of 12 – 15 eggs in a nest hidden amongst the grass and incubate them for around one month (2). Lesser prairie-chickens feed on a variety of items including seeds, grain and insects in the warmer months (3). If the winter is particularly harsh, these birds will burrow into the snow to provide warmth (2).
If you have been a student of biology, you must be aware of horseshoe crabs. These creatures, hailed as ‘living fossils’, have inhabited the Earth for some 445 million years. The surprising thing is that they are not crabs, not even crustaceans, and are classified under chelicerates, a subphylum that also includes arachnids. So, how on earth did they manage to survive without going extinct? This question opened a whole new field of research into these creatures, and what was found was most astounding. Here are some highlights: ● Horseshoe crab blood is bright blue. This is because horseshoe crabs rely on hemocyanin to transport oxygen throughout the body; hemocyanin contains copper, which turns bluish-green when it oxidizes. In vertebrates, it is hemoglobin that transports oxygen, and the iron it contains makes blood red. ● The blood of a horseshoe crab has remarkable antibacterial properties. It has amebocytes, instead of white blood cells, to fight infection. ● So effective are amebocytes in fighting bacterial contamination that they take only 45 minutes to coagulate around bacterial contamination as dilute as one part in a trillion. The white blood cells of vertebrates, on the other hand, take about two days for the same feat. These properties of horseshoe crab blood are of immense medical interest. Around a million crabs are harvested each year for their blood. At present, the amebocytes in the blood are used for testing medical equipment and vaccines prior to use. For this, the amebocytes are introduced into a sample. If the cells coagulate, it indicates the presence of bacteria and the product is not yet ready for use. However, synthetic production of crab blood is still in its infancy. The declining crab population due to overharvesting in North America has prompted the authorities to focus on conservation. Now, instead of killing horseshoe crabs for their blood, harvesters draw only 30 percent of their blood and return them to the ocean to be harvested again. It has been found that a horseshoe crab can survive on 70 percent of its blood, but not less. One downside is a reduction in breeding among females that have been bled. Despite this, the temptation to bottle horseshoe crab blood, priced at $15,000 per litre, is too great to resist. Horseshoe crabs are rightly hailed as life savers. Had it not been for the miraculous properties of their blood, millions of people might have died from germ-infected injections.
A closely interbedded deposit of sand and mud, generated in environments where current flow varies considerably. The three main types of heterolithic bedding are called flaser, wavy, and lenticular. Flaser bedding is characterized by cross-laminated sands with thin mud drapes over foresets. Wavy bedding consists of rippled sands with continuous mud drapes over the ripples. Lenticular bedding consists of isolated lenses and ripples of sand set in a mud matrix. Heterolithic sediments can be deposited in storm-wave influenced shallow marine environments, river floodplains, tidal flats, or delta-front settings where fluctuating currents or sediment supply permit the deposition of both sand and mud.
Exploring Colors: Floating S Experiment This is a really cool experiment I saw at a workshop I attended. Materials: a pitcher of water, a printable recording sheet, and crayons or markers, along with a bowl, a small cup, and Skittles candy for each child. In advance, gather all materials. Place at least 3 Skittles in each child’s cup. I gave each child about 6, so they could eat a few, but they will need 3 for the experiment. Find a stable surface for the experiment (the floor, or a table that does not shake). Print and copy the Recording Sheets for each child. I gave the children a bowl, a small cup of water, and a small cup of Skittles candy. The children were instructed to pour the water into the bowl (you only need enough water to cover the Skittles). Then they chose three different colors of Skittles to place in the bowl at the edges, spaced out so they weren’t touching each other (I let them eat the rest). The children observed what happened to the colors (the colors will spread out and eventually blend with other colors, and the three S’s float to the top). I recorded their verbal observations on paper and the children recorded their observations by drawing what they saw in the bowl (see example below). An important note: this experiment only works well if the bowl sits on a very stable surface, such as a table that does not shake or the floor. Make sure the children understand not to touch or move the bowl. Be sure to place the Skittles with the S side up.
"The use of homing pigeons to carry messages is as old as Solomon, and the ancient Greeks, to whom the art of training birds came probably from the Persians, conveyed the names of Olympic victors to their various cities by this means. Before the electric telegraph this method of communication had a considerable vogue amongst stockbrokers and financiers. "The Dutch government established a civil and military pigeon system in Java and Sumatra early in the 19th century, the birds being obtained from Bagdad. "Details of the emplyment of pigeons in the siege of Paris in 1870-71 will be found in the article Post and Postal Service: France. This led to a revival in the training of pigeons for military purposes. Numerous private societies were established for keeping pigeons of this class in all important European countries; and, in time, various governments established systems of communication for military purposes by pigeon post. "When the possibility of using the birds between military fortresses had been thoroughly tested attention was turned to their use for naval purposes, to send messages between coast stations and ships at sea. They are also found of great use by news agencies and private individuals. Governments have in several countries established lofts of their own. Laws have been passed making the destruction of such pigeons a serious offence; premiums to stimulate efficiency have been offered to private societies, and rewards given for destruction of birds of prey. "Pigeons have been used by newspapers to report yacht races, and some yachts have actually been fitted with lofts. It has also been found of great importance to establish registration of all birds. (((mjr: bird escrow? Clipper birds?))) "In order to hinder the efficiency of the systems of foreign countries, difficulties have been placed in the way of the importation of birds for training, and in a few cases falcons have been specially trained to interrupt the service in war-time, the Germans having set the example by deploying hawks against the Paris pigeons in 1870-71. "No satisfactory method of protecting the weaker birds seems to have been evolved, though the Chinese formerly provided their birds with whistles and bells to scare away birds of prey. "In view of the development of wireless telegraphy, the modern tendency is to consider fortress warfare as the only sphere in which pigeons can be expected to render really valuable services. Consequently, the British Admiralty has discontinued its pigeon service, which had attained a high standard of efficiency, and other powers will no doubt follow the example. Nevertheless, large numbers of the birds are, and will presumably continue to be, kept at the great inland fortresses of France, Germany, and Russia. (((POST AND POSTAL SERVICE: FRANCE))) "The ingenuity of the French postal authorities was severely tried by the exigencies of the German War of 1870-1. The first contrivance was to organize a pigeon service carrying microscopic despatches prepared by the aid of photographic appliances. The number of postal pigeons employed was 363 if which number 57 returned with despatches. "During the height of the siege the English postal authorities received letters for transmission by pigeon post into Paris by way of Tours subject to the regulation that no information concerning the war was given, that the number of words did not exceed twenty, that the letters were delivered open, at 5d a word, with a registration fee of 6d prepaid as postage. 
At this rate the postage of the 200 letters on each folio was £40, that on the eighteen pellicles of sixteen folios each, carried by one pigeon, £11,520. Each despatch was repeated until its arrival had been acknowledged by balloon post; consequently many were sent off twenty and sometimes more than thirty times. "The second step was to establish a regular system of postal balloons, fifty-one being employed for letter service and six for telegraphic service. To M. Durnouf belongs much of the honour of making the balloon service successful. On the basis of experiments carried out by him a decree of the 26th of September 1870 regulated the new postal system. Out of sixty-four several ascents, each costing on the average £200, fifty-seven achieved their purpose, notwithstanding the building by Krupp of twenty guns, supplied with telescopic apparatus, for the destruction of the postal balloons. Only five were captured, and two others lost at sea. "The aggregate weight of the letters and newspapers thus aerially mailed by the French post office amounted to about eight tons and a half, including upwards of 3,000,000 letters; and besides the aeronauts, ninety-five passengers were conveyed. "The heroism displayed by the French balloon postmen was equalled by that of many of the ordinary letter carriers in the conveyance of letters through the catacombs and quarries of Paris and its suburbs, and, under various disguises, often through the midst of the Prussian army. Several lost their lives in the discharge of their duty, in some cases saving their dispatches by the sacrifice."
“The Exequy” is an elegy of 120 lines of iambic tetrameter couplets, a verse form popular in a wide variety of early seventeenth century English lyrics. The second line fittingly designates the poem a “complaint” (or lament), and it appropriately sustains a tone of grief over a personal loss throughout. Henry King wrote the elegy on the death of his wife Anne, who died seven years after they were married, having borne him five children. Although first-person speakers are never identical with the authors, the speaker of “The Exequy” reflects, with reasonable accuracy, King’s personal grief over the loss of his wife. He originally gave his elegy the subtitle “To His Matchlesse Never To Be Forgotten Freind.” The text is divided into eleven verse paragraphs of varying lengths, ranging from two to eighteen lines. Essentially, the speaker expresses his grief, develops a meditation on time, and looks to the future. In the opening paragraph, the poet establishes an elegiac tone through an address to the burial site, the “Shrine,” offering poetry instead of flowers as a fitting adornment for his “Dear Loss.” In the second paragraph, the address turns to the dead wife as the object of the speaker’s meditation and emotion. She has become his book or library, and his only business, which he peruses though blinded by tears. Paragraph 3 introduces images and metaphors related to the cosmos. Grief reminds him that she died before reaching the...
What is Pulmonary Alveolar Proteinosis (PAP) Syndrome? Pulmonary Alveolar Proteinosis (PAP) is not a single disease – it is a rare syndrome or condition that can occur in several different diseases. The syndrome is caused by the buildup of surfactant in the lungs, which makes breathing difficult. Normally, surfactant is present as a very thin layer on the lung surface. It helps the millions of tiny air sacs (alveoli) stay open as we breathe. The thin layer of surfactant is maintained by balanced production and destruction inside alveoli. Alveolar macrophages are special cells inside alveoli that remove excess surfactant from alveoli. This helps keep the surfactant layer thin and useful. Macrophages require stimulation by a protein called GM-CSF in order to function correctly and remove surfactant. PAP occurs when something happens that disturbs the balance of surfactant production and removal. When this occurs, surfactant builds up inside the alveoli over time. Eventually, the alveoli fill up completely with surfactant, leaving no room for the air we breathe to enter. The result is that oxygen can’t get into the blood as easily, which causes a feeling of breathlessness. What causes PAP? The different diseases in which PAP occurs can be divided into three groups: primary PAP, secondary PAP, and disorders of surfactant production. Primary PAP occurs when something prevents GM-CSF from stimulating alveolar macrophages. This reduces their ability to remove excess surfactant and causes PAP. There are two diseases in this group: autoimmune PAP and hereditary PAP. Autoimmune PAP (aPAP) is a disease that develops when a person’s immune system begins making proteins that attack GM-CSF. These proteins are called GM-CSF autoantibodies, and large amounts of them prevent GM-CSF from helping alveolar macrophages remove excess surfactant. This results in surfactant accumulation and the development of PAP. Autoimmune PAP occurs mostly in adults but can occur in young children. Hereditary PAP (hPAP) is a genetic disease in which GM-CSF is not recognized by alveolar macrophages. This prevents GM-CSF from helping alveolar macrophages remove excess surfactant. This results in surfactant accumulation and the development of PAP. Normally, GM-CSF is recognized by proteins on the surface of alveolar macrophages (and other cells) called GM-CSF receptors. Similar to the way a key fits into and turns a lock to open it, GM-CSF fits into these surface proteins and ‘opens’ or, rather, activates them. This enables alveolar macrophages to remove excess surfactant. In hPAP, mutations occur in the genes that serve as blueprints for making GM-CSF receptors. The result is that when the blueprint containing ‘bad’ instructions is used, the GM-CSF receptors are abnormal and don’t recognize GM-CSF. Some mutations are so severe that the GM-CSF receptors aren’t even made. Hereditary PAP occurs mostly in children but can occur in adults. Secondary PAP (sPAP) occurs when something reduces either the number of alveolar macrophages or their ability to remove surfactant. In either case, there are fewer alveolar macrophages inside alveoli to remove excess surfactant. This results in surfactant accumulation and the development of PAP. Secondary PAP can occur in diseases that affect the formation of blood cells. It can also occur after breathing in toxic dusts. Secondary PAP is more common in adults but can occur in children.
Disorders of surfactant production (DSP) are genetic diseases that cause abnormalities in proteins needed to make surfactant. This results in the production of abnormal surfactant. The disease that occurs depends on which gene is affected and exactly what mutation is present. Some mutations result in death in the first hour of life, and some slowly scar the lungs over time and appear in children, adolescents, or adults. What are the symptoms of PAP? PAP usually causes a feeling of breathlessness (a feeling of being unable to breathe easily) that starts slowly and gets worse over time. Doctors refer to this as “progressive dyspnea of insidious onset”. Some patients develop a dry cough. Less commonly – in fewer than 5% of people with PAP – a frothy, whitish phlegm may accompany the cough. Rarely, blood-streaked phlegm, chest pain, and fever may also be present, which indicates that infection is probably also present. How is PAP diagnosed? Evaluation of a person's symptoms and clinical tests cannot identify what disease is causing PAP. Chest x-rays reveal the presence of whitish, fluffy shadows throughout the lungs. Chest CT scans reveal areas of increased whitish shadows next to areas of normal-appearing lung. Chest x-rays and CT scans can suggest but not prove that PAP syndrome is present. This is because other diseases can have a similar appearance. A lung biopsy can determine if PAP syndrome is present but cannot identify which disease is causing PAP in any patient. Blood tests identify the disease in most patients with PAP. Detection of GM-CSF autoantibodies in the blood is used to identify aPAP. Detection of genetic mutations in genes for GM-CSF receptors is used to identify hPAP. Secondary PAP is diagnosed based on the history and clinical findings. Detection of genetic mutations in genes needed to make surfactant is used to identify DSP. How common is PAP? Current estimates suggest that 40,000 - 50,000 PAP patients exist worldwide. What is the natural history of PAP? Symptoms may improve spontaneously in a small percentage of patients. More commonly, symptoms persist for long periods of time or progress more rapidly. What is the treatment for PAP? The most common treatment for PAP is whole lung lavage, a procedure used to ‘wash’ surfactant out of the lung. This therapy is useful in aPAP, hPAP, and some types of sPAP but is not useful in DSP. Several potential treatments for PAP are in development and testing. Inhaled GM-CSF is an interesting and promising approach for aPAP. Pulmonary macrophage transplantation is a potential therapy that is in development for hPAP.
Not all learning styles are created equal. Visual learning involves reading information in order to understand it, while auditory learning requires listening to information to remember it. While these types of learning may prevail in certain contexts, a third type of learning has been shown to be more effective in terms of equipping learners with a comprehensive understanding of the material and preparing them for their careers. Hands-on learning, also called kinesthetic learning, is a form of learning that combines elements of visual and auditory learning with practical participation from the student in the form of movement. Through hands-on learning, students become more familiar with the processes and skills they’ll use in the future. At Oxford College, students have the opportunity to engage in hands-on learning in order to obtain the skills necessary to succeed in their future line of work. Read on to discover why hands-on learning is so important when training for a career. Hands-On Learning Improves the Ability to Retain Information When students have access to hands-on training, they have an opportunity to interact with, discuss, and engage with the material they’re learning. Instead of absorbing information solely by reading or listening, students are practicing the techniques, processes, and skills that they’re learning about with their own two hands. This type of physically engaged learning requires participation from both sides of the brain. The left side of the brain takes care of analyzing and listening, while the right side processes visual and spatial information. When these different learning styles are combined, the brain is able to form stronger connections with the material, retaining more information. The benefits that hands-on learning has for a student’s ability to retain information are extremely valuable, especially when it comes time for students to apply what they’ve learned in a workplace setting. After completing a hands-on career college program, students will be able to begin working with confidence, equipped with extensive knowledge as a result of their training. With Hands-On Learning, Students in Career College Have an Opportunity to Practice As the saying goes, practice makes perfect. Hands-on learning means that students have an opportunity to gain familiarity with the material under the guidance of an industry professional. When learning in this environment, students are able to make mistakes, ask questions, and correct their work without worrying about the consequences that their errors might have in the real world. Hands-on training provides students with a controlled and safe environment in which they can master the skills and processes they’ll use in their future career. Not only does this prevent students from making the same mistakes in the workplace, but it helps them to learn organically through the process of trial and error as they train for a career. Hands-On Learning Ensures a Better Transition to Workplace Environments Workplace transitions can be difficult when a student has only gained experience within a traditional classroom setting. Hands-on learning reduces the disparity between the work that a student does in the classroom and the work they’ll do in their future career. By gaining real-world practice, students can integrate what they’ve learned into their responsibilities in the workplace, easing the career transition.
Additionally, employers recognize the value of hands-on learning—often preferring candidates who have tangible experience with the processes and skills they’ll be expected to perform at work as this can reduce on-the-job training needs. Given the many benefits of an experiential, practical learning approach, it’s important to choose a career college program that offers students this type of opportunity. Oxford College’s programs prepare students for a successful career with hands-on training, equipping them with the practical skills they’ll use in the real professional world. Looking for career training in Ontario? Oxford College’s hands-on programs equip you with the preparation you need. Get started today!
Anthrax is a serious infectious disease caused by Bacillus anthracis, a bacterium that forms spores. Anthrax most commonly occurs in wild and domestic animals such as cattle, sheep, goats, camels, and antelopes, but it can also occur in humans who are exposed to infected animals or to tissue from infected animals, or when anthrax spores are used as a bioterrorist weapon. Anthrax is not known to spread from one person to another.
What are airline regulations? Airline regulations are a set of rules created and maintained by aviation authorities. These regulations keep air travel safe and simple for all travelers, so these important rules must be followed by airlines. The regulations take precedence over an airline’s own policy, explaining how airlines should approach key areas like tickets, baggage, and more. Which body controls airline regulations? A variety of formal bodies regulate air travel, and the landscape is complex. The International Civil Aviation Organization (ICAO) is a specialized agency of the United Nations. The ICAO helps its 191 Member States to create shared international standards. These standards then provide the basis for national regulations, which are maintained by a Civil Aviation Authority (CAA). In the US, air travel and air traffic are regulated by the Federal Aviation Administration (FAA). The European Aviation Safety Agency has the same role in Europe, drafting important safety regulations that determine how airlines should operate. Where to find airline regulations Key aviation regulators shape and maintain airline regulations. The regulations are public and can be found on the websites of regulating bodies. Do airline regulations differ a lot? The ICAO works to harmonize airline regulations. This means that member states have similar rules, but the regulations themselves are shaped at a national level. Are airline regulations changed often? Air travel is a complex and dynamic field. This is why regulations are often evolving and changing. For this reason, many professionals in the travel industry monitor the latest changes and news.
COVID-19 and Flu Information Getting a flu vaccine during 2020-2021 is more important than ever, so if you haven’t gotten your vaccine yet, it’s not too late. The flu vaccine has been shown to reduce the risk of flu illness, hospitalization, and death. While being immunized against flu will not protect you against COVID-19, it can save healthcare resources for the care of patients with COVID-19 by helping to keep you flu-free. There are some key differences between flu and COVID-19 to keep in mind: Flu and COVID-19 have many similar symptoms. Testing may be needed to confirm a diagnosis. Symptoms of both COVID-19 and flu range from no symptoms (asymptomatic) to severe symptoms and include: - Fever or feeling feverish/chills - Shortness of breath or difficulty breathing - Fatigue (tiredness) - Sore throat - Runny or stuffy nose - Muscle pain or body aches - Some people may have vomiting and diarrhea, though this is more common in children than adults COVID-19 has caused more serious illnesses in some people. COVID-19 may also include a change in or loss of taste or smell. Flu is contagious 1 day before symptoms begin and remains contagious for approximately 7+ days. COVID-19 is contagious 2 days before symptoms appear and remains contagious for at least 10 days. If someone is asymptomatic or their symptoms go away, it’s possible to remain contagious for at least 10 days after testing positive for COVID-19. Both COVID-19 and flu can spread from person to person between those in close contact (within about 6 feet): through viral droplets produced when people with COVID-19 or flu cough, sneeze, or talk; by physical human contact (e.g. shaking hands); or by touching a surface that has virus on it and then touching one’s own mouth, nose, or eyes.
“I think I can. I think I can. I think I can,” says the Little Blue Engine to herself as she hauls a train full of toys up a mountain. In Watty Piper’s classic children’s book, all it takes is a dose of self-encouragement to give the engine the strength to overcome a seemingly impossible task. Sound too good to be true? Perhaps not, a new study suggests. Researchers found that a simple, five-minute exercise can help boost math performance, especially for students who have poor confidence in their math ability. When students silently spoke words of encouragement to themselves that were focused on effort—saying phrases such as “I will do my very best!”—their math scores improved. In the study, 212 Dutch schoolchildren in grades 4 to 6 took half of a standardized math test. After taking the half-test, they were split into three groups: The first group silently said to themselves words of encouragement focused on effort. The second group did a similar activity, but the words were focused on ability, favoring phrases such as “I’m very good at this.” The third group didn’t engage in self-talk at all. Afterward, the students took the second half of the math test. Students who had participated in self-talk focused on effort improved their math performance on the test, while those who engaged in self-talk focused on ability, or no self-talk at all, experienced no improvement. “When children with low self-confidence work on mathematics problems, they often worry about failure,” Sander Thomaes, the lead researcher of the study, told Edutopia. “They experience challenges and struggles—for example, a difficult problem to solve—as cues of low ability, triggering disengagement from the task and worsening performance. Effort self-talk may counter this process.” So why doesn’t self-talk focused on ability work? Saying “I am the best” can feel like a hollow claim when students don’t feel confident about their own abilities—they’re likely to dismiss the message entirely, explained Thomaes. But telling yourself “I will try my hardest” is an achievable goal, shifting attention away from a perceived lack of ability toward something within a student’s control: effort. Self-Talk and the Emotional Terrain of Learning In the 1920s, Russian psychologist Lev Vygotsky observed that when faced with a challenging task, young children often engage in self-talk, reminding themselves to focus harder or talking themselves through a series of complicated steps. As children get older, the self-regulation strategies are generally no longer vocalized, but an emerging body of research suggests that lightweight, metacognitive exercises that ask older students to reflect on their fears, anxieties, or challenges are still highly beneficial—providing teachers with a cheap, reasonably simple intervention that can be used in a variety of school situations and across all grade levels. A 2019 study, for example, was designed to help ease the transition into middle school by reminding students that the anxiety they felt was both natural and common. New students read stories from peers who had already graduated to the next grades; the essays confided the private fears and doubts the students harbored, and how building positive relationships with friends and teachers helped them cope. 
The new students then completed two 15-minute writing exercises that asked them to reflect on their own anxieties about the upcoming school year, particularly around test-taking, and to consider reasons why they might do well even if they worried about the tests. The exercise had a surprisingly powerful effect: Compared with their peers, students who learned about the commonality of their fears and then wrote about them were 34 percent less likely to be disciplined for misbehavior, 12 percent more likely to attend school, and 18 percent less likely to receive a failing grade. And in a study published last year, students participated in a 10-minute exercise immediately before a test in which they were encouraged to see stress as “a beneficial and energizing force.” They learned that small amounts of stress can help sharpen focus and aid memory by increasing oxygen flow into the brain. Students were then asked to write responses to two questions: “How do people sometimes feel in important situations?” and “How can the way a person feels in important situations help them do well in those situations?” The writing exercise helped the students manage the “worried thoughts about the possibility of failure” that often accompany a test, reducing the number of students failing out of the course by half—with low-income students seeing the biggest benefits. Taken as a whole, the research suggests that academic performance not only relies on content preparation—you have to know fractions to succeed on some math tests—but also is significantly impacted by emotional and psychological preparation. Helping your students identify and circumvent the psychological barriers that hinder progress is part of teaching them how to navigate the emotional terrain of learning. These powerful techniques appear to work even when students reflect altruistically. A 2019 study coauthored by Angela Duckworth, known for her research on grit and perseverance, asked nearly 2,000 high school students to give motivational advice to an anonymous younger student—on how to stop procrastinating or study better, for example—and then write a brief, encouraging letter to help the student do better in school. Despite the fact that the advice was given to someone else, the students earned higher grades in their own courses. The takeaway: Be mindful of anxiety-provoking situations—from high-stakes tests to major in-school transitions—that can derail a student’s ability to focus. Try brief metacognitive discussions or simple writing exercises to help students overcome hurdles, recognize the commonality of their fears, and prepare emotionally and psychologically for looming challenges.
Great Zimbabwe is a ruined city in the southeastern hills of Zimbabwe near Lake Mutirikwe and the town of Masvingo. It was the capital of the Kingdom of Zimbabwe during the country’s Late Iron Age. Construction on the monument by ancestors of the Shona people began in the 11th century and continued until the 15th century, spanning an area of 722 hectares (1,780 acres) which, at its peak, could have housed up to 18,000 people. It is recognised as a World Heritage Site by UNESCO. Great Zimbabwe served as a royal palace for the Zimbabwean monarch and would have been used as the seat of political power. One of its most prominent features was its walls, some of which were over five metres high and which were constructed without mortar. Eventually the city was abandoned and fell into ruin. The earliest known written mention of the ruins was in 1531 by Vicente Pegado, captain of the Portuguese garrison of Sofala, who recorded it as Symbaoe. The first visits by Europeans were in the late 19th century, with investigations of the site starting in 1871. Later, studies of the monument were controversial in the archaeological world, with political pressure being put upon archaeologists by the government of Rhodesia to deny its construction by native African people. Great Zimbabwe has since been adopted as a national monument by the Zimbabwean government, and the modern independent state was named for it. The word “Great” distinguishes the site from the many hundreds of small ruins, now known as ‘zimbabwes’, spread across the Zimbabwe Highveld. There are 200 such sites in southern Africa, such as Bumbusi in Zimbabwe and Manyikeni in Mozambique, with monumental, mortarless walls; Great Zimbabwe is the largest. Zimbabwe is the Shona name of the ruins, first recorded in 1531 by Vicente Pegado, Captain of the Portuguese Garrison of Sofala. Pegado noted that “The natives of the country call these edifices Symbaoe, which according to their language signifies ‘court'”. The name contains dzimba, the Shona term for “houses”. There are two theories for the etymology of the name. The first proposes that the word is derived from Dzimba-dza-mabwe, translated from the Karanga dialect of Shona as “large houses of stone” (dzimba = plural of imba, “house”; mabwe = plural of bwe, “stone”). A second suggests that Zimbabwe is a contracted form of dzimba-hwe, which means “venerated houses” in the Zezuru dialect of Shona, as usually applied to the houses or graves of chiefs.
Slant height of a right cone The distance from the top of a cone, down the side to a point on the edge of the base. Drag the orange dots to adjust the radius and height of the cone and note how the slant height changes. There are three dimensions of a cone. - The vertical height (or altitude), which is the perpendicular distance from the top down to the base. - The radius of the circular base. - The slant height, which is the distance from the top, down the side, to a point on the base circumference. These three are related, and we only need any two to define the cone. We can then find the third missing dimension. From the figure above, we can see that the three dimensions form a right triangle, with the slant height as the hypotenuse, so we can use the Pythagorean theorem to solve it*. Drag either orange dot in the top figure and note how the slant height is calculated from the radius and altitude. * We can actually use any method of solving this triangle we like. It just depends on what you are given and personal preference. See Solving the triangle. Finding the slant height By applying the Pythagorean theorem, the slant height is given by the formula s = √(r² + h²), where r is the base radius and h is the altitude. If you are given the slant height By rearranging the terms in the Pythagorean theorem, we can solve for the other lengths: - The radius r can be found using the formula r = √(s² − h²), where s is the slant height and h is the altitude. - The altitude h can be found using the formula h = √(s² − r²), where s is the slant height and r is the base radius. Things to try - In the top figure, click "hide details". - Drag the orange dots to set the radius and height of the cone. - Calculate the slant height of the cone using the formula above. - Click "show details" to check your answer.
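To make the three rearrangements of the Pythagorean relationship concrete, here is a minimal Python sketch; the function names (slant_height, base_radius, altitude) are illustrative and not part of the original page:

```python
import math

def slant_height(r: float, h: float) -> float:
    """Slant height s of a right cone, from base radius r and altitude h."""
    return math.hypot(r, h)            # s = sqrt(r^2 + h^2)

def base_radius(s: float, h: float) -> float:
    """Base radius r, from slant height s and altitude h (requires s >= h)."""
    return math.sqrt(s**2 - h**2)      # r = sqrt(s^2 - h^2)

def altitude(s: float, r: float) -> float:
    """Altitude h, from slant height s and base radius r (requires s >= r)."""
    return math.sqrt(s**2 - r**2)      # h = sqrt(s^2 - r^2)

# Example: a cone with base radius 3 and altitude 4 has slant height 5.
print(slant_height(3, 4))   # 5.0
print(base_radius(5, 4))    # 3.0
print(altitude(5, 3))       # 4.0
```

Any two of the three values determine the third, matching the statement above that only two dimensions are needed to define the cone.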
An X-ray is an image of the internal structures of the body, produced by exposure to a controlled source of X-rays and stored on a special computer system. Despite the development of more sophisticated forms of scanning, an X-ray examination remains one of the most accurate ways of detecting many clinical problems. X-ray examinations are typically used for: - Bones, teeth, bone fractures, and other abnormalities of bone. - Joint spaces and some abnormalities of joints such as osteoarthritis. - The size and shape of the heart. - Changes in the density of some softer tissues. - Collections of fluid - for example in the lung or gut. X-ray examinations can be done as a simple outpatient procedure and you can go home straight afterwards. There are risks associated with X-rays, but the exposure is kept to the minimum required to obtain an image of the area under investigation. However, any female patient who is, or might be, pregnant must notify the Radiology Department in advance of their examination, and all patients should inform the Radiology Department if they have recently had an X-ray investigation.
Noise-induced hearing loss (NIHL) is a serious problem in the United States. An estimated 10 to 40 million adults across the country show signs of this condition. Many of these cases are due to exposure to noisy environments such as the loud burst of gunfire found in the military or at shooting ranges, the clang of heavy machinery often heard in factories, or the constant buzz of power tools used in the automotive and construction industries. These occupational and recreational environments are hazardous to the hearing health both of people who are exposed daily and of those who are only occasionally exposed. Sudden, extreme noises can damage the delicate inner ear just as seriously as constant exposure to these sounds can, and sudden impulse noises have been shown to have a more adverse effect than steady noise. Different people have different reactions to noise: some may be more susceptible, while others are not as responsive. While studies show that certain lifestyle habits such as smoking, drug use, work environment, and even age can have an effect on the body’s reaction to NIHL, the effects of genetics are now being studied as well. Though the effects of NIHL in humans are tough to study, thanks to variations in the lifestyles people live as well as their individual genetics, studies performed in animal models are much more controlled. NIHL arises from a combination of genetic and environmental factors, and genetic susceptibility has been definitively demonstrated in mice. More than 140 gene variations have been considered as causes of hearing loss in the absence of other indications. Variations in 34 genes have been linked to the likelihood of increased auditory thresholds in people exposed to occupational noise. This type of noise can cause two different types of injury to the sensitive inner ear, depending on the duration and the intensity of noise exposure. One type is transient attenuation of hearing acuity, or temporary threshold shift (TTS), from which hearing often recovers within 24-48 hours. Testing in mice, however, shows that instances of TTS at younger ages can speed up the process of age-related hearing loss even with the short-term recovery. The second type of injury is a permanent threshold shift (PTS). This can be brought on by dramatic, loud sounds such as being near a jet engine. This type of injury can lead to problems understanding speech in areas where there is a lot of loud background noise. NIHL symptoms can be associated with injury to certain parts of the ear. The tympanic membrane, which is set in motion by acoustic waves, transmits sound waves to the inner ear. Damage to these structures from blasts such as explosions, gunshots, jet engines, or even the blare of the siren from an emergency vehicle is highly possible. These sensitive areas can be ruptured or even destroyed completely, resulting in permanent hearing loss. Once sound waves enter the cochlea, the outer hair cells begin to expand and contract quickly to amplify the acoustic vibrations picked up by the inner hair cells. This process depends on the ear’s potassium supply to fuel its energy requirements. This is where excessive noise can cause a great deal of harm, by damaging these outer hair cells. As with the tympanic membrane, any damage to these hair cells can also cause a threshold shift, resulting in permanent damage.
While there is no known correlation between tinnitus and human genetics, hearing loss itself is subject to the genetics and susceptibility of the individual. Though preventable, NIHL is permanent and there is no way to reverse the condition. Although it is the second most common cause of hearing loss, it has only been viewed as a problem since the 20th century. Approximately twelve percent of the population of the U.S. is exposed to noise levels severe enough to cause hearing damage through at least half of their workday. Genetics can play a role in that some people will develop NIHL in this environment while others will not. Approximately 38 percent of alternative NIHL genes were found to be associated with families that experience hearing loss. Oxidative stress, or the imbalance between free radicals and antioxidants in the body, endangers auditory function. Twenty-three percent of NIHL variants have been found in oxidative stress response genes, which encode proteins that neutralize radical peroxide byproducts produced by the mitochondrial electron transport chain, according to an October 2019 article in the Hearing Health Journal. While each person has a different susceptibility to NIHL, researchers are finding that many of these differences are rooted in the individual’s specific genetic code. Future studies into the relationship between noise-induced hearing loss and genetics have the potential to reveal more surprising connections.
The Fibonacci sequence was first found by an Italian named Leonardo Pisano Bigollo, better known as Fibonacci. Fibonacci numbers are a sequence of whole numbers: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ... This infinite sequence is called the Fibonacci sequence. Observe that the first number in the Fibonacci sequence is always 0. The Fibonacci sequence formula is useful for finding any of the terms of the Fibonacci sequence. The Fibonacci sequence is often represented as a spiral. The Fibonacci formula is used for calculating the numbers in the Fibonacci sequence. Let us learn the Fibonacci formula with its derivation and a few solved examples. What is the Fibonacci Formula? The Fibonacci formula, explained in simple terms, says that every number in the Fibonacci sequence is the sum of the two numbers preceding it in the sequence. Thus, the Fibonacci formula is given as follows: F(n) = F(n-1) + F(n-2) for n > 1, with F(0) = 0 and F(1) = 1.
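As a concrete illustration of the recurrence F(n) = F(n-1) + F(n-2), here is a minimal Python sketch (the function name is just for illustration) that builds the sequence by repeatedly adding the two preceding terms:

```python
def fibonacci(n: int) -> list:
    """Return the first n Fibonacci numbers, starting from F(0) = 0."""
    sequence = []
    a, b = 0, 1                  # F(0) = 0 and F(1) = 1
    for _ in range(n):
        sequence.append(a)       # record the current term
        a, b = b, a + b          # each new term is the sum of the two before it
    return sequence

print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Running it for n = 10 reproduces the ten terms listed at the start of the passage.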
Malnutrition: facts you should know Feb 12, 2015, 6:30:00 PM Malnutrition is a condition in which the body is undernourished or consumes more nutrition than needed. Undernourishment is when the body does not get adequate vitamins, minerals and other nutrients. When this happens, the body becomes weak and starts losing weight, leading to stunted growth and loss of energy. On the other hand, overnutrition means consuming nutrients over and above the required limit. This can lead to obesity, which in turn increases the risk of type 2 diabetes, hypertension and cancer. Malnutrition has become a serious issue, especially among children and the elderly. A malnourished person’s body finds it difficult to resist diseases and grow. Even routine tasks become difficult, and learning abilities too get affected. Malnutrition can also lead to complications in pregnancy and decrease production of breast milk. Hence it is crucial for pregnant women to eat a healthy diet that fulfills the body’s daily requirement of vitamins, minerals and other essential nutrients. Why does malnutrition happen? The cause of malnutrition is a diet lacking in nutrients. The reasons leading to lack of nutrition can be social, psychological and physical issues. Reasons for malnutrition - Health issues: The elderly might have trouble eating because of dementia, diminished sense of taste or smell, dental issues, loneliness, failing health, lack of mobility, etc. Other factors that cause malnutrition include the use of certain medications, chronic illnesses, recent hospitalisation, etc. - Dietary limitations: Dietary restrictions such as cutting down on salt, fat or sugar may help manage certain medical conditions, but they can also result in consuming inadequate nutrients. - Low income: It becomes difficult to fulfill the nutritional needs of a family when there is only one earning member, especially if someone in the family is on expensive medications. - Living in isolation: Some people may lose interest in cooking and eating if they are alone. - Psychological concerns: Grief, stress, loneliness or other psychological concerns can cause loss of appetite. How to avoid malnutrition - A change in medication or being on one for too long can affect the appetite. If this is the case, it is best to consult your doctor. - If a person is on a restricted diet to manage a certain medical ailment, it is better to visit a dietitian, as they can help with different alternatives and a diet plan that provides adequate nutrients to the body. - Have food packed with nutrients, such as fresh fruits and raw vegetables. Add finely chopped nuts to yogurt, fruit and cereal to make it a nutritious meal. You can also add extra egg whites to scrambled eggs and omelets and encourage the use of whole milk. - You can make bland food tastier by using lemon juice, herbs and spices. - Make some time for daily exercise to strengthen bones and muscles. - Old people can opt for home cooked tiffin services if they can’t cook, as these are hygienic and nutritious. Side effects of malnutrition Malnutrition can lead to various health concerns, including a weak immune system, slow wound healing, and frail muscles that can lead to falls and fractures. The only thing that can help avoid malnutrition is a healthy and balanced diet, one which includes all the necessary nutrients.
To get all the nutrients, a diet should include all the food groups: fruits and vegetables, starchy foods (rice, bread, potato, pasta, cereals, etc.), milk and dairy products (cheese, yogurt, etc.) and non-dairy protein foods (eggs, fish, meat, nuts, etc.). Remember, a good and healthy diet is always the secret to good immunity, good health and a good life.
Published in Children's Doctor Adolescence is a time of significant change in the body as children shift towards adulthood. Many of these biopsychosocial shifts result in changes in sleep habits, which can affect mood, academic performance, and family and social relationships. When asked about sleep, adolescents will often complain to their pediatrician about excessive daytime sleepiness, likely due to insufficient sleep. Many intrinsic and extrinsic factors play a role in their lives, including increasing family, academic, and social demands, in addition to biological changes. Together, sleep issues in adolescents tend to have a significant impact on their functioning, as well as on their families. Teenagers need on average 9.25 hours of sleep a night (Figure 2). However, a vast majority of adolescents report that they do not get enough sleep, with a reported average of about 7 hours. This lack of sleep results in a “sleep deficit” of about 2 hours a night, which accumulates over the week, leading to a significant shortfall by the weekend. In response, teenagers tend to oversleep on the weekends to “catch up,” but then have difficulty falling asleep on Sunday. Figure 2: Recommended quantity of sleep in a 24-hour cycle. Source: Hirshkowitz M. (2015). The National Sleep Foundation’s sleep time duration recommendations: methodology and results summary. Sleep Health. The most common disorder related to sleep is insufficient sleep, caused by inadequate sleep hygiene. Sleep issues have been linked to problems with mood and irritability, behavior, cognitive ability, and school performance. Other sleep disorders seen in adolescents include circadian rhythm disorders such as delayed sleep phase type; insomnia; sleep-disordered breathing; restless legs syndrome; periodic limb movement disorder; and narcolepsy. It is important to ask your patients and families about the quantity and quality of sleep. Sleep should be a lifestyle priority for the whole family. The assessment of sleep problems should include a clinical history that covers the routine, sleep schedule, and nocturnal and daytime behaviors. Any medical and psychiatric conditions that could contribute to sleep problems should be noted, as well as medications, developmental concerns, and hospitalizations. Remind patients that maintaining a consistent, regular sleep schedule is important. They should not deviate more than 2 hours from their weekday to weekend schedule; avoid “sleeping in” on weekends; and aim to get on average between 9 to 9.25 hours of sleep per night. A sleep diary can be an effective tool to help capture the pattern of sleep behaviors. If warranted, actigraphy, polysomnography, and a multiple sleep latency test could help rule out other contributing factors, such as obstructive sleep apnea (OSA), periodic limb movement disorder (PLMD), and narcolepsy. Encourage proper sleep hygiene habits such as removing electronics from the bedtime routine at least 1 hour before bed. Other important components of good sleep habits include: regular bedtimes and wake times, avoidance of caffeine, and avoidance of engaging in exciting behavior before bed. Common sleep issues include the following: - Developmental and biological issues: Changes in circadian factors due to the onset of puberty and hormonal influences drive changes such as the ability to voluntarily delay sleep onset. - Early high school start times: Most teenagers have to wake up 1 to 2 hours earlier in order to get to school. This change affects their natural state of alertness.
- Social and school obligations: Teenagers have more after-school activities, increased homework demands, and work that interfere with the nighttime routine and sleep. In addition, adolescents have amplified needs to communicate via technology devices at night long after the lights “go out.” - Erratic sleep schedule and poor sleep hygiene: Teenagers will often stay up late on school nights — and even later on weekends. They will then sleep in to “catch up” on sleep on weekends. - Reliance on technology: See Fellow’s Corner. - Caffeine consumption: Many adolescents turn to alternative ways to stay awake, such as drinking coffee and energy drinks. Teenagers today are also buying or using stimulant medication, like Adderall or Ritalin, to help with excessive daytime sleepiness. Not only could these products be harmful and addictive, but, in addition to impacting sleep, they might have legal ramifications. References and suggested readings Mindell, JA, Owens, J. A Clinical Guide to Pediatric Sleep: Diagnosis and Management of Sleep Problems, 2nd ed. Philadelphia, PA: Lippincott Williams & Wilkins; 2010. Mindell, JA, Owens, J, Alves, R, et al. Give children and adolescents the gift of a good night’s sleep: a call to action. Sleep Medicine. 2011;12(3):203-204. National Sleep Foundation. Annual sleep in America poll exploring connections with communications technology use and sleep. Accessed July 5, 2016. Contributed by: Billie S. Schwartz, PhD, and Jocelyn H. Thomas, PhD
Researchers have identified a young star, located almost 11,000 light-years away, which could help us understand how the most massive stars in the universe are formed. This star, already more than 30 times the mass of our Sun, is still in the process of gathering material from its parent molecular cloud, and may be even more massive when it finally reaches adulthood. In general, the larger a galaxy’s mass, the higher its rate of forming new stars. However, every now and then a galaxy will display a burst of newly-formed stars that shine brighter than the rest. Researchers using the Atacama Large Millimetre Array (ALMA) have found that galaxies forming stars at extreme rates 9 billion years ago were more efficient than average galaxies today.
Foreground, middleground, and background in photography. Learn more about the importance of the three different layers in an image, and how to use each to compose better photos. If you’ve ever wondered what makes a particular image feel complex and interesting, the answer is likely hidden in the layers of the photo. These “layers” are more commonly known as foreground, middleground, and background — each of which plays a vital role in a photo’s unique composition. Let’s dive into what each layer means and how to best showcase its details for eye-catching results. What are the foreground, middleground, and background? The element of the photo closest to you makes up the foreground. The furthest element away from you is the background, while the middleground makes up the area in between. Not all photos have (or need) all three elements — some might only have a foreground and background, or a middleground and background. If you’re having trouble identifying the different elements, or if you’re not sure whether a photo has two or three elements, try this: imagine peeling back individual layers of the photo. See how many layers you can separate from others. You may have two or three different layers. What’s the benefit of using all three layers? When you frame a shot so it has a foreground, middleground, and background, you add visual interest to the photo by creating depth and dimension. This is especially true in landscape photography. Try to find various textures or interesting objects in each layer, such as flowers in the foreground, water in the middleground, and mountains in the background. When you become more comfortable identifying the three layers, you can also begin incorporating additional photographic principles, such as leading lines or the rule of thirds. Leading lines will guide the viewer’s eye to an area of interest, while the rule of thirds helps you to create pleasing compositions. Discover even more photography tips and techniques you can use to improve your skills.
In a world where the importance of technology is growing by the minute, it comes as no surprise that educators understand the importance of focusing on STEM subjects: science, technology, engineering and mathematics. Programming is already gaining popularity, but there is another area of knowledge that involves all four letters of the acronym and deserves more attention in a school system that truly means to keep up with the times: robotics. A new kind of literacy With the recent advances in robotics and AI, the workforce is undergoing an unprecedented shift: millions of jobs will be performed by robots instead of humans in the near future and millions of new ones will be created, giving people who understand robots a significant advantage on the job market. The robotics revolution is comparable to the advent of computers: a few decades ago, computers were specialized tools only a few people could use, while today, basic computer literacy is essential for functioning in our lives. It is reasonable to predict that the next step in the progression will be the emergence of a robotics literacy that everyone should possess in order to contribute to a society in which the role of robots is growing steadily larger. Programming made fun Robotics and programming go hand in hand: the essential trait for a device to be defined as a robot is to be programmable to follow instructions, so it is possible to design a programming curriculum without robotics, but it is not possible to conceive a robotics class without programming. The advantage of having a robotics program in school, however, is that compared to a programming class, a course on robotics enhances multiple skills at once. 1. Fine motor skills: if the students are involved not just in programming, but also in physically building their robots, then robotics is a subject that keeps up with the latest technology trends while keeping them away from screens and encouraging them to use their hands and learn by doing. 2. Teamwork: the many phases of building, programming and testing a robot are usually performed by a team, which teaches students to work together for a common goal. 3. Thinking outside the box: creating a robot from scratch sharpens the students’ creativity and problem-solving abilities by giving them a task to perform and showing them that there are several valid ways to do it. Students are in complete control of the process, which keeps them engaged and attentive and makes the project rewarding and fun. 4. Perseverance: a robot built by a team of students likely will not work correctly on their first try. Assembling and programming a robot is a trial and error process that teaches the patience and humility to retrace your steps and correct your own mistakes, and a healthy dose of stubbornness that drives you to try harder next time. Going to school from home A robotics class is not the only way school and robots can interact fruitfully: a robot controlled by a student can be an effective strategy to attend school from a distance. This is called telepresence and can be a blessing for students who cannot be present in class because leaving their house or hospital room would be physically impossible or highly dangerous for their health, as is the case with people suffering from immune system disorders that make them unable to interact with common, non-sterilized environments. 
A telepresence robot is a computer-controlled device that can move at the user's command and represent them by showing their face on an embedded screen, while showing them the environment in turn and allowing a certain degree of interaction, such as navigating the building and having conversations. The principle is similar to a video conference, but with the added bonus that a robot is not bound to a single room: it can effectively substitute for the user's presence and give them an experience that is as close as possible to real attendance, both in academics and in social interaction with their peers, who no longer have to feel like their friend is gone once they get used to talking to a screen on wheels instead of a person. It is a piece of science fiction come true.

Discover more robotics tools for your classroom! Check our products and boost your students' STEM skills in a fun and engaging way with robots!

This article originally appeared on Acer for Education: https://eu-acerforeducation.acer.com/innovative-technologies/why-robotics-should-be-included-in-stem-education/
Great question! Alfred the Great was really important to the development of English identity, and is often remembered as a great English king—even though he wasn't really a king of England, because England as a defined country didn't yet exist. Alfred started out as King of Wessex, a historic county-kingdom within what is now southern England, and subsequently became the king of all the Anglo-Saxons. Alfred's key contribution to English identity can be best understood in relation to his influence on language, given how language influences identity. What united the Anglo-Saxons was their common language; at the beginning of Alfred's reign, though, even this was very loose, with multiple dialects warring within the Anglo-Saxon region. Alfred was extremely committed to the development of a united Anglo-Saxon tongue, particularly a semi-uniform written version of the language, and what he achieved in this arena helped those who spoke Anglo-Saxon to see themselves as a single people. By calling himself King of the Anglo-Saxons and defining a language for those Anglo-Saxons, Alfred created an idea of nationhood where one had not previously existed. Alfred's reign was a turbulent one. The island of Britain was still constantly beset by would-be invaders, and Alfred was able to see off Viking attacks and establish himself as a dominant ruler. This already meant he was viewed as a very strong king within the Anglo-Saxon culture in which he lived, but more important was what he then did with his power. Alfred reformed the legal system in his new lands so that it was uniform—and fair. He was also very concerned with the idea that everyone should understand the religion they professed to follow (a religion that, of course, was very new). He felt that it was impossible to expect people to follow a religion that was taught in Latin, a language outside the reach of normal people. As such, under Alfred's reign, literature in English began to be produced en masse. Over this period, people came to understand themselves as "English" because they shared several common factors, including: - Adherence to Alfredian legal codes; - Speaking Anglo-Saxon, now the language of primary education; and - Adherence to the Christian religion. These unifying factors helped create a sense of a larger kingdom where several had previously existed. By the end of Alfred's reign, emphasis upon written English also meant that Anglo-Saxon, although it still had variant forms, had reached a sort of standard. The extent to which this was important for the idea of identity can be seen through the attempts made by the Norman invaders to suppress and dismantle English when they invaded in the eleventh century. They did not want to encounter a unified English people with a common language. As such, they removed the language from use in court or legal settings, replacing it with French. During this period, English went underground, and once again came to divide into multiple dialectal forms so that the English would see themselves as a defeated and subordinate class ruled by the French-speaking Normans. However, it did survive in parochial settings, and the English identity Alfred created was strong enough that, several centuries later, it would once again oust Anglo-Norman as the language of court and the English people.
Winter War facts for kids

Quick facts for kids: Winter War (part of World War II)
Photo: A Finnish machine gun crew during the Winter War
- Belligerents: Finland against the Soviet Union and the Finnish Democratic Republic (a puppet state recognized only by the USSR)
- Commanders and leaders: Carl Gustaf Emil Mannerheim (Finland); Joseph Stalin and Otto Wille Kuusinen (Soviet Union)
- Finnish casualties and losses: 25,904 dead or missing; 957 civilians killed in air raids; 70,000 total casualties
- Soviet casualties and losses: 126,875 dead or missing; 188,671 wounded, injured or burned; 323,000 total casualties

The Winter War (30 November 1939 - 13 March 1940) was a conflict fought between the Soviet Union and Finland. It began when the Soviet Union tried to invade Finland soon after the Invasion of Poland. The Soviet military forces expected a victory over Finland in a few weeks, because the Soviet army had many more tanks and planes than the Finnish army. However, the Finnish forces resisted both better and longer than expected. One reason the Finnish forces did better is because they had good winter clothes and they wore white coats which camouflaged them in the snow. As well, the Finnish soldiers moved around on skis, which made it easy for them to sneak up on the Soviet soldiers. The Soviet army did not have good winter clothes, and they wore dark green coats, which made them easy to see in the snow. The defeated Finns had to give up 11% of their country. They tried to get it back in the Continuation War.
I live in the southern hemisphere near Melbourne, Australia (about 35 degrees south latitude). I have some visitors from the USA (Colorado) who say they noticed that the face of the moon looks different in the southern hemisphere than from their vantage point in the northern hemisphere. This view is not from a telescope or binoculars but just unaided eyesight. Would the moon's face be noticeably different from northern and southern hemisphere (unaided) viewpoints?

I'm not surprised they noticed a difference in the appearance of the moon. Had they tilted their heads and looked at the Moon upside down, it would have looked normal (to them anyway). In short, the moon looks upside down in the southern hemisphere (or in your case, the moon would look upside down in the northern hemisphere). I noticed exactly the same thing on my first trip to the southern hemisphere.

To understand why this happens, imagine for simplicity that the orbit of the Moon was exactly in the same plane as the Earth's equator. From the northern hemisphere, the Moon is in the southern sky because that's the direction of the Earth's equator. In the southern hemisphere the situation is reversed. Now imagine that you are standing on the equator. The Moon would be directly overhead. First face north and look straight up at the Moon. It should look like it does in Australia. Now turn and face south and look at the Moon. You are now looking at the Moon flipped from how it looked when facing north. This is how the moon looks in the northern hemisphere to your American friends.

The equator is a special place because the moon is overhead (at least in our thought experiment), and there's no preferred viewing direction. At higher or lower latitudes there is a preferred direction, namely the one when you're standing on your feet and not your hands, so you really only see the moon in one orientation.
Length is the distance from one point to another on an object, and it is usually measured in meters or feet. Feet and meters are both units of length. Feet are used for measuring length in the United Kingdom, the United States, and Canada, while meters are used in most other countries and parts of the world. One foot equals 12 inches, and one meter equals 39.37 inches, or about 3.28 feet.
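To make these conversion factors concrete, here is a minimal Python sketch (illustrative only; the function names are my own) built on 1 foot = 12 inches and 1 foot = 0.3048 meters:

```python
# Unit-conversion sketch based on the factors quoted above.
# 1 foot = 12 inches; 1 foot = 0.3048 meters (so 1 meter is about 3.28 feet).

FEET_PER_METER = 1 / 0.3048   # roughly 3.28
INCHES_PER_FOOT = 12

def feet_to_meters(feet: float) -> float:
    """Convert a length in feet to meters."""
    return feet * 0.3048

def meters_to_feet(meters: float) -> float:
    """Convert a length in meters to feet."""
    return meters * FEET_PER_METER

if __name__ == "__main__":
    print(meters_to_feet(1.0))                        # about 3.28 feet in a meter
    print(feet_to_meters(1.0))                        # 0.3048 meters in a foot
    print(meters_to_feet(1.0) * INCHES_PER_FOOT)      # about 39.37 inches in a meter
```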
Current models of how stars evolve lack magnetic fields as a fundamental ingredient. An international group of astronomers led by the University of Sydney has discovered that strong magnetic fields are common in stars, not rare as previously thought, which will dramatically impact our understanding of how stars evolve. Using data from NASA's Kepler mission, the team found that stars only slightly more massive than the Sun have internal magnetic fields up to 10 million times that of the Earth.

"Because only 5-10 percent of stars were previously thought to host strong magnetic fields, current models of how stars evolve lack magnetic fields as a fundamental ingredient," Associate Professor Stello said. "Such fields have simply been regarded as insignificant for our general understanding of stellar evolution."
There is widespread potassium deficiency, and it is common knowledge that an increased potassium intake lowers the risk of hypertension, which is the leading cause of stroke, cardiovascular disease, and early death. However, not many people know that potassium has a vital impact on blood sugar levels and the prevention of diabetes, just as it counteracts side effects of diuretics. The question is how much potassium do we need – and how does the balance between potassium and sodium (salt) affect our health?

Of all minerals, potassium is the one that we need in the largest quantities. 98 percent of our potassium is inside our cells. Potassium and sodium, which is mainly found outside of the cells, work in close collaboration. The potassium-sodium ratio is vital for the electrolyte balance of cells. Put differently, the balance between potassium and sodium ions creates an electric potential difference across the cell membrane (membrane potential) that determines the cells' ability to absorb nutrients, get rid of waste products, and maintain essential fluid balances. Potassium is essential to all cells, especially those responsible for nerve transmissions that control muscles, the heart, intestinal peristalsis, insulin sensitivity, and blood sugar levels. Our kidneys control the body's potassium levels, which must always be in proper balance with sodium. Too much sodium relative to potassium causes potassium depletion and disturbs the electrolyte balance and many other functions that depend on potassium.

Potassium deficiencies are widespread

Seaweed, beans, potatoes, almonds, apples, bananas, and other types of fruit and vegetables are rich in potassium. However, our modern, refined diets, which consist mainly of grain, meat, and dairy products and far fewer vegetables, contain substantially less potassium. The problem is only made worse by the fact that our intake of sodium in the form of table salt has increased substantially, and that makes us need potassium even more. There is also quite a lot of sodium in bread, cheese, chips, peanuts, sausages, cold cuts, ready meals, and numerous other kinds of industrially refined foods. Too much sugar, coffee, and alcohol, or loss of fluid caused by diarrhea, excessive sweating, and the use of diuretics, may also cause potassium deficiency. In addition, stress causes the adrenal hormone aldosterone to retain sodium and excrete potassium.

Different recommendations in the Nordic countries and the United States

The average potassium intake in the United States is around 2.6 grams per day, and only an estimated 3 percent of the adult population reaches the recommended intake, which is 4.7 grams in the United States. The Nordic Nutrient Recommendations are somewhat lower (3.1 grams for women and 3.5 grams for men). This shows that there is disagreement about the actual need for potassium, which depends on several factors. Nonetheless, there is widespread potassium deficiency in the United States as well as in Europe, and this has serious implications for public health and for the explosive growth rate of lifestyle-associated diseases.

Our modern lifestyle is what causes widespread potassium deficiency

The combination of refined food, too much salt, overconsumption of coffee and alcohol, and stress lowers our potassium status and causes the body to excrete too much potassium. The use of diuretics is also a problem.
Potassium and its role in blood pressure management

Potassium controls and lowers our blood pressure by means of various mechanisms that include enzyme processes and nerve impulses to the muscles and heart. It is even possible that increased potassium intake may lower blood pressure by increasing the excretion of sodium. Potassium is also able to increase insulin sensitivity and reduce inflammation. In fact, both type-2 diabetes and the early stage of diabetes (metabolic syndrome) are characterized by elevated blood pressure, insulin resistance, and inflammation. A new British study shows that elevated blood sugar levels lead to blood vessel constriction, which can cause the blood pressure to go up. In other words, sugar, not saturated fat, causes cardiovascular disease. It is also important to pay attention to potassium and chromium, both of which help control blood sugar levels. A diet that controls blood sugar levels combined with supplements of potassium may therefore have a positive effect on blood pressure. The same is the case with magnesium, physical activity, and weight management (it is particularly important to have a normal waistline).

Potassium, insulin resistance, and diabetes

Potassium influences the pancreas and its release of insulin, the hormone that helps convey sugar into the cells. Lack of potassium may lead to reduced insulin secretion and impaired glucose tolerance that reduces cellular glucose uptake. According to The Nurses' Health Study, high potassium intake is associated with a reduced risk of developing type-2 diabetes among women with a BMI of 29 or less. Another study that monitored 4,754 people for 20 years showed that 373 of the participants developed diabetes. Their potassium intake was generally lower than that of the participants who did not get diabetes. Fruit and vegetables are good sources of potassium, fiber, magnesium, and other nutrients that are beneficial for the cardiovascular system and blood sugar levels.

Anti-hypertensive medication disturbs blood sugar levels

As described, elevated blood pressure may be a result of getting too little potassium. It is therefore problematic that so many people manage elevated blood pressure levels with diuretics (thiazide preparations) that are associated with potassium deficiency. There is an established link between thiazide usage and low potassium levels in blood serum and impaired glucose tolerance, elevated blood sugar levels, increased risk of diabetes, and aggravation of existing diabetes. However, patients who lack potassium because of thiazide therapy can restore their insulin secretion and keep their blood sugar under control by taking supplements of potassium. This indicates that low potassium in blood serum is linked to blood sugar disturbances.

Did you know that headaches, fluid retention, constipation, heart rhythm disturbances, muscle cramps, and a tingling sensation in the arms and legs may also be caused by too little potassium?

We used to get more potassium than sodium from our diet

Now, it is typically the other way around, and this imbalance may actually be bad for our health. The Nordic Nutrient Recommendations advise that the salt intake of women and men stay below 6 and 7 grams daily, respectively (this corresponds to 2.4 and 2.8 grams of sodium). Also, the daily potassium intake, as mentioned earlier, should be above 3 grams. A minor potassium deficiency can be controlled with dietary measures – especially by eating more potassium-rich foods and less salt.
Potassium supplements should mainly be taken together with diuretics to avoid side effects. As the loss of potassium differs from one person to another, it is important to measure blood levels of potassium before and after treatment. Potassium supplements may also be considered in the case of elevated blood pressure and blood sugar disturbances. It is often a good idea to combine potassium supplements with supplements of magnesium and vitamin B6, which are also important for the fluid balance, blood pressure, and blood sugar levels.

References
Michael S. Stone. Potassium Intake, Bioavailability, Hypertension, and Glucose Control. Nutrients. 2016.
Houston M.C. The importance of potassium in managing hypertension. Curr. Hypertens. Rep. 2011. PubMed.
Zillich A.J. et al. Thiazide diuretics, potassium, and the development of diabetes: a quantitative review. Hypertension. 2006.
Goldner M.G., Zarowitz H., Akgun S. Hyperglycemia and glycosuria due to thiazide derivatives administered in diabetes mellitus. N. Engl. J. Med. 1960.
GPS

The Global Positioning System (GPS), originally Navstar GPS, is a satellite-based radionavigation system owned by the United States government and operated by the United States Air Force. It is a global navigation satellite system that provides geolocation and time information to a GPS receiver anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites. Obstacles such as mountains and buildings block the relatively weak GPS signals.

The GPS does not require the user to transmit any data, and it operates independently of any telephonic or internet reception, though these technologies can enhance the usefulness of the GPS positioning information. The GPS provides critical positioning capabilities to military, civil, and commercial users around the world. The United States government created the system, maintains it, and makes it freely accessible to anyone with a GPS receiver.

UTM (WGS84)

The Universal Transverse Mercator (UTM) conformal projection uses a 2-dimensional Cartesian coordinate system to give locations on the surface of the Earth. Like the traditional method of latitude and longitude, it is a horizontal position representation, i.e. it is used to identify locations on the Earth independently of altitude. However, it differs from that method in several respects. The UTM system is not a single map projection. The system instead divides the Earth into sixty zones, each being a six-degree band of longitude, and uses a secant transverse Mercator projection in each zone.

Source: Wikipedia

Use location data from your device to show what your current GPS and UTM coordinates are. The app works offline and has a small battery footprint.
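Because the UTM grid simply divides the globe into sixty 6-degree bands of longitude, the zone number for a point can be computed from its longitude alone. The sketch below illustrates that rule only; it is not code from the app described above, and it ignores the special grid-zone exceptions around Norway and Svalbard:

```python
# Illustrative sketch: compute the standard UTM zone number from a longitude.
# Zone 1 covers 180°W to 174°W, and each successive zone spans 6° of longitude.
# Special-case zones around Norway and Svalbard are deliberately ignored here.

def utm_zone(longitude_deg: float) -> int:
    """Return the UTM zone number (1-60) for a longitude in degrees (-180 to 180)."""
    if not -180.0 <= longitude_deg <= 180.0:
        raise ValueError("longitude must be between -180 and 180 degrees")
    if longitude_deg == 180.0:
        # The antimeridian belongs to zone 60, not a nonexistent zone 61.
        return 60
    return int((longitude_deg + 180.0) // 6) + 1

if __name__ == "__main__":
    print(utm_zone(-122.4))   # San Francisco -> zone 10
    print(utm_zone(144.96))   # Melbourne -> zone 55
```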
Kepler-186 is a stellar system of five planets with an Earth-size world in the habitable zone.

Simulated comparison of a sunset on Kepler-186f and Earth: on Kepler-186f the star looks dimmer but slightly larger.

All the known potentially habitable exoplanets so far are superterran worlds (aka super-Earths) somewhat larger than Earth. The potential for life on these worlds is difficult to relate to Earth, since there are no planets of comparable size in the Solar System and we know very little about them. Now a team of scientists led by Elisa V. Quintana from the SETI Institute and NASA Ames reports the discovery of Kepler-186f, the first terran (Earth-size) world in the habitable zone of a star.

Kepler-186f has a similar size to Earth and is most likely a rocky world. It orbits the M-dwarf star Kepler-186, along with four other inner planets that are as old as the Solar System (>4 Gyr), in the constellation Cygnus, 500 light years away. Kepler-186f receives only about 32% of the stellar flux that Earth does, less than present-day Mars (~43%). It could have a temperate climate if it has an atmosphere much denser than Earth's. Even Earth probably experienced at least one episode of global glaciation, 650 million years ago, with just a slightly lower stellar flux than today. However, early Mars had running surface liquid water with a stellar flux similar to that of Kepler-186f.

Kepler-186f was added to the Habitable Exoplanets Catalog with a low Earth Similarity Index (ESI) of 0.64 due to its potentially colder climate. Still, it could be a more Earth-like world if it is experiencing a much higher greenhouse effect than Earth. Nevertheless, Kepler-186f is now also the best candidate for a rocky world in the habitable zone compared with the other known potentially habitable worlds.

Figure 1. Artistic representation of Kepler-186f as a cold world with shallow oceans as compared to Earth. Other possible interpretations of Kepler-186f are as a snowball frozen world (Hoth-like) or a dry cold world (Mars-like).

Figure 2. Orbital distribution of planets in the stellar system Kepler-186 (top) compared to Kepler-62 (bottom). Both planets 'f' of Kepler-186 and Kepler-62 receive about the same stellar flux.

Figure 3. Analysis of the orbit of Kepler-186f. Its equilibrium temperature is around 192 K for a terrestrial-like albedo. For comparison, Mars has an equilibrium temperature of 210 K. Kepler-186f also orbits just within the outer stellar zone for tidally locked planets, but its rotational state is uncertain. The diagram does not show the orbits of the other inner planets of Kepler-186. More details about this figure are available in the Exoplanet Orbital Catalog.

Figure 4. The new lineup of up to 21 potentially habitable exoplanets according to the Habitable Exoplanets Catalog. Kepler-186f is the most Earth-size planet yet, but it also receives only one third of the energy from its star that Earth receives from the Sun. This significantly lowers its observable similarities with Earth as compared with other planets in the catalog.
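The ~192 K equilibrium temperature quoted for Figure 3 can be roughly reproduced from the stellar flux given in the text. The following sketch is a back-of-the-envelope check under assumptions of my own (an Earth-like Bond albedo of 0.3 and the standard fast-rotator equilibrium-temperature relation), not the catalog's actual calculation:

```python
# Back-of-the-envelope check of the ~192 K equilibrium temperature quoted above.
# Assumptions (not from the source): Earth-like Bond albedo A = 0.3 and the
# standard fast-rotator relation
#   T_eq = 278.5 K * S**0.25 * (1 - A)**0.25,
# where S is the stellar flux in units of Earth's insolation and 278.5 K is the
# zero-albedo equilibrium temperature at Earth's distance from the Sun.

def equilibrium_temperature(flux_rel_earth: float, bond_albedo: float) -> float:
    """Equilibrium temperature in kelvin for a given relative flux and Bond albedo."""
    return 278.5 * flux_rel_earth ** 0.25 * (1.0 - bond_albedo) ** 0.25

if __name__ == "__main__":
    print(round(equilibrium_temperature(0.32, 0.3)))  # ~192 K, matching the text
    print(round(equilibrium_temperature(1.00, 0.3)))  # ~255 K for Earth itself
```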
LA JOLLA, CA - A team of scientists from The Scripps Research Institute and other institutions has solved the structure of a key protein from the virus that caused last year's "swine flu" influenza epidemic. The structure reveals that the virus shares many features with influenza viruses common in the early 20th century, helping to explain why, in general, older individuals have been less severely affected by the recent outbreak than younger ones. The team's findings were published in the March 25, 2010, issue of Science Express, an advance, online publication of selected research papers from the prestigious journal Science.

In the study, the team describes the structure of the hemagglutinin (the influenza virus envelope protein) from the H1N1 swine flu virus that triggered the pandemic in 2009 and is still circulating in the human population. The team then compared the swine flu hemagglutinin protein with a range of different human H1N1 flu viruses from the past century. "Parts of the 2009 virus are remarkably similar to human H1N1 viruses circulating in the early 20th century," said Scripps Research Professor Ian Wilson, who was the senior author of the study. "Our findings provide strong evidence that exposure to earlier viruses has helped to provide some people with immunity to the recent influenza pandemic." The information should be useful for scientists and public health officials as they respond to current and future pandemics.

Influenza is a common viral infection of the lungs that affects millions of people annually and is a leading cause of death in the United States, contributing to around 50,000 deaths per year. Serious influenza outbreaks such as the deadly "Spanish flu" of 1918 have occurred when a virus adapted to birds jumps directly into humans or reassorts and infects another species, such as the pig, and then jumps into humans. Similar outbreaks occurred in 1957 and 1968. "For a pandemic to occur, there needs to be a naïve population, whose immune systems have not learned to recognize the virus and who can be infected," explained Rui Xu, a research associate in the Scripps Research Wilson lab who was first author of the paper with graduate student Damian Ekiert, also of the Wilson lab. "A pandemic outbreak is different from the seasonal flu, in which existing flu viruses circulate in the human population, gradually mutating as time goes on."

The most recent influenza outbreak, dubbed the "swine flu" by the media due to its recent origin in pigs, was first reported in Mexico in April 2009. The virus has now spread worldwide, and has contributed to at least 16,000 deaths, according to the World Health Organization. A vaccine is now available, but the virus remains a public health concern.

Almost as soon as the outbreak was first reported last April, the Scripps Research team set out to better understand the new influenza virus by examining its structure. Collaborating with colleagues at Mount Sinai School of Medicine, who provided a clone of the major surface antigen from the emerging virus, A/California/04/2009 (CA04), the scientists called on a technique called x-ray crystallography. In this method, scientists produce quantities of the viral protein and try to crystallize it. This crystal is then placed in front of a beam of x-rays, which diffract when they strike the atoms in the crystal. Based on the pattern of diffraction, scientists can reconstruct the shape of the original molecule.
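As general background on how a diffraction pattern encodes atomic spacing (this is textbook crystallography, not a detail taken from the Scripps study), Bragg's law, n·λ = 2d·sin(θ), relates the X-ray wavelength and scattering angle to the spacing between planes of atoms in the crystal. A minimal sketch with made-up example numbers:

```python
import math

# Illustrative Bragg's-law sketch (general crystallography background, not taken
# from the study described above): n * wavelength = 2 * d * sin(theta).

def bragg_spacing(wavelength_angstrom: float, two_theta_deg: float, order: int = 1) -> float:
    """Return the lattice-plane spacing d (in angstroms) from a diffraction angle 2-theta."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength_angstrom / (2.0 * math.sin(theta))

if __name__ == "__main__":
    # Example numbers only: a 1.0 angstrom beam diffracting at 2-theta = 20 degrees.
    print(round(bragg_spacing(1.0, 20.0), 2))  # about 2.88 angstroms
```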
The scientists chose to focus on the structure of the virus's hemagglutinin, a protein that is abundantly displayed on the viral surface. In addition to enabling the virus to infect cells of the host organism, hemagglutinin is the main antigenic determinant on the virus--in other words, it is what the immune system primarily recognizes and responds to by making antibodies (a type of immune molecule) and mounting an immune defense. Vulnerability to an individual influenza infection depends on how well a person's immune system recognizes the hemagglutinin. The scientists' initial experiments went extraordinarily well, and by June, the team was able to reconstruct the structure of the swine flu hemagglutinin. But what did the structure mean? That's when the hard work began. "One of the interesting aspects of the study to us was that the H1N1 subtype was already circulating in humans," said Xu. "That is the first time that we have seen such a phenomenon. How could the same sub-type of influenza virus induce a new pandemic?" Comparing the 2009 hemagglutinin protein with the hemagglutinin of other influenza samples, including the 1918 flu (a structure that Wilson and colleagues solved six years ago), helped provide answers. For the analysis, the scientists used all known human H1N1 strains between 1918 and 1957, and representative strains since 1977. The researchers found that while much of the hemagglutinin three-dimensional structure had been maintained among the different viruses, the amino acids (protein building blocks) on the viral surface were substantially different in the 2009 virus from seasonal strains. This could enable the virus to initially evade detection by the immune system. Strikingly, the scientists also found that one area of the hemagglutinin, called antigenic site Sa, was highly similar between the 2009 and the 1918 viruses. The similarity of the Sa site for the two viruses suggested that some individuals might be able to mount an immune response that could neutralize either virus. That would have remained an educated guess if Lady Luck hadn't intervened. In another flu project ongoing in the Wilson lab, Ekiert was working to determine the structure of an antibody that neutralized the 1918 influenza virus. The antibody had been isolated from a survivor of the 1918 Spanish flu. "As more information became available, the 1918 antibody suddenly became relevant to the swine flu study," said Ekiert. Could the particular antibody that Ekiert was working with, called 2D1, not only be effective against the 1918 virus, but also act against the 2009 swine flu? A study recently published in the Journal of Virology with researchers at Vanderbilt University, who are collaborators on this present work, showed that, indeed, mice challenged with the 2009 virus are protected by the administration of the antibody against the 1918 virus. The current Science Express study provides the structure of the 2D1 antibody in complex with the 1918 virus and addresses how this protection occurs. "There is a huge divergence among different influenza viruses," said Ekiert, "so that exposure to one won't confer protection against another. However, this study shows that prior exposure to viruses that were around decades ago can provide some protection against infection against a newly emerging pandemic." In addition to Xu, Ekiert, and Wilson, authors of the paper, "Structural basis of pre-existing immunity to the 2009 H1N1 pandemic influenza virus," are Jens C. Krause and James E. Crowe, Jr. 
of Vanderbilt University Medical Center and Rong Hai of the Mount Sinai School of Medicine. The work was supported by the National Institutes of Health (NIH), the Skaggs Institute for Chemical Biology, and predoctoral fellowships from the Achievement Rewards for College Scientists Foundation and the NIH Molecular Evolution Training Program. X-ray diffraction datasets were collected at the Stanford Synchrotron Radiation Lightsource and at the Advanced Photon Source, which are supported by the National Institutes of Health and the U.S. Department of Energy. About The Scripps Research Institute The Scripps Research Institute is one of the world's largest independent, non-profit biomedical research organizations, at the forefront of basic biomedical science that seeks to comprehend the most fundamental processes of life. Scripps Research is internationally recognized for its discoveries in immunology, molecular and cellular biology, chemistry, neurosciences, autoimmune, cardiovascular, and infectious diseases, and synthetic vaccine development. Established in its current configuration in 1961, it employs approximately 3,000 scientists, postdoctoral fellows, scientific and other technicians, doctoral degree graduate students, and administrative and technical support personnel. Scripps Research is headquartered in La Jolla, California. It also includes Scripps Florida, whose researchers focus on basic biomedical science, drug discovery, and technology development. Scripps Florida is located in Jupiter, Florida.
German occupation of Czechoslovakia

The German occupation of Czechoslovakia (1938–1945) began with the Nazi annexation of Czechoslovakia's northern and western border regions, known collectively as the Sudetenland, under terms outlined by the Munich Agreement. German leader Adolf Hitler's pretext for this effort was the alleged privations suffered by the ethnic German population living in those regions. New and extensive Czechoslovak border fortifications were also located in the same area.

Following the Anschluss of Austria to Nazi Germany in March 1938, the conquest of Czechoslovakia became Hitler's next ambition. The incorporation of the Sudetenland into Nazi Germany left the rest of Czechoslovakia weak, and it became powerless to resist subsequent occupation. On 15 March 1939, the German Wehrmacht moved into the remainder of Czechoslovakia and, from Prague Castle, Hitler proclaimed Bohemia and Moravia the Protectorate of Bohemia and Moravia. The occupation ended with the surrender of Germany following World War II.
Humans and gorillas last shared a common ancestor 10 million years ago, according to an analysis of the first full sequence of the gorilla genome. The gorilla is the last of the living great apes – humans, chimpanzees, gorillas and orangutans – to have its complete genetic sequence catalogued.

Scientists, led by researchers from the Wellcome Trust Sanger Institute near Cambridge and Baylor College of Medicine in Houston, also found that in 15% of the genome, humans and gorillas are closer to each other than humans are to chimpanzees, our closest animal relative. The genomes of all three species are, in any case, highly similar: humans and chimpanzees share more than 98% of their genes, while humans and gorillas share more than 96%.

The genetic sequence was taken from a female western lowland gorilla (Gorilla gorilla gorilla) named Kamilah and published in Nature. An initial analysis showed that genes involved in sensory perception, hearing and brain development underwent accelerated evolution in all three species. Genes associated with proteins that harden up skin were also particularly active in gorillas – which goes some way to explaining the large, tough knuckle pads on gorillas' hands.

"Gorillas are an interesting animal in their own right but the main reason they are of particular interest is because of their evolutionary closeness to us," said Aylwyn Scally, an author of the research from the Wellcome Trust Sanger Institute. "They're our second-closest evolutionary cousins after chimpanzees and knowing the content of the gorilla genome enables us to say quite a lot about an important period in human evolution when we were diverging from chimpanzees."

Comparing the sequences of humans, chimpanzees and gorillas has enabled scientists to put a more accurate clock on when the three species split from their last common ancestors. It was traditionally thought that the emergence of new species (known as "speciation") happens at a relatively localised point in time, but emerging evidence suggests that this is not necessarily the case and that species split over an extended period. Studying the gorilla genome suggests that the divergence of gorillas from the common ancestor of humans and chimpanzees happened around 10 million years ago. Humans and chimpanzees last shared a common ancestor around 6 million years ago. Eastern and western gorillas split some time in the last million years.

One curious find was the evolution of genes associated with hearing, which seem very similar between humans and gorillas. "Scientists had suggested that the rapid evolution of human hearing genes was linked to the evolution of language," said Chris Tyler-Smith, senior author from the Wellcome Trust Sanger Institute. "Our results cast doubt on this, as hearing genes have evolved in gorillas at a similar rate to those in humans." Scally adds that it could well be that there has been a parallel acceleration in these genes for two entirely different reasons – that human hearing has developed because of speech and gorilla hearing has developed to serve an entirely different, but as-yet-unknown, purpose.

The researchers said that studying the gorilla genome would shed light on a time when apes were fighting for survival across the world. "There's an interesting background story of great ape evolution," says Scally. "The common ancestor of all four great apes was sometime back in 15 to 20 million years ago.
At that time, it seems to have been a nice time to have been an ape – it was a golden age – a lot of the world was just right for the kind of environment for apes to live in. Since that time, the story has been of fragmentation and extinction – most of the great ape species that have existed have gone. Today, all the non-human apes are really endangered populations, they're living in forest refuges and population numbers are quite low. Humans look like an exception to that – we're all over the world now and live in places where you could never have had a primate beforehand." Today, gorillas are classified as critically endangered and populations have plummeted to below 100,000 individuals in recent decades due to poaching and disease. They are restricted to equatorial forests in countries including Cameroon, Central African Republic, Gabon, Nigeria, Republic of Congo and Angola. "As well as teaching us about human evolution, the study of great apes connects us to a time when our existence was more tenuous," say the researchers in Nature. "And in doing so, highlights the importance of protecting and conserving these remarkable species." • This article was amended on 15 March 2012. The original used the term "genetic code" as a synonym for "genome". This has been corrected.
Volcanic gases escape from fumaroles, or vents, around volcanically active areas. They can occur along tiny cracks or long fissures in a volcano, in groups called clusters or fields, and on the surfaces of lava and pyroclastic flows. Fumaroles have been known to last for centuries. They can also disappear in a few weeks or months if their source cools quickly. For example, Yellowstone National Park and the Kilauea volcano have many fumaroles and associated deposits; some have been there for years, while others have just recently appeared.
Transmission and pathogenesis of viral infections: The key to developing control programs (Proceedings)

Transmission of Viruses

Direct transmission requires that animals be in close, intimate contact because the animal-free state of the virus must be very short in order to effect a successful transmission of viable virus. Direct transmission of viral infections can be controlled by isolation of infected animals and by immunization of susceptible individuals. The mechanisms for direct transmission of viruses are via aerosol, direct contact, and sexual transmission; it is difficult to separate aerosol transmission clearly from direct contact. Generally, viruses that are transmitted via aerosol are more stable and are able to survive for longer periods of time outside of the host's bodily environment; conversely, viruses that are transmitted by direct intimate contact or sexually are more unstable and require rapid and immediate transmission in order to maintain their viability (infectivity). Enveloped viruses generally require close direct contact, or arthropod vectors, to effect transmission.

Vectors serve as intermediaries between the infected and the susceptible animals. Vectors are either biological or mechanical. Requisite for insect vector transmission is a high-level viremia in the infected mammalian host, without which the vector would be less likely to become infected during a blood meal on an infected animal.

Biological Vectors. Biological vectors support the replication of the virus, with concomitant amplification, before transmission to the new host. Bluetongue virus is transmitted by a biological insect vector. Vector control is, however, the single most important preventative measure.

Mechanical Vectors. Two types of mechanical vectors are of concern: inanimate vectors (fomites) and animate vectors, including ticks, flies, and other animals. Animate (insect) vectors may also acquire a virus during a blood meal, which is then transmitted to a new host without replication within the vector. Control and management mechanisms applicable to mechanical vectors are generally those of hygiene and disinfection. Therefore, the clinician should be aware of the significance of single-use materials (i.e., syringes and needles, endotracheal tubes, palpation sleeves) and of thorough disinfection of equipment between animals and premises. As with biological vectors, control of disease requires control and elimination of the vector(s).

Genetic (Vertical) Transmission

Genetic transmission occurs the least often of the mechanisms discussed but is of importance for oncornavirus, herpesvirus, and orbivirus infections. Genetic transmission is the transmission of virus from one generation to successive generations via the germ cell(s). It may occur by the simple attachment or association of an infectious virus or the viral genome to the germ cell. The genetic transmission of IBR and bluetongue viruses in semen has been demonstrated; these viruses have been shown to be absorbed to the sperm cell. The oncornavirus genome may be incorporated into the germ cell genome.
The international standard code format for terminal forecasts issued for airports, which took effect on 1 July 1996. Any of the many types of objects detected by radar. A radar target must have an index of refraction sufficiently different from that of the atmosphere to return a target signal to the radar by reflection, refraction, or scattering. Also, it must be near enough and have a large enough radar cross section that the target signal will exceed the threshold of detectability of the radar receiver. The target is then said to produce a detectable echo. Abbreviation for true airspeed. The quantity measured by a thermometer. Bodies in thermal equilibrium with each other have the same temperature. In gaseous fluid dynamics, temperature represents molecular kinetic energy, which is then consistent with the equation of state and with definitions of pressure as the average force of molecular impacts and density as the total mass of molecules in a volume. For an ideal gas, temperature is the ratio of internal energy to the specific heat capacity at constant volume. The local rate of change of a vector or scalar quantity with time at a given point in space. Thus, in symbols, ∂p/∂t is the pressure tendency, ∂ζ/∂t is the vorticity tendency, etc. Because of the difficulty of measuring instantaneous variations in the atmosphere, variations are usually obtained from the differences in magnitudes over a finite period of time and the definition of tendency is frequently broadened to include the local time variations so obtained. An example is the familiar three-hourly pressure tendency given in surface weather observations; in fact, the term "tendency" alone often means the pressure tendency. 1. Pertaining to temperature or heat. 2. A discrete buoyant element in which the buoyancy is confined to a limited volume of fluid. See plume. 3. A relatively small-scale, rising current of air produced when the atmosphere is heated enough locally by the earth's surface to produce absolute instability in its lowest layers. The use of this term is usually reserved to denote those currents either too small and/or too dry to produce convective clouds; thus, thermals are a common source of low-level clear-air turbulence. It is generally believed that the term originated in glider flying, and it is still very commonly used in this reference. thermal gradient - 1. (Or geothermal gradient.) According to Smithsonian Physical Tables, the rate of variation of temperature in soil and rock from the surface of the earth down to depths of the order of kilometers. It varies greatly from place to place, depending on the geological history of the region, the radioactivity of the underlying rocks, and the conductivity of the upper rocks. An average is about +10°C per km. 2. Same as temperature gradient. See also lapse rate. 1. Abbreviation for temperature-humidity index. 2. Abbreviation for time-height indicator. In synoptic meteorology, the vertical depth, measured in geometric or geopotential units, of a layer in the atmosphere bounded by surfaces of two different values of the same physical quantity, usually constant-pressure surfaces. See thickness chart. As used in aviation weather observations, descriptive of a sky cover that is predominantly transparent. According to the summation principle, at any level, if the ratio of the transparent sky cover to the total sky cover (opaque plus transparent) is one-half or more, then the cloud layer at that level must be classified as "thin." 
It is denoted by the symbol "-" preceding the appropriate sky cover symbol. Abbreviation for Temperature-Humidity Infrared Radiometer. The sound emitted by rapidly expanding gases along the channel of a lightning discharge. Some three-fourths of the electrical energy of a lightning discharge is expended, via ion-molecule collisions, in heating the atmospheric gases in and immediately around the luminous channel. In a few tens of microseconds, the channel rises to a local temperature of the order of 10 000°C, with the result that a violent quasi-cylindrical pressure wave is sent out, followed by a succession of rarefactions and compressions induced by the inherent elasticity of the air. These compressions are heard as thunder. Most of the sonic energy results from the return streamers of each individual lightning stroke, but an initial tearing sound is produced by the stepped leader; and the sharp click or crack heard at very close range, just prior to the main crash of thunder, is caused by the ground streamer ascending to meet the stepped leader of the first stroke. Thunder is seldom heard at points farther than 15 miles from the lightning discharge, with 25 miles an approximate upper limit, and 10 miles a fairly typical value of the range of audibility. At such distances, thunder has the characteristic rumbling sound of very low pitch. The pitch is low when heard at large distances only because of the strong attenuation of the high-frequency components of the original sound. The rumbling results chiefly from the varying arrival times of the sound waves emitted by the portions of the sinuous lightning channel that are located at varying distances from the observer, and secondarily from echoing and from the multiplicity of the strokes of a composite flash. See electrometeor. (Sometimes called electrical storm.) In general, a local storm, invariably produced by a cumulonimbus cloud and always accompanied by lightning and thunder, usually with strong gusts of wind, heavy rain, and sometimes with hail. It is usually of short duration, seldom over two hours for any one storm. A thunderstorm is a consequence of atmospheric instability and constitutes, loosely, an overturning of air layers in order to achieve a more stable density stratification. A strong convective updraft is a distinguishing feature of this storm in its early phases. A strong downdraft in a column of precipitation marks its dissipating stages. Thunderstorms often build to altitudes of 40 000-50 000 ft in midlatitudes and to even greater heights in the Tropics; only the great stability of the lower stratosphere limits their upward growth. A unique quality of thunderstorms is their striking electrical activity. The study of thunderstorm electricity includes not only lightning phenomena per se but all of the complexities of thunderstorm charge separation and all charge distribution within the realm of thunderstorm influence. In U.S. weather observing procedure, a thunderstorm is reported whenever thunder is heard at the station; it is reported on regularly scheduled observations if thunder is heard within 15 minutes preceding the observation. Thunderstorms are reported as light, medium, or heavy according to 1) the nature of the lightning and thunder; 2) the type and intensity of the precipitation, if any; 3) the speed and gustiness of the wind; 4) the appearance of the clouds; and 5) the effect upon surface temperature. 
From the viewpoint of the synoptic meteorologist, thunderstorms may be classified by the nature of the overall weather situation, such as airmass thunderstorm, frontal thunderstorm, and squall-line thunderstorm. Abbreviation for traveling ionospheric disturbances. 1. The periodic rising and falling of the earth's oceans and atmosphere. It results from the tide-producing forces of the moon and sun acting upon the rotating earth. This disturbance actually propagates as a wave through the atmosphere and along the surface of the waters of the earth. Atmospheric tides are always so designated, whereas the term "tide" alone commonly implies the oceanic variety. Sometimes, the consequent horizontal movement of water along the coastlines is also called "tide," but it is preferable to designate the latter as tidal current, reserving the name tide for the vertical wavelike movement. See equatorial tide, neap tide, spring tide, tropic tide. 2. See rip current, red tide, storm tide. Duration as measured by some clock. Atomic clocks give the most accurate measure of time. Less regular timekeepers are those based on the rotation of the earth and other bodies of the solar system. In oceanography, a three-dimensional, tonguelike intrusion of finite extent in the along-front direction. See interleaving. 1. Generally, the disposition of the major natural and man-made physical features of the earth's surface, such as would be entered on a map. This may include forests, rivers, highways, bridges, etc., as well as contour lines of elevation, although the term is often used to denote elevation characteristics (particularly orographic features) alone. 2. The study or process of topographic mapping. 1. A violently rotating column of air, in contact with the ground, either pendant from a cumuliform cloud or underneath a cumuliform cloud, and often (but not always) visible as a funnel cloud. When tornadoes do occur without any visible funnel cloud, debris at the surface is usually the indication of the existence of an intense circulation in contact with the ground. On a local scale, the tornado is the most intense of all atmospheric circulations. Its vortex, typically a few hundred meters in diameter, usually rotates cyclonically (on rare occasions anticyclonically rotating tornadoes have been observed) with wind speeds as low as 18 m s⁻¹ (40 mph) to wind speeds as high as 135 m s⁻¹ (300 mph). Wind speeds are sometimes estimated on the basis of wind damage using the Fujita scale. Some tornadoes may also contain secondary vortices (suction vortices). Tornadoes occur on all continents but are most common in the United States, where the average number of reported tornadoes is roughly 1000 per year, with the majority of them on the central plains and in the southeastern states (see Tornado Alley). They can occur throughout the year at any time of day. In the central plains of the United States they are most frequent in spring during the late afternoon. See also supercell tornado, nonsupercell tornado, gustnado, landspout, waterspout. 2. A violent thundersquall in West Africa and adjacent Atlantic waters. total cloud cover - Fraction of the sky hidden by all visible clouds. A mirage in which the angular height of the image is greater than that of the object. As the width is unaffected (the angular width of the image remains that of the object because the refractive index gradient is vertical), the aspect ratio is altered and distant images appear vertically enlarged. 
Towering often accompanies sinking (distant features appear depressed and enlarged), but it can also accompany looming. Compare stooping. towering cumulus - A descriptive term, used mostly in weather observing, for cumulus congestus. 1. In general, an unmeasurable (less than 0.01 in.) quantity of precipitation. 2. An insignificantly small quantity. 3. The record made by any self-registering instrument. Thus, one may speak of the barograph trace, the hygrothermograph trace, etc. 1. See trade winds. 2. Of or pertaining to the trade winds or the region in which the trade winds are found. Common contraction for trade winds. (Or path.) A curve in space tracing the points successively occupied by a particle in motion. At any given instant the velocity vector of the particle is tangent to the trajectory. In steady-state flow, the trajectories and streamlines of the fluid parcels are identical. Otherwise, the curvature of the trajectory K_T is related to the curvature of the streamline K_S by K_T = K_S - (1/V)(∂ψ/∂t), where V is the parcel speed and ∂ψ/∂t is the local change of the wind direction. The curvatures and wind change are positive for the cyclonic sense of flow. See energy transfer, conduction, mixing, exchange coefficients, transport. The movement of a substance or characteristic. Characteristics that can be transported in the atmosphere are heat (temperature), moisture, momentum, chemicals, turbulence, etc. The transport is sometimes interpreted as a flux density (characteristic per unit area per time), or as a flow rate (characteristic per time). See transport processes. tropical air - A type of air mass with characteristics developed over low latitudes. Maritime tropical air (mT), the principal type, is produced over the tropical and subtropical seas. It is very warm and humid and is frequently carried poleward on the western flanks of the subtropical highs. Continental tropical air (cT) is produced over subtropical arid regions and is hot and very dry. See airmass classification, trade air; compare polar air. tropical depression - A tropical cyclone with a closed wind circulation and maximum surface winds up to 17 m s⁻¹ (34 knots). tropical disturbance - A migratory, organized region of convective showers and thunderstorms in the Tropics that maintains its identity for at least 24 hours but has no closed wind circulation. The system may or may not be associated with a detectable perturbation of the low-level wind or pressure field. tropical storm - See tropical cyclone. tropical upper-tropospheric trough - A semipermanent trough extending east-northeast to west-southwest from about 35°N in the eastern Pacific to about 15°-20°N in the central west Pacific. A similar structure exists over the Atlantic Ocean, where the mean trough extends from Cuba toward Spain. 1. Any portion of the earth characterized by a tropical climate. 2. Same as Torrid Zone. See Tropic of Cancer, Tropic of Capricorn. The boundary between the troposphere and stratosphere, usually characterized by an abrupt change of lapse rate. The change is in the direction of increased atmospheric stability from regions below to regions above the tropopause. Its height varies from 15 to 20 km (9 to 12 miles) in the Tropics to about 10 km (6 miles) in polar regions. In polar regions in winter it is often difficult or impossible to determine just where the tropopause lies, since under some conditions there is no abrupt change in lapse rate at any height. 
It has become apparent that the tropopause consists of several discrete, overlapping "leaves," a multiple tropopause, rather than a single continuous surface. In general, the leaves descend, step-wise, from the equator to the poles.

That portion of the atmosphere from the earth's surface to the tropopause; that is, the lowest 10-20 km (6-12 mi) of the atmosphere; the portion of the atmosphere where most weather occurs. The troposphere is characterized by decreasing temperature with height, appreciable vertical wind motion, appreciable water vapor, and weather. Dynamically, the troposphere can be divided into the following layers: surface boundary layer, Ekman layer, and free atmosphere. See atmospheric shell.

In meteorology, an elongated area of relatively low atmospheric pressure; the opposite of a ridge. The axis of a trough is the trough line. This term is commonly used to distinguish the previous condition from the closed circulation of a low (or cyclone), but a large-scale trough may include one or more lows, an upper-air trough may be associated with a lower-level low, and a low may have one or more distinct troughs radiating from it. See front, dynamic trough, easterly wave, equatorial wave.

trough aloft - Same as upper-level trough.

Abbreviation for Total Totals index. See stability index.

1. Irregular fluctuations occurring in fluid motions. It is characteristic of turbulence that the fluctuations occur in all three velocity components and are unpredictable in detail; however, statistically distinct properties of the turbulence can be identified and profitably analyzed. Turbulence exhibits a broad range of spatial and temporal scales resulting in efficient mixing of fluid properties. Analysis reveals that the kinetic energy of turbulence flows from the larger spatial scales to smaller and smaller scales and ultimately is transformed by molecular (viscous) dissipation to thermal energy. Therefore, to maintain turbulence, kinetic energy must be supplied at the larger scales. See also ocean mixing. 2. Random and continuously changing air motions that are superposed on the mean motion of the air. See aircraft turbulence.

1. A specific classification of aircraft having the same basic design, including all modifications that result in a change in handling or flight characteristics. 2. See weather type.
What is Influenza?

Influenza, or the flu, is a viral infection spread from person to person primarily by respiratory droplets created by coughing, sneezing, or talking. The flu can cause mild to severe illness, and at times can lead to death. Complications of the flu can include ear and sinus infections, bronchitis, pneumonia, dehydration, and worsening of chronic medical conditions such as asthma, diabetes, and congestive heart failure. Anyone can get the flu. Flu strikes suddenly and can last several days. Symptoms vary by age, but can include:
- sore throat
- muscle aches
- runny or stuffy nose

Why get a flu vaccination every year?

Flu viruses are constantly changing; the flu vaccine may be updated from one season to the next to protect individuals against the most recent and most commonly circulating viruses. A person's protection from vaccination declines over time, so an annual vaccination is needed and recommended for optimal protection.

Who should get the flu vaccine?

The Missouri Department of Health and Senior Services recommends annual vaccination against flu for all people six months of age and older, unless they have a condition or medical reason not to get the vaccine. It is especially important for young children, pregnant women, older people and people with chronic health problems.

Can I get the flu from the vaccine?

No. The most common side effect associated with receiving a flu vaccine is a sore arm where the flu shot was given. You are not fully protected from the flu until two weeks after receiving the vaccine. There is no live flu virus in flu shots. They cannot cause the flu.

If I had the flu already this season, am I protected for the rest of the year?

No. While you may have developed immunity against the virus that infected you, it does not guarantee that you have immunity against other flu viruses that are circulating during the same season.

What can I do to protect myself from getting the flu?

- Wash your hands.
- Get the flu vaccine each year.
- Avoid touching your eyes, nose, or mouth.
- Avoid close contact with people who are sick.
- Cover your mouth and nose when coughing or sneezing.

Who should not get the flu vaccination?

- If you have any severe, life-threatening allergies. If you ever had a life-threatening allergic reaction after a dose of flu vaccine, you may be advised not to get vaccinated. Most, but not all, types of flu vaccine contain a small amount of egg protein.
- If you ever had Guillain-Barre Syndrome (also called GBS). Some people with a history of GBS should not get this vaccine. This should be discussed with your doctor.
- If you are not feeling well. It is usually okay to get the flu vaccine when you have a mild illness, but you might be asked to come back when you feel better.
Skin cancer is a malignant tumor that grows in the skin cells. In the U.S. alone, more than 2 million Americans are expected to be diagnosed in 2013 with nonmelanoma skin cancer, and more than 76,000 are expected to be diagnosed with melanoma, according to the American Cancer Society. There are three main types of skin cancer:

Basal cell carcinoma: Basal cell carcinoma accounts for approximately 80 percent of all skin cancers. This highly treatable cancer starts in the basal cell layer of the epidermis (the top layer of skin) and grows very slowly. Basal cell carcinoma usually appears as a small, shiny bump or nodule on the skin - mainly those areas exposed to the sun, such as the head, neck, arms, hands, and face. It most commonly occurs among people with light-colored eyes, hair, and complexion.

Squamous cell carcinoma: Squamous cell carcinoma, although more aggressive than basal cell carcinoma, is highly treatable. It accounts for about 20 percent of all skin cancers. Squamous cell carcinoma may appear as nodules or red, scaly patches of skin, and may be found on sun-exposed areas, such as the face, ears, lips, and mouth. However, if left untreated, squamous cell carcinoma can spread to other parts of the body. This type of skin cancer is usually found in fair-skinned people.

Malignant melanoma: Malignant melanoma accounts for a small percentage of all skin cancers, but accounts for most deaths from skin cancer. Malignant melanoma starts in the melanocytes - cells that produce pigment in the skin. Malignant melanomas sometimes begin as an abnormal mole that then turns cancerous. This cancer may spread quickly. Malignant melanoma most often appears on fair-skinned men and women, but people with all skin types may be affected.

To help find melanoma early, it is important to examine your skin on a regular basis and become familiar with moles and other skin conditions, in order to better identify changes. Certain moles are at higher risk for changing into malignant melanoma. Moles that are present at birth (congenital nevi) and atypical moles (dysplastic nevi) have a greater chance of becoming malignant. Recognizing changes in moles, by following this ABCD chart, is crucial in detecting malignant melanoma at its earliest stage. The warning signs are:

Asymmetry: When half of the mole does not match the other half
Border: When the border (edges) of the mole are ragged or irregular
Color: When the color of the mole varies throughout
Diameter: If the mole's diameter is larger than a pencil's eraser

Photographs Used By Permission: National Cancer Institute

Melanomas vary greatly in appearance. Some melanomas may show all of the ABCD characteristics, while others may show few or none. Always consult your doctor for a diagnosis.

A risk factor is anything that may increase a person's chance of developing a disease. It may be an activity, such as smoking, diet, family history, or many other things. Different diseases, including cancers, have different risk factors. Although these factors can increase a person's risk, they do not necessarily cause the disease. Some people with one or more risk factors never develop the disease, while others develop disease and have no known risk factors. But knowing your risk factors for any disease can help to guide you into the appropriate actions, including changing behaviors and being clinically monitored for the disease.
Skin cancer is more common in fair-skinned people, especially those with blond or red hair and light-colored eyes. Skin cancer is rare in children. However, no one is safe from skin cancer. Other risk factors include:

The American Academy of Dermatology (AAD) recommends the following steps to help reduce your risk of skin cancer:

The American Academy of Pediatrics approves of the use of sunscreen on infants younger than 6 months old only if adequate clothing and shade are not available. Parents should still try to avoid sun exposure and dress the infant in lightweight clothing that covers most surface areas of skin. However, parents also may apply a minimal amount of sunscreen to the infant's face and back of the hands.

Remember, sand and pavement reflect UV rays even under an umbrella. Snow is a particularly good reflector of UV rays.

Finding suspicious moles or skin cancer early is the key to treating skin cancer successfully. A skin self-exam is usually the first step in detecting skin cancer. The following suggested method of self-examination comes from the AAD (you will need a full-length mirror, a hand mirror, and a brightly lit room):

Specific treatment for skin cancer will be determined by your doctor based on:

There are several kinds of treatments for skin cancer, including the following:
The midpoint formula is used to find the point (its coordinate values) that is located exactly halfway between two other points in a plane. The formula is widely used in geometry. The coordinates of the point (x, y) that lies exactly halfway between the two points (x1, y1) and (x2, y2) are given by:

x = (x1 + x2)/2, y = (y1 + y2)/2

Similarly, if we want to find the midpoint of a segment in three-dimensional space, we can determine it using:

x = (x1 + x2)/2, y = (y1 + y2)/2, z = (z1 + z2)/2
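To make the formula concrete, here is a minimal Kotlin sketch (the point class and function names are illustrative choices, not part of any standard library) that computes midpoints in two and three dimensions:

```kotlin
// A simple point type; z defaults to 0.0 so the same class works for 2D points.
data class Point3(val x: Double, val y: Double, val z: Double = 0.0)

// Midpoint of the segment joining p and q: each coordinate is the average
// of the corresponding coordinates of the two endpoints.
fun midpoint(p: Point3, q: Point3): Point3 =
    Point3((p.x + q.x) / 2, (p.y + q.y) / 2, (p.z + q.z) / 2)

fun main() {
    // 2D example: the midpoint of (1, 2) and (5, 6) is (3, 4)
    println(midpoint(Point3(1.0, 2.0), Point3(5.0, 6.0)))
    // 3D example: the midpoint of (0, 0, 0) and (2, 4, 6) is (1, 2, 3)
    println(midpoint(Point3(0.0, 0.0, 0.0), Point3(2.0, 4.0, 6.0)))
}
```

Because the 2D case is simply the 3D case with z = 0, one function covers both.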
A Milliarium (plural milliaria) was a stone that was placed alongside Roman roads. Such stones were used from about the 3rd century BC. They marked the distance between towns and were placed at intervals of one Roman mile. A Roman mile was 1,000 paces (mille passus), or about 1.5 kilometres (0.93 mi). In the Celtic provinces, leagues were often used instead; a league was 1,500 paces, or about 2.2 kilometres (1.4 mi). Very often the stones contain the following information:
- The name of the person or emperor who built or repaired the road, usually with all titles.
- The distance between the starting point and the point where the milliarium was placed. Some milliaria list the distance to a larger settlement instead.

Even though they look like modern road signs showing the distance to given places, this was probably not their primary function. What seems more likely is that they were used to show the power of the person who erected them; some were probably used for propaganda. Very often, the inscription of the emperor was in Latin, but the inscription of the distance was in Greek. The ordinary population could probably read the second part, but not the first. Today, between 7,000 and 8,000 such stones are known.
Don’t like spiders? Well, here’s one that will grow on you! Located about 160,000 light years away in the web of the Large Magellanic Cloud, star-forming region 30 Doradus is best known as the “Tarantula Nebula”. But don’t let it “bug” you… this space-borne arachnid is home to giant stars whose intense radiation drives stellar winds that blast through the surrounding gases to give us an incredible view! When seen through the eyes of the Chandra X-ray Observatory, these huge shockwaves of X-ray energy heat the surrounding gaseous environment to multi-millions of degrees and show up as blue. The supernova detonations blast their way outward… gouging out “bubbles” in the cooler gas and dust. These show up hued orange when observed through infrared emissions recorded by the Spitzer Space Telescope. What’s so special about the Tarantula? Because it is so close, it’s a prime candidate for studying an active HII region. This stellar nursery is the largest in our Local Group and a perfect laboratory for monitoring stellar evolution. Right now astronomers are intensely interested in what causes growth on such a large scale – and their current findings show it doesn’t have anything to do with pressure and radiation from the massive stars. However, an earlier study had opposing conclusions when it came to 30 Doradus’ central regions. By employing the Chandra Observatory observations, we may just find different opinions! “Observations show that star formation is an inefficient and slow process. This result can be attributed to the injection of energy and momentum by stars that prevents free-fall collapse of molecular clouds. The mechanism of this stellar feedback is debated theoretically; possible sources of pressure include the classical warm H II gas, the hot gas generated by shock heating from stellar winds and supernovae, direct radiation of stars, and the dust-processed radiation field trapped inside the H II shell,” says Laura Lopez (et al.). “By contrast, the dust-processed radiation pressure and hot gas pressure are generally weak and not dynamically important, although the hot gas pressure may have played a more significant role at early times.” Original Story Source: Chandra News Release. For Further Reading: What Drives the Expansion of Giant H II Regions?: A Study of Stellar Feedback in 30 Doradus.
Behavioral therapy is a treatment that helps change potentially self-destructive behaviors. It is also called behavioral modification or cognitive behavioral therapy. Medical professionals use this type of therapy to replace bad habits with good ones. The therapy also helps you cope with difficult situations. It is most often used to treat anxiety disorders. However, you don’t have to be diagnosed with a mental health disorder to benefit.

Behavioral therapy is used by psychotherapists, psychiatrists, and other qualified medical professionals. It is usually used to help treat anxiety and mood disorders. These include:
- obsessive-compulsive disorder (OCD)
- post-traumatic stress disorder (PTSD)
- social phobia
- bipolar disorder

This treatment can help patients cope with certain mental disorders. It can also be used to treat:
- personality disorders
- substance abuse
- eating disorders

This therapy is also used on patients with chronic diseases to help manage pain. For example, cancer patients use learned techniques to better cope with radiation therapy. Doctors often recommend behavioral modification to pregnant women who can’t safely take medications. This form of treatment can also help with emotional grief.

Therapists create treatment plans specifically tailored to individual conditions. Some exercises may include:
- discussions about coping mechanisms
- role playing
- breathing and relaxation methods
- positive reinforcement
- activities to promote focus
- journal writing
- social skills training
- modifications in responses to anger, fear, and pain

Therapists sometimes ask patients to think about situations that scare them. The goal is not to frighten them but to help them develop different coping skills.

The general benefit is increased quality of life. Specific benefits vary depending on what condition is being treated. These can include:
- reduced incidents of self-harm
- improved social skills
- better functioning in unfamiliar situations
- improved emotional expression
- fewer outbursts
- better pain management
- ability to recognize the need for medical help

The goal of behavioral therapy is to limit self-harm. The risks of this treatment are minimal. Some patients consider the emotional aspects of the sessions risky. Exploring feelings and anxieties can cause bursts of crying and anger. The emotional aftermath of therapy can be physically exhausting and painful. A therapist will help to improve coping mechanisms and to minimize any side effects from therapy.

Generally, a primary physician or neurologist will refer patients to another doctor who specializes in behavioral therapy. Some psychotherapists also perform these treatments. Always check the credentials of your therapist. A credible behavioral therapist should have a degree as well as a license or certification. Because therapy sessions are frequent, it is important that the patient and doctor get on well. Patients can request a consultation before beginning treatment.

Therapy sessions can become a financial burden. Some insurance providers do cover behavioral therapy. Others may only cover a portion of the costs or allot a certain number of sessions per year. Before beginning therapy, patients should discuss coverage with their health insurance company and create a payment plan.

Behavioral therapy is not a cure for any condition. It is a teaching method to help cope with everyday life. Depending on individual needs, a person may only need it on a short-term basis.
The exact length of a treatment plan depends on individual goals and progress made. During treatment it is important to continue taking any medications as prescribed by a doctor. Some research shows that learned techniques in therapy may gradually reduce the need for medicine. However, each case is different. Speak with a doctor if treatment doesn’t seem to be working.
New technology and know-how suggests that the builders of Stonehenge were Welsh. A breakthrough study using modern technology claims to have identified the builders of England’s most mysterious monument. Previous examinations of the stone circle in Wiltshire have focused on the massive bluestone pillars, but it was in the remains buried at the site that experts found evidence that suggests the builders may have been Welsh. The 25 cremated remains were first discovered by Colonel William Hawley in the 1920s. Hawley’s team excavated them from 56 pits that dot the inner circumference and ditch of the monument. These pits are also known as Aubrey Holes. Hawley lacked the means to examine the remains further and reburied them for later study, which has commenced now, nearly 100 years later. The new study of these remains has identified the regions from which they came. Fox News reports: A groundbreaking new analysis of the 25 cremated remains buried at the prehistoric monument in Wiltshire has revealed that 10 of them lived nowhere near the bluestones. Instead they came from western Britain, and half of those 10 possibly came from 140 miles away in Southwest Wales (where the earliest Stonehenge monoliths have also been traced back to). The remaining 15 could be locals from the Wiltshire area or other descendants of migrants from the west. The study notes that these were most likely a mix of men and women, all of high social standing. It is unknown if the individuals died shortly before interment at Stonehenge, or if their descendants brought their bones back to Stonehenge several generations later. The oldest bones were dated to around 3,000 BC, and the rest follow in a 500-year span. Lead author of the study John Pouncett said: “The range of dates raises the possibility that for centuries people could have been brought to Stonehenge for burial with the stones.” The team of scientists, led by researchers from Oxford, cannot say for certain that these were the actual builders of Stonehenge, but in dating the remains they found the earliest cremation date to coincide with the first bluestones that make up the inner circle. The study, published in Scientific Reports, found that people and materials were transported between Wiltshire and Pembrokeshire, nearly 100 miles away. Some of these people, it says, decided to settle in Wiltshire. The key breakthrough came when researchers recognized that cremation at extreme temperatures can crystalize a skull and store the chemical signature of its origins. Co-author Dr. Christophe Snoeck demonstrated that cremated bones can retain their strontium isotope composition. He said that “about 40 percent of the cremated individuals did not spend their later lives on the Wessex chalk where their remains were found.” Pouncett concluded: “The cremated remains from the enigmatic Aubrey Holes and updated mapping of the biosphere suggest that people from the Preseli Mountains not only supplied the bluestones used to build the stone circle but moved with the stones and were buried there too.”
A freshwater biome food chain is likely to be made up of micro-organisms, decaying substances, detritus, animals without backbones, small fish, large fish, eels and birds. The species present vary depending on the physical location of the particular biome.

A food chain normally represents the relationships of organisms within a given ecosystem. The food chain shows how each organism is important to the others and to the ecosystem as a whole. At the bottom of the food chain in a freshwater biome one is likely to notice micro-organisms and decaying substances. These are the basic things that enable life to begin. Right on top of these are likely to be plants such as phytoplankton. On the next level are animals without backbones such as stonefly, midge, caddis and mayfly. These animals are at this position because they are able to eat the plants available. The next level of the food chain is likely to have small fish which feed on the animals without backbones. Right on top of small fish are large fish which are able to eat both the small fish and some animals without backbones. At the top of the freshwater biome food chain, one is likely to notice birds such as ducks and herons which feed on all kinds of fish.
The Scientific Slugger imitates a ball being hit perfectly by a major league player. In order to see what makes a home run, try adjusting the strength of your swing and the angle at which the ball leaves the bat. You can also vary the pitch speed to create more complex combinations. Try changing one variable at a time, and notice what happens.

The distance a baseball travels depends on two primary factors: the angle at which the ball leaves the bat, and how fast the ball is hit. The speed of the ball depends on both the speed of the pitch and the speed of the bat. If the bat is standing still and the ball hits it, the ball will bounce off the bat with most, but not all, of the pitch speed. (Some of the energy is wasted in the friction of deforming the ball, making a sound, etc.) If the ball is standing still and is hit by the bat, it's given a good portion of the bat's speed. Combine the two and you can see that a pitched ball hitting a swinging bat gains a good portion of the sum of both the pitch and the bat speed.

Gravity is always pulling downwards on the ball. If you hit the ball straight up, it spends quite a bit of time in the air, but doesn't travel far from home plate. If you hit the ball horizontally, as in a line drive, the ball moves away from home plate at maximum velocity, but quickly hits the ground because of gravity -- still not very far from home plate. To maximize your hitting distance, you need both a high horizontal velocity AND you need to keep the ball in the air for a longer time. You can do this by hitting the ball at an upward angle.

If there were no air resistance (that is, if a ball didn't have to make its way through the air on its way out of the park), the ball would travel nearly twice as far. Air resistance depends on humidity, temperature, and altitude: to make a ball go farther, you want high humidity, high temperature, and high altitude. The Scientific Slugger is set for constant air resistance based on zero humidity, at 56 degrees Fahrenheit, all played at sea level.
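The no-air-resistance case described above can be sketched in a few lines of code. This is only a rough illustration of the angle/speed trade-off under ideal conditions, not the exhibit's actual model; the function name and the sample exit speed are assumptions for the example, and real fly balls travel roughly half as far because of drag.

```kotlin
import kotlin.math.PI
import kotlin.math.sin

// Horizontal distance travelled by a ball launched at `speed` (m/s) and
// `angleDeg` degrees above horizontal, ignoring air resistance and
// assuming it is hit from ground level: range = v^2 * sin(2*angle) / g.
fun rangeNoDrag(speed: Double, angleDeg: Double, g: Double = 9.81): Double {
    val angle = angleDeg * PI / 180.0
    return speed * speed * sin(2 * angle) / g
}

fun main() {
    // With no drag, 45 degrees gives the longest hit for a given exit speed.
    for (angle in listOf(15.0, 30.0, 45.0, 60.0)) {
        println("Angle $angle deg -> ${"%.1f".format(rangeNoDrag(45.0, angle))} m")
    }
}
```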
Degree symbol is a superscripted or raised small circle. It is commonly used to denote temperatures (Celsius, Centigrade, Fahrenheit, etc.), angles, and geographic coordinates (e.g. 37°C temperature or 45° angle). In this tutorial, we will learn various methods of typing the degree symbol in MS Word documents, Excel spreadsheets, PowerPoint presentations and HTML webpages. We will also learn about the Unicode of the degree symbol.

There are several methods of typing the degree symbol in MS Word. This symbol can either be typed with the keyboard or inserted as a symbol. I would recommend that you learn the keyboard method because it works quickly and saves you time while working in MS Word.

The easiest method of typing the degree symbol in MS Word is to use the key combination Ctrl+Shift+@ and then press the space bar. Following is the sequence of keys to press to get the degree symbol:
- Press Ctrl and hold
- Press Shift and hold
- Press @
- Release Ctrl and Shift
- Press the space bar

You can also use the Alt key method to type this symbol. Press Alt, hold it down and then type 0176 on your Numeric Pad (also known as Numpad). Please note that you must type 0176 on the Numpad and not on the regular number keys that run across the keyboard.

Another keyboard method is the Alt+x method. In this method you type the Unicode of the desired symbol and then press Alt+x to get it. So, to type the degree symbol, type 00B0 and then press Alt+x. Voilà! The code 00B0 will be instantly replaced by a neat tiny degree symbol.

Caveat: With this method, however, you need to be a bit careful. Technically, there should not be any space between the number and the degree symbol (i.e. 45° is correct but 45 ° is wrong). This lack of space may bring wrong results. For example, if you want to type 45°, you’ll type 4500B0. Now if you press Alt+x, MS Word will decipher the whole 4500B0 as the Unicode and therefore it will not type the degree symbol. The solution to this problem is to give a space between 45 and the Unicode; once the degree symbol has been typed, remove the space.

You can insert the degree symbol in your document from the extensive symbols list provided by MS Word:
- Open the MS Word document
- Place the cursor where you want to insert the degree symbol
- Go to the Insert tab and then the Symbol option
- Click on More Symbols…
- The Symbol box will come up. From the Font dropdown select (normal text)
- Scroll down to locate the degree sign
- Double click on the degree sign to insert it.

You can also use the AutoCorrect feature of MS Word. This feature allows you to set a key sequence for quickly inserting a symbol. For example, if you type <o>, it will be automatically replaced with the degree symbol. Let’s see how to do this:
- Open the Symbol box and select the degree symbol as per the steps given in the previous method.
- After selecting the degree sign, click on the AutoCorrect… button. The AutoCorrect dialog box will appear.
- In the Replace box, type the key sequence that you want to be automatically replaced by the degree symbol; for example, <o>
- Click OK
- Now, in your MS Word document, whenever you type <o> and press space, the degree symbol will appear.

Bonus tip: There may be a case when you really need to type just <o> in your document. In such a scenario, press backspace and MS Word will revert its automatic action.

If you have a blog or website, you would need the Unicode or HTML codes for the degree sign.
- Unicode for the degree symbol is U+00B0
- HTML code for the degree symbol is &deg; or &#176;

There are separate symbols for degree Celsius and degree Fahrenheit:
- Unicode for the degree Celsius symbol (℃) is U+2103. For this symbol, the HTML code is &#8451;
- Unicode for the degree Fahrenheit symbol (℉) is U+2109. For this symbol, the HTML code is &#8457;

I hope this information was useful for you and now you will not have any problem in typing the degree symbol. Lately, I have written a few articles on typing symbols (e.g. Indian rupee symbol, copyright symbol etc.). Should you have any question regarding how to type the degree symbol, please feel free to ask me through the comments section of this article. I will try my best to help you. Thank you for using TechWelkin!
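If you also need to produce the degree sign from a program rather than a document, the same code points apply. Here is a minimal Kotlin sketch (purely illustrative) that prints the three symbols using the Unicode values listed above:

```kotlin
fun main() {
    val degree = '\u00B0'      // U+00B0 DEGREE SIGN
    val celsius = '\u2103'     // U+2103 DEGREE CELSIUS
    val fahrenheit = '\u2109'  // U+2109 DEGREE FAHRENHEIT

    println("A right angle is 90$degree")
    println("Water boils at 100$celsius (212$fahrenheit)")
}
```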
None of the soft tissues, such as skin, muscles, or brain matter, were preserved in the Starchild Skull. However, researchers are still able to tell us a lot about the brain the Skull once held. In 2012, Medical Modeling of Colorado, USA, used detailed scans of the Starchild to create an exact 3D replica of the inside of the Skull, showing in moderate detail what its brain would have looked like. While this technique can never produce an exact replica of the living brain, the same technology was used to create brain casts of both the Starchild and a normal human, so it is possible to accurately compare the two. The Starchild has the same two lobes as a normal human brain, but the shape is quite different, and the brain itself is significantly larger. A human child has a brain about 1200 cc in size, while an adult's is about 1400 cc. The Starchild's brain was about 1600 cc, significantly larger than a normal human's. X-rays show venous imprints on the inside of the Skull, the dents left in the bone where veins once existed. This shows that the brain filled the entire skull cavity, and is important because it helped experts to rule out hydrocephaly as a cause for the unusual properties of the Skull.
Unicode is a standard for encoding computer text in most of the internationally used writing systems into bytes. It is promoted by the Unicode Consortium and based on ISO standards. Its goal is to replace current and previous character encoding standards with one worldwide standard for all languages. New versions are issued every few years, and later versions have over 100,000 characters. Unicode was developed in the 1990s and integrated earlier codes used on computer systems.

Unicode provides many printable characters, such as letters, digits, diacritics (things that attach to letters), and punctuation marks. It also provides characters which do not actually print, but instead control how text is processed. For example, a newline and a character that makes text go from right to left are both characters that do not print. Unicode treats a graphical character (for instance é) as a code point, either alone or as a sequence (e plus a combining accent). Each code point is a number which can be encoded in one or several code units. Code units are 8, 16, or 32 bits. This allows Unicode to represent characters in binary.

Encodings

There are different ways to encode Unicode; the most common ones are:
- UTF-7: Uses 7-bit code units; relatively unpopular; officially not part of Unicode
- UTF-8: Uses 8-bit code units; a variable-width encoding that keeps compatibility with ASCII; ASCII characters take 1 byte and most other common characters take 2 or 3 bytes
- UTF-16: Uses 16-bit code units; also a variable-width encoding
- UTF-32: Uses 32-bit code units; a fixed-width encoding
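To see how the same text takes up different numbers of bytes in these encodings, here is a small Kotlin sketch (Kotlin runs on the JVM, where strings are stored as UTF-16 internally; the sample text is an arbitrary choice):

```kotlin
fun main() {
    val text = "héllo €"  // mixes plain ASCII, an accented letter, and the euro sign

    val utf8 = text.toByteArray(Charsets.UTF_8)
    val utf16 = text.toByteArray(Charsets.UTF_16BE)
    val utf32 = text.toByteArray(Charsets.UTF_32BE)

    // In UTF-8, 'h', 'l', 'o' and the space take 1 byte each, 'é' takes 2, '€' takes 3.
    // In UTF-16, each of these characters takes 2 bytes; in UTF-32, each takes 4.
    println("UTF-8:  ${utf8.size} bytes")
    println("UTF-16: ${utf16.size} bytes")
    println("UTF-32: ${utf32.size} bytes")
}
```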
CLN5, intracellular trafficking protein The CLN5 gene provides instructions for making a protein whose function is not well understood. Cells produce a CLN5 protein that is inactive and contains extra protein segments. This inactive protein is called a preprotein. For the CLN5 preprotein to become active, the additional segments must be removed, followed by additional processing steps. The active CLN5 protein is then transported to cell compartments called lysosomes, which digest and recycle different types of molecules. Research suggests that the CLN5 protein may play a role in the process by which lysosomes break down or recycle damaged or unneeded proteins within cells. At least 35 mutations in the CLN5 gene have been found to cause CLN5 disease. This condition impairs mental and motor development causing difficulty with walking and intellectual function. In addition, affected children often develop recurrent seizures (epilepsy) and vision loss. Signs and symptoms of CLN5 disease typically appear around age 5 but can begin in adolescence or adulthood. Most of the mutations that cause CLN5 disease make changes in the CLN5 protein that interfere with the processing of the preprotein or alter the structure of the protein. The resulting proteins cannot be transported to the lysosomes. Other mutations lead to production of abnormal proteins that are quickly broken down. One such mutation, known as Finmajor, is responsible for almost all cases of CLN5 disease in people of Finnish descent. The Finmajor mutation replaces the protein building block (amino acid) tyrosine with a signal to stop protein production prematurely (written as Tyr392Ter or Y392X). A lack of functional CLN5 protein within lysosomes probably impairs the breakdown of certain proteins, which then likely accumulate in cells throughout the body. While these accumulations can damage any cells, nerve cells appear to be particularly vulnerable. Widespread loss of nerve cells in CLN5 disease leads to severe signs and symptoms and early death. In the cases in which CLN5 disease develops in adolescence or adulthood, it is thought that the CLN5 gene mutations lead to a CLN5 protein with reduced function that is broken down earlier than normal. Because the altered CLN5 protein can function for a small amount of time, some damaged or unneeded proteins may be broken down in lysosomes. Since it takes longer for these substances to accumulate and cause nerve cell death, the signs and symptoms of CLN5 disease in these individuals occur later in life. - ceroid-lipofuscinosis neuronal protein 5 - ceroid-lipofuscinosis, neuronal 5
Instruments for Measuring Water Quality

We have compiled a basic guide to give you a small introduction to the types of meters that you may need to use while testing a body of water for specific properties and pollutants. While there are many more instruments than those that we have listed below, and many devices that combine the functions of several of the different types of meter below, this should prove to be a useful introductory guide to anyone getting into the industry.

pH meters measure the acidity or the alkalinity of a solution of water. They typically comprise a probe that is submersed in the body of water and a meter or recorder that reads data from the probe and displays it for the reader to interpret. A pH meter is used to measure the pH of water, but can be used with many other liquids, and is composed of a specialised probe and a meter that receives and displays the data gathered by the probe.

Dissolved Oxygen Meter

A dissolved oxygen meter is used to determine the amount of oxygen that is present in a body of water. It is possible to get meters that give a quick reading directly to the user through a digital panel, or you can invest in a device that can be left in a body of water to record the levels of dissolved oxygen in the solution over time so that this data can be analysed over a period of time.

Biological Oxygen Demand (BOD) Meter

A Biological Oxygen Demand meter is useful for determining the amount of dissolved oxygen that is currently required by the organisms that are living in a body of water. These meters, along with dissolved oxygen meters, can help to determine whether a body of water is of sufficient quality to support further growth and population of organisms in that body of water.

Temperature Measuring Devices

Measuring the temperature of a body of water can be a simple affair of dipping a probe into the water and measuring the temperature, much like a simple thermometer. However, in many cases it is important to consider the temperature of a body of water over longer periods of time, and many devices will allow the user to monitor the temperature over a longer period, taking readings at specific intervals and storing those readings for further analysis at a later date.

The conductivity of a solution is a measure of the ability of that liquid to conduct electricity. The solids and particles in a body of water can alter its conductivity, and so these meters can give some indication of the dissolved solids that are in the solution being tested. Typically, these meters report conductivity in siemens (often microsiemens per centimetre), the inverse of electrical resistance measured in ohms.

A flow meter presents a method to measure the apparent flow of a body of water and presents the user with information about the overall movement of a solution.
This Article outlines the human rights theories of nineteenth-century abolitionist and civil rights leader James Pennington. Born into slavery in Maryland, Pennington escaped North and became the first African American to attend Yale. As an ordained Presbyterian clergyman, educator, orator, author, and activist, he adapted traditional Protestant rights theories explicitly to include the rights of all, regardless of race. He emphasized the authority and freedom of the individual conscience as foundational to human rights. He advocated a central role for covenantal institutions including church, state, family, and school as essential for fostering a law and culture of human rights. And he defended the right of all to disobey unjust laws and resist tyrannical regimes. Pennington bridged these theories in novel ways with pacifist teachings, anticipating by more than a century the American civil rights movement led by Martin Luther King, Jr., and others. Though largely forgotten by historians, Pennington was well known and influential among his contemporaries. His life and work represent an important step in the development of law, religion, and human rights.
The message ‘It’s OK to not be OK’ is important for children to hear, but we must also include a disclaimer that it is ‘OK to be OK’ too. The ideal is for young people to develop resilience, so they can negotiate most of life’s challenges, but also an openness about asking for help if it is required. Striking this balance is often a challenge. How do I help my child to be assertive without being arrogant, or how can I help my child to be compassionate without being a pushover?

In my view, resilience is about authenticity, self-belief, and accuracy. However, there are limited opportunities to explore these qualities when the social, personal and health education (SPHE) curriculum is isolated from the school culture. There is little value in stating ‘we all have different qualities that are equally important’, and then going on to do a spelling test where those who do well are commended, and those who struggle are dismissed.

It’s important to point out that the wellbeing aspects of the curriculum can be labour intensive and challenging, especially with no psychological training. When adults want to communicate a message to children, we need to repeat it and exaggerate it to make sure it sticks. But they also have to see it, if we want them to be it. So, despite inclusive classroom exercises, these espoused values need to be visible in the day to day running of things.

Express your feelings

In recent years there’s been a shift towards encouraging children to share their feelings. We’ve told them it’s OK to cry and not to bottle things up. In fact, one cartoon campaign even suggested that holding feelings in could cause your head to explode. This promotion of emotional expression is a move in the right direction and certainly is an improvement on the pervasive silence and suppression. However, emotional expression is perhaps not enough in itself. What is also needed is an increased emotional understanding or emotional intelligence.

Emotional intelligence is not purely the ability to articulate how something makes you feel; rather, it is the capacity to be aware of, control, and express one’s emotions empathetically. This is not simply expressing how you feel; you must also consider the impact of your actions on others, an important aspect of developing empathy and priorities in interpersonal relationships. Intrapersonal communication involves the internal monologues that we have with ourselves, whereas interpersonal communication involves our interactive communication with others.

If we are to introduce the concept of wellbeing into our families, schools, and communities, we need to include social responsibility, which is a crucial aspect of emotional intelligence. This social responsibility needs to be part of the lived experience too, and not just a removed topic covered in the classroom. Children need to be aware that they have rights, but they also need to be aware that alongside these rights come responsibilities. The right to be the leader of a game at lunchtime comes with the responsibility of ensuring that everyone is included. The right to have a treat food on a Friday comes with the responsibility of putting the wrappers in the bin and keeping the classroom tidy. These everyday subtle value systems carry far more influence on children’s learning than we give them credit for. Emotional intelligence is not simply understanding ourselves but also understanding our relationships with other people.
If we make emotional intelligence too individualistic, we run the risk that the emotional skills are self-focused and lack consideration of others, a key social skill and ingredient for resilience. Developing core skills A child’s ability to navigate the social world is essential to their development, and even more so for children who may require additional support, explanation and understanding. Given the loss of social and emotional development opportunities over the past 18 months, there has never been a greater need to help children develop core intrapersonal and interpersonal skills. While we may claim to have strategies in our schools that aim to value effort over outcome we might also have an academic le
Macronutrients are the components of food which are present in larger amounts and make up the major bulk of our food. Macronutrients are the essential nutrients that are required by the body for the proper functioning of all its systems. Our bodies are not capable of manufacturing them in sufficient quantities, so they must come from an external source. The World Health Organization recommends that these nutrients be supplied through the food we eat. Essential nutrients also play a vital part in preventing disease, supporting growth and promoting overall good health.

There are three macronutrients essential for our bodies: carbohydrates, proteins, and fats. They all make up the building blocks of the body in one way or the other, and none of them is a waste or unnecessary in any way. Carbohydrates are the most common and abundant form of macronutrient and provide an instant source of energy. They are also a source of the raw material for a number of metabolic cycles and processes.

Types of carbohydrates and their significance

Carbohydrates are further divided into many types depending upon their chemical structure, their functions and many other criteria. This is the reason carbohydrates are considered the most important macronutrients for the body. Carbohydrates vary from the simplest sugars like glucose and maltose to the most complex and indigestible forms. Glucose, as we all know, is an instant source of energy and is responsible for the proper functioning of the brain as well as the muscles. Then there are starches and glycogen, which are complex carbohydrates and take part in the formation of bodily and cellular structures, for example the cell wall.

Importance of the quantity of carbohydrate intake

Having said that carbohydrates are the most essential macronutrients for our body, we must also mention that excessive or unchecked carbohydrate intake is also a source of various diseases and problems for our health. As we have said, simple carbohydrates serve as the basic raw material for metabolic cycles, and any excess gets converted into and stored as unwanted substances, especially fats, which appear as obesity in the long run. So, it is important that we always check the amount of carbohydrates in our daily food intake.

Sugar gram calculator

An accurate check and balance on sugar intake is only possible if we keep a record of the carbohydrates or sugars we take in. This becomes a real problem when we don't know the amount of carbohydrate in the fruits and foods we eat. For example, it is difficult to know how many grams of sugar are present in a two-hundred-gram, medium-sized apple. Similarly, we need to learn over time the amount of sugar in grams present in one teaspoon of raw sugar. In such circumstances, a sugar gram calculator comes to the rescue and allows us to accurately check the sugar intake in our daily diet.
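A basic sugar gram calculator can be as simple as a conversion from grams to teaspoons. The sketch below is only an illustration: the 4-grams-per-teaspoon figure is a commonly used approximation, and the food values are placeholder numbers rather than measured data.

```kotlin
// Commonly used approximation: about 4 grams of sugar per teaspoon.
const val GRAMS_PER_TEASPOON = 4.0

fun gramsToTeaspoons(grams: Double): Double = grams / GRAMS_PER_TEASPOON

fun main() {
    // Placeholder figures for illustration only - check a nutrition label
    // or a food database for real values.
    val sugarPerServing = mapOf(
        "medium apple (about 200 g)" to 19.0,
        "can of soda (330 ml)" to 35.0,
        "teaspoon of raw sugar" to 4.0
    )
    for ((food, grams) in sugarPerServing) {
        val tsp = gramsToTeaspoons(grams)
        println("$food: $grams g of sugar is roughly ${"%.1f".format(tsp)} teaspoons")
    }
}
```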
Colour Blind is a film about subtle racism and its daily impact on teenagers in high school. To outsiders, Princess Margaret Senior Secondary, in the heart of Surrey, BC, looks like an ordinary high school. To teachers and students, however, it was a school full of racial rage, segregation and violence. Its troubles began in 1995 when the predominantly white student body became a predominantly ethnic majority. Five years later, we follow five teenagers as they learn tolerance for each other's differences. Colour Blind documents that painful and confusing process of overcoming racial conflicts. The video's purpose is to encourage young students to examine their own behaviours and attitudes and to ask probing questions of themselves about how they react to racism within their own high school.

1999, 25 min 42 s
JPS Dual Language Program

Dual language is a more effective way of teaching students two languages, based on the fundamental premise that children learn a second language the same way they learn their first language. They learn in context, where they are exposed to language in its natural form and when they are socially motivated to communicate. The target language is used as the medium of instruction rather than the content of instruction. This means that students acquire content-area knowledge and language skills together.

1. How will dual language benefit my child?

Spanish Dual Language Programs in Jefferson Parish Schools are based on the three dual language pillars designed to promote bilingualism and biliteracy, academic student achievement, and sociocultural competence. There is a considerable body of research that outlines the benefits of dual language/immersion programs for students, including: Dual Language Education for a Transformed World by Thomas & Collier, 2012.

2. What is a two-way dual language program?

The two-way model balances the number of English Language Learners (ELLs) and non-English Language Learners (non-ELLs) in a classroom, giving both groups the opportunity to learn each other’s language in a natural way. Research tells us that the most effective way to teach English Language Learners is in a two-way dual language program (Thomas & Collier, 2010).

3. What language will my child be instructed in?

Beginning in Pre-K and K, students receive instruction in Spanish 90% of the day and in English 10% of the day. As they move up to 1st grade and beyond, English increases by 10% each year until a balance of 50%/50% between Spanish and English is reached in 4th grade.

4. Will my child fall behind in learning English?

Students in a Two-Way Immersion (TWI) dual language program may seem to fall behind in the first years as they acquire proficiency in the two languages, but research has demonstrated that by third grade the students are doing as well as or better than their monolingual peers.

5. Will my child be confused learning in two languages?

Children’s brains are wired to be able to understand multiple languages. Children are very sensitive and flexible to the way different people speak and therefore can learn a second language much more easily than adults.

6. Can I change my mind once my child begins in dual language?

In order for children to receive the full benefits of participating in a dual language program, parents must make a commitment to allow their children time to acquire the target language over time. Social and academic language acquisition is a process that takes five to seven years.

7. Do I have to know Spanish or English to be able to help my child with homework?

Knowledge of Spanish or English by the parent is not a prerequisite for a child’s success in the program. Homework is given as a reinforcement of what has been done in class; therefore the child should be able to complete it independently.

8. Will my child learn the same things that students learn in a monolingual class?

The LDOE and Jefferson Parish School district expectations and requirements are the same for all JP students. The curriculum used in dual language classrooms aligns to the LDOE and district requirements and expectations.

9. What subjects are taught in Spanish? English?

Students receive Spanish language arts, math, science, and social studies in Spanish. In addition, they receive English language development or language arts and PE in English.
As the students enter higher grades, more English subjects are provided, creating a balance between the two languages and subjects. When students enter 4th grade and up, two subjects will be taught in English and two subjects will be taught in Spanish, in addition to receiving language arts in both languages.

10. What language will my child learn to read first?

Students will learn to read in Spanish first and slowly transition into English literacy beginning in 1st grade. Spanish literacy is easier to acquire, leading students to transition into English literacy faster.

11. How do I apply for dual language?

For more information on the JPS Dual Language Program, please go to:
Regardless of their complexity, all programs essentially perform operations on numbers, strings, and other values. These values are called literals, that is, values written directly in the code in their most basic form. Before we start writing our first programs, let’s learn the basic literals in Kotlin: integer numbers, characters, and strings. You can meet these literals everywhere in everyday life.

We use integer numbers to count things in the real world. We will also often use integer numbers in Kotlin. Here are several examples of valid integer number literals separated by commas: 0, 1, 2, 10, 100, 1000. If an integer value contains a lot of digits, we can add underscores to divide the digits into blocks to make the number more readable: for example, 1_000_000 is much easier to read than 1000000. You can add as many underscores as you would like: 1_2_3. Remember, underscores can’t appear at the start or at the end of the number. If you write 100_, you get an error.

A single character can represent a digit, a letter, or another symbol. To write a single character, we wrap a symbol in single quotes as follows: '9'. Character literals can represent alphabet letters, digits from '0' to '9', whitespaces (' '), or some other symbols (e.g., '$'). Do not confuse characters representing numbers (e.g., '9') and numbers themselves (e.g., 9). A character cannot include two or more digits or letters because it represents a single symbol. The following two examples are incorrect: 'abc' and '543', because these literals have too many characters.

Strings represent text information, such as the text of an advertisement, the address of a web page, or the login to a website. A string is a sequence of any individual characters. To write strings, we wrap characters in double quotes instead of single ones. Here are some valid examples: "I want to learn Kotlin", "[email protected]". So, strings can include letters, digits, whitespaces, and other characters. A string can also contain just one single character, like "A". Do not confuse it with the character 'A', which is not a string.

Do not confuse these literals:
- 123 is an integer number, and "123" is a string;
- 'A' is a character, and "A" is a string;
- '1' is a character, and 1 is an integer number.
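Putting the three kinds of literals together, here is a short runnable example that prints one literal of each kind discussed above:

```kotlin
fun main() {
    val million = 1_000_000             // integer literal; underscores only aid readability
    val digit = '9'                     // character literal: one symbol in single quotes
    val text = "I want to learn Kotlin" // string literal in double quotes
    val oneCharString = "A"             // still a string, even though it holds a single character

    println(million)        // prints 1000000 - the underscores are not part of the value
    println(digit)
    println(text)
    println(oneCharString)
}
```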
Strengthening Students’ Speaking and Listening Skills

Having productive conversations requires students to listen deeply, reflect on what is said, express ideas clearly, sustain attention, ask insightful questions, debate respectfully, and develop comprehension of information taken in. These essential listening and speaking skills need to be taught and practiced and will help students have successful conversations both inside and outside of school. Taking the time to teach and practice academic conversation skills helps prevent or minimize problems that can arise during collaborative work and enables students to be more deeply invested in their interactions and learning.

“By giving our students practice talking with others, we give them frames for thinking on their own.” – Lev Vygotsky

How to teach listening and speaking skills

Listening requires the fundamental skill of focusing attention on the speaker to be able to hear and understand what the speaker is saying. Speaking skills require students to take turns, speak confidently, stay on topic, and speak with clarity. Students are more likely to master speaking and listening skills when they can actively engage in learning them. Interactive Modeling gives students a clear picture of these skills and an immediate opportunity to both practice them and receive feedback.

How to practice listening and speaking skills

- Teach students activities and games that bolster their ability to demonstrate listening skills while also having fun.
- Provide ongoing support by displaying anchor charts that list expectations, such as: voices off, eyes on the speaker, focused attention on the speaker.
- Provide multiple opportunities to practice.
- Give explicit and inclusive positive feedback.
- Have students reflect on their progress with listening and speaking skills.
- Use a variety of interactive learning structures that vary ways to practice the skills, such as Inside-Outside Circles, Four Corners, Maître d’, Swap Meet, Partner Chat, and Table Talk.

Why these skills matter

To have productive discussions in all subjects, students need to be able to express ideas clearly, concisely, and confidently. Having successful communication skills leads to better social relationships. For any conversation, knowing when to speak – and when to listen – is essential.
Vitamin B1 (thiamine) is used by nearly all your cells, and helps to metabolize the carbohydrates and lipids in the foods you eat. It also facilitates converting your food into energy and boosting the flow of electrolytes in and out of your nerves and muscles. It's considered "essential" because your body can't produce it on its own; it must come from an outside source. Thiamine is sometimes referred to as an "antistress" vitamin for its positive influence on your central nervous system, and it's also important for healthy immune function. In addition to nutrients such as zinc and vitamins C and D, vitamin B1 (thiamine) may actually be crucial to protect against infectious respiratory illnesses such as COVID-19. Thiamine deficiency syndrome (beriberi) has also been implicated in other types of severe infections and bears many similarities to sepsis. This is one of the reasons why thiamine is such an important part of Dr. Paul Marik's sepsis treatment.1 Sepsis, in turn, is a major contributor in influenza deaths in general, and a primary cause for COVID-19 deaths specifically. While thiamine deficiency is often the result of alcohol misuse, chronic infections, poor nutrition and/or malabsorption, recent research suggests vitamin B1 availability has dramatically declined throughout the food chain in recent years.2 Lack of Thiamine Is Disrupting Ecosystem In a January 28, 2021, article in Hakai Magazine,3 Alastair Bland reviews findings showing certain marine ecosystems are being decimated by an apparent lack of thiamine. Problems were noticed in January 2020 at salmon hatcheries in California. Fish were acting disoriented and mortality was surprisingly high. Initially, they feared a virus might be at play, but after digging through the medical literature, they found research discussing thiamine deficiency in marine life. As noted in the article, vitamin B1 is "a basic building block of life critical to the functioning of cells and in converting food into energy." Biologists tested the theory by dissolving thiamine powder into the water, and within hours, nearly all of the fish were acting normally again. Meanwhile, the behavior of fish in an untreated batch continued to decline. As a result of this research, many hatcheries took to applying thiamine, but the underlying problem still remains. "Since the fish acquire thiamine by ingesting it through their food, and females pass nutrients to their eggs, the troubling condition indicated that something was amiss in the Pacific Ocean, the last place the fish eat before entering fresh water to spawn," Bland writes, adding: "California researchers now investigating the source of the salmon's nutritional problems find themselves contributing to an international effort to understand thiamine deficiency, a disorder that seems to be on the rise in marine ecosystems across much of the planet. It's causing illness and death in birds, fish, invertebrates, and possibly mammals, leading scientists from Seattle to Scandinavia to suspect some unexplained process is compromising the foundation of the Earth's food web by depleting ecosystems of this critical nutrient." What's Causing Ecosystem-Wide Thiamine Deficiency? As explained by Bland, "Thiamine originates in the lowest levels of the food web." Certain species of bacteria, phytoplankton, fungi and even some plants are responsible for synthesizing thiamine from other precursor compounds. From there, thiamine makes its way through both the animal and plant kingdoms. All organisms need it. 
In animals, enzymes interact with thiamine to generate cellular energy. Without sufficient amounts of thiamine, fundamental metabolic processes start to fail, causing neurological disturbances, reproductive problems and increased mortality. While beriberi has been recognized as a serious health risk in humans for nearly 100 years, and thiamine supplementation has been standard practice for domesticated livestock such as sheep, cattle, mink and goats for several decades,4 the presence in and effect of thiamine deficiency on wildlife wasn't discovered until the 1990s, when Canadian scientist John Fitzsimons started investigating the decline in Great Lakes trout. "Studying lake trout born in captivity, Fitzsimons observed symptoms like hyperexcitability, loss of equilibrium, and other abnormal behavior. He wondered if a nutritional deficiency was at play, and to test for this he dissolved various vitamin tablets in water and — using trout in different life stages, including fertilized eggs — administered the solutions to the fish, both through injection and baths. The idea was to see which vitamin, if any, cured the condition. 'It came down to a range of B vitamins, and it was only the thiamine that was able to reverse the signs I was seeing,' he says." Since the publication of Fitzsimons' findings in 1995, thiamine deficiency has been identified in dozens of animal species, including birds and moose. While severe deficiency has lethal consequences, sublethal deficiency can have insidiously devastating effects, including:6 - Lowering strength and coordination - Reducing fertility - Impairing memory and causing other neurobehavioral deficits.7 In humans, thiamine deficiency has been shown to play a role in cases of delirium. In one study,8 45% of cancer patients suffering from delirium had thiamine deficiency, and 60% recovered when treated with intravenous thiamine - Loss of vocalization B1 Deficiency May Be Responsible for Wildlife Declines Thiamine deficiency is now suspected of driving declines in wildlife populations all across the northern hemisphere.9 Bland cites research showing marine and terrestrial wildlife populations declined by half between 1970 and 2012. Between 1950 and 2010, the global seabird population declined by 70%.10 While habitat loss and other environmental factors are known to impact biodiversity, these declines are allegedly occurring far faster than can be explained by such factors. Researchers strongly suspect human involvement, but how? "Scientists are floating various explanations for what's depriving organisms of this nutrient, and some believe that changing environmental conditions, especially in the ocean, may be stifling thiamine production or its transfer between producers and the animals that eat them," Bland writes.11 "Sergio Sañudo-Wilhelmy, a University of Southern California biogeochemist, says warming ocean water could be affecting the populations of microorganisms that produce thiamine and other vitamins, potentially upsetting basic chemical balances that marine ecosystems depend on. 'In different temperatures, different phytoplankton and bacteria grow faster,' he says. This, he explains, could hypothetically allow microorganisms that do not produce thiamine — but, instead, acquire it through their diet — to outcompete the thiamine producers, effectively reducing thiamine concentrations in the food web." The transfer of thiamine up the food chain may be blocked by a number of factors, including overfishing. 
But there's yet another possibility, and that is the overabundance of thiaminase, an enzyme that destroys thiamine. Thiaminase is naturally present in certain microorganisms, plants and fish that have adapted to use it to their advantage. "When larger animals eat prey containing thiaminase, the enzyme rapidly destroys thiamine and can lead to a nutritional deficiency in the predator," Bland explains. One thiaminase-rich species is an invasive species of herring called alewife, which during the 20th century have spread through the Great Lakes, displacing native species. This, some researchers believe, has led to chronic and severe thiamine deficiency in larger fish species. "The Great Lakes' saga illustrates the outsized impact that one single nutrient can have on an entire ecosystem," Bland writes. An overabundance of thiaminase-containing species also appears to be responsible for the decline in Sacramento River salmon. In this case, northern anchovy, which is rich in thiaminase, is the suspected culprit. Unfortunately, few answers have emerged as of yet. Giving thiamine to fish in hatcheries is not a long-lasting solution, because once they re-enter the wild, the deficiency reemerges. One scientist likened the practice to "sending a kid with a fever off to school after giving them a Tylenol."12 Signs and Symptoms of Thiamine Deficiency Considering both plants and wildlife are becoming increasingly thiamine-deficient, it's logical to suspect that this deficiency is becoming more common in the human population as well. Early symptoms of thiamine deficiency include:13,14 - Fatigue and muscle weakness - Confusion and/or memory problems - Loss of appetite and weight loss - Numbness or tingling in arms or legs As your deficiency grows more severe, the deficiency can progress into one of four types of beriberi:15 - Paralytic or nervous beriberi (aka "dry beriberi") — Damage or dysfunction of one or more nerves in your nervous system, resulting in numbness, tingling and/or exaggerated reflexes - Cardiac ("wet") beriberi — Neurological and cardiovascular issues, including racing heart rate, enlarged heart, edema, breathing problems and heart failure - Gastrointestinal beriberi — Nausea, vomiting, abdominal pain and lactic acidosis - Cerebral beriberi — Wernicke's encephalopathy, cerebellar dysfunction causing abnormal eye movements, ataxia (lack of muscle coordination) and cognitive impairments. If left untreated, it can progress to Korsakoff's psychosis, a chronic brain disorder that presents as amnesia, confusion, short-term memory loss, confabulation (fabricated or misinterpreted memories) and in severe cases, seizures Thiamine is frequently recommended and given to people struggling with alcohol addiction, as alcohol consumption reduces absorption of the vitamin in your gastrointestinal tract. An estimated 80% of alcoholics are deficient in thiamine and therefore more prone to the side effects and conditions above.16 Thiamine is also very important for those with autoimmune diseases such as inflammatory bowel disease (IBD) and Hashimoto's (a thyroid autoimmune disorder).17 In case studies,18,19 thiamine supplementation has been shown to improve fatigue in autoimmune patients in just a few days. Interestingly, in one of these studies,20 which looked at patients with IBD, patients responded favorably to supplementation even though they all had "normal" baseline levels. 
The authors speculate that thiamine deficiency symptoms in such cases may be related to enzymatic defects or to dysfunction of the thiamine transport mechanism (as opposed to an absorption problem), which can be overcome by giving large quantities of thiamine.

Thiamine in Infectious Disease
As mentioned earlier, thiamine deficiency has also been implicated in severe infections, including COVID-19. In fact, researchers have noted that, based on what we know about B vitamins' effects on the immune system, supplementation may be a useful adjunct to other COVID-19 prevention and treatment strategies. You can learn more about this in "B Vitamins Might Help Prevent Worst COVID-19 Outcomes."

More generally, a 2016 study21 in the journal Psychosomatics sought to investigate the connection between thiamine and infectious disease by looking at 68 patients with Korsakoff syndrome. Thirty-five of them suffered severe infections during the acute phase of the illness, including meningitis, pneumonia and sepsis, leading the authors to conclude that "Infections may be the presenting manifestation of thiamine deficiency." Another study22 published in 2018 found thiamine helps limit Mycobacterium tuberculosis (MTB) by regulating your innate immunity. According to this paper:

"… vitamin B1 promotes the protective immune response to limit the survival of MTB within macrophages and in vivo … Vitamin B1 promotes macrophage polarization into classically activated phenotypes with strong microbicidal activity and enhanced tumor necrosis factor-α and interleukin-6 expression at least in part by promoting nuclear factor-κB signaling. In addition, vitamin B1 increases mitochondrial respiration and lipid metabolism … Our data demonstrate important functions of thiamine (VB1) in regulating innate immune responses against MTB and reveal novel mechanisms by which vitamin B1 exerts its function in macrophages."

Thiamine deficiency is also associated with the development of high fever, and according to a letter to the editor,23 "Is Parenteral Thiamin a Super Antibiotic?" published in the Annals of Nutrition & Metabolism in 2018, thiamine injections are "likely to eradicate microbial infections" causing the fever. By dramatically increasing susceptibility to infections, thiamine deficiency could potentially affect the spread of just about any pandemic infectious disease — including COVID-19.

Are You Getting Enough B Vitamins?
While biologists struggle to find an ecosystem-wide solution for thiamine deficiency in the food chain, the solution for us, in the meantime, may be to make sure we get enough thiamine through supplementation. Evidence suggests thiamine insufficiency or deficiency can develop in as little as two weeks, as its half-life in your body is only nine to 18 days.24 Ideally, select a high-quality food-based supplement containing a broad spectrum of B vitamins to avoid creating an imbalance. The following guidelines will also help protect or improve your thiamine status:

• Limit your sugar and refined grain intake — As noted by the World Health Organization,25 "Thiamine deficiency occurs where the diet consists mainly of milled white cereals, including polished rice, and wheat flour, all very poor sources of thiamine." Simple carbs also have antithiaminergic properties,26 and raise your thiamine requirement for the simple fact that thiamine is used up in the metabolism of glucose.
• Eat fermented foods — The entire B group vitamin series is produced within your gut provided you have a healthy gut microbiome. Eating real food, ideally organic, along with fermented foods will provide your microbiome with important fiber and beneficial bacteria to help optimize your internal vitamin B production as well.

• Avoid excessive alcohol consumption, as alcohol inhibits thiamine absorption, and frequent use of diuretics, as they will cause thiamine loss.

• Avoid sulfite-rich foods and beverages such as nonorganic processed meats, wine and lager, as sulfites have antithiamine effects.

• Correct any suspected magnesium insufficiency or deficiency, as magnesium is required as a cofactor in the conversion of thiamine into its active form.

Daily Intake Recommendations
While individual requirements can vary widely, typical daily intake recommendations for the B vitamins are as follows. For thiamine (B1), adult men and women need 1.2 mg and 1.1 mg respectively each day.27 If you have symptoms of thiamine deficiency, you may need higher doses. Thiamine is water-soluble and nontoxic, even at very high doses, so you're unlikely to do harm; doses between 3 grams and 8 grams per day have been used in the treatment of Alzheimer's without ill effect. For riboflavin (B2), the suggested daily intake is about 1.1 mg for women and 1.3 mg for men,28 while the dietary reference intake for niacin (B3) established by the Food and Nutrition Board ranges from 14 to 18 mg per day for adults. Higher amounts are recommended depending on your condition; for a list of recommended dosages, see the Mayo Clinic's website.29

Nutritional yeast (not to be confused with Brewer's yeast or other active yeasts) is an excellent source of B vitamins, especially B6.30 One serving (2 tablespoons) contains nearly 10 mg of vitamin B6, and the daily recommended intake is only 1.3 mg.31

B8 is not recognized as an essential nutrient and no recommended daily intake has been set. That said, it's believed you need about 30 mcg per day.32 Vitamin B8 is sometimes listed as biotin on supplements. Brewer's yeast is a natural supplemental source.33

Folic acid is a synthetic type of B vitamin used in supplements; folate is the natural form found in foods. (Think: Folate comes from foliage, edible leafy plants.) For folic acid to be of use, it must first be activated into its biologically active form (L-5-MTHF). This is the form able to cross the blood-brain barrier and provide benefits to the brain. Nearly half the population has difficulty converting folic acid into the bioactive form due to a genetic reduction in enzyme activity. For this reason, if you take a B-vitamin supplement, make sure it contains natural folate rather than synthetic folic acid. Nutritional yeast is an excellent source.34 Adults need about 400 mcg of folate per day.35

Nutritional yeast seasoning is also high in B12, and is highly recommended for vegetarians and vegans. One serving (2 tablespoons) provides about 67 mcg of natural vitamin B12.36 Sublingual (under-the-tongue) fine mist sprays or vitamin B12 injections are also effective, as they allow the large B12 molecule to be absorbed directly into your bloodstream. Source: Mercola
Understanding why we balance equations helps you organize your thinking efficiently and makes learning faster and more memorable. There are two big reasons for balancing equations. First, a balanced equation obeys the Law of Conservation of Mass. Second, it lets us use the coefficients as ratios between products and reactants. In other words, balanced equations provide ratios between substances, and the coefficients can be thought of as mole ratios, as the worked example below shows.
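As a concrete illustration (a standard textbook reaction, written here in LaTeX notation), consider the combustion of methane:

\[ \mathrm{CH_4} + 2\,\mathrm{O_2} \rightarrow \mathrm{CO_2} + 2\,\mathrm{H_2O} \]

Each side has 1 carbon, 4 hydrogen, and 4 oxygen atoms, so the Law of Conservation of Mass is satisfied, and the coefficients 1 : 2 : 1 : 2 serve as mole ratios: burning 0.5 mol of CH4 consumes 2 × 0.5 = 1 mol of O2 and produces 0.5 mol of CO2 and 1 mol of H2O.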
Bone formation achieved in laboratory Scientists from Eindhoven University of Technology (TU/e), assisted by colleagues from the University of Illinois, have successfully mimicked the process of bone formation in the laboratory. A cryoTitan electron microscope was used to capture the process in great visual detail and the results, which contradicted previous assumptions, could be applied to areas other than medicine. Bone forms naturally when calcium phosphate nanocrystals are deposited on collagen fibers – which is just what the researchers did in the lab. It has long been assumed that the collagen only acted as a template for bone formation, with the actual process occurring due to the presence of specialized biomolecules. What the Eindhoven researchers discovered, however, was that the collagen fibers themselves control mineral (and thereby bone) formation. The biomolecules, it turns out, serve to keep the calcium phosphate in solution until mineral growth starts. The formation process was observed with a cryoTitan electron microscope, which rapidly froze samples at various stages of mineralization. This allowed the scientists to document the procedure in steps, instead of trying to grab all their images on the fly. The microscope is capable of extremely high definition, being able to distinguish between individual atoms. While an Italian group is already using the newfound knowledge to develop bone implants, the Eindhoven team are more interested in applying it to other areas, such as the creation of magnetic materials that could be used as biomarkers or for data storage. “I am seriously convinced that we can make all kinds of materials using these principles,” said project leader Dr. Nico Sommerdijk. “The biomimetic formation of magnetic materials is a new area that is still completely unexplored.”
What Is Causing Glaciers and Sea Ice to Melt

Why are glaciers significant? Ice forms a protective shell over the Earth and its oceans. These dazzling white surfaces reflect excess heat back into space, thereby cooling the Earth. In part, the Arctic stays colder than the equator because the ice reflects more of the sun's heat into space. Depending on their location, glaciers can be hundreds or thousands of years old, scientifically documenting how the climate has changed over time. By analysing them, we can get a good idea of how quickly the Earth is heating up; they give scientists evidence of how the climate has evolved. Glacial ice now covers around 10% of the Earth's land surface. Most of this is in Antarctica, while most of the rest is trapped in Greenland's ice sheet. As enormous quantities of cold glacier meltwater enter warmer ocean waters, rapid glacial melt in Antarctica and Greenland also alters ocean currents. As ice from land melts, sea levels will rise much further.

What's the distinction between sea ice and glaciers? Sea ice forms and melts only in the ocean, whereas glaciers form on land. Icebergs are glacier chunks that break off from glaciers and fall into the sea. When glaciers melt, runoff adds water that was trapped on land to the ocean, leading to a global sea level rise. Because sea ice is already floating, its melting does not raise sea levels in the same way; instead, melting Arctic sea ice has many other disastrous implications, ranging from diminishing the amount of usable ice for walruses and polar bears to disrupting weather patterns worldwide by modifying the jet stream pattern.

What is causing glaciers to melt? Since the early 1900s, many glaciers worldwide have been rapidly receding. Our actions have led to this result. Since the Industrial Revolution, human-caused climate change—particularly the generation of carbon dioxide and other greenhouse gases—has driven global warming, which is more severe near the poles. Scientists anticipate that if emissions continue unabated, summers in the Arctic may be ice-free by 2040 as ocean and air temperatures rise fast.

Does the melting of glaciers contribute to the overall increase in sea level? Glacial melt and storm surges are making sea level rise and coastal erosion worse at an alarming rate. Rising air and ocean temperatures create more frequent and stronger coastal storms like hurricanes and typhoons. Melting all of Greenland's ice would raise global sea levels by about 20 feet, which is quite concerning (a back-of-envelope estimate appears at the end of this piece).

What effect do melting sea ice and glaciers have on weather patterns? A darker ocean surface is exposed when this ice melts, reducing the reflective effect that formerly cooled the poles, leading to higher air temperatures and disrupting standard ocean circulation patterns. According to research, the polar vortex is increasingly wandering outside of the Arctic due to changes in the jet stream produced by increased air and water temperatures in the Arctic and the Tropics.

How does the loss of sea ice and glaciers affect humans and other life forms? What occurs in these regions has global ramifications. Sea ice and glacier melting, combined with warming waters, will continue to wreak havoc on weather patterns all over the planet. Industries that rely on healthy fisheries may suffer as warming waters alter where and when fish breed. As flooding becomes more common and storms become more severe, coastal cities will continue to face billions in reconstruction expenditures. It's not just human beings who suffer. 
Melting sea ice is forcing species like walruses and polar bears to relocate from the Arctic, increasing the likelihood of deadly encounters between the two.
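As a rough sanity check on the Greenland figure quoted above, here is a back-of-envelope estimate using commonly cited round numbers (an ice volume of roughly 2.9 million cubic kilometres, an ice-to-seawater conversion factor of about 0.9, and a global ocean area of about 3.6 × 10⁸ km²); the point is the order of magnitude, not the precise value:

\[ \Delta h \approx \frac{V_{\text{ice}} \times 0.9}{A_{\text{ocean}}} \approx \frac{2.9 \times 10^{6}\ \text{km}^3 \times 0.9}{3.6 \times 10^{8}\ \text{km}^2} \approx 7 \times 10^{-3}\ \text{km} \approx 7\ \text{m} \approx 23\ \text{ft} \]

which lands in the same 20-plus-foot range as the figure cited for a complete melt of the Greenland ice sheet.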
Bacterial otogenic meningitis causes
Meningitis is a bacterial or viral inflammation of the pia mater. Otogenic meningitis arises when infection spreads into the subarachnoid space from the cavities of the middle and inner ear, or as a consequence of other intracranial complications (extra- or subdural brain abscess, sigmoid sinus thrombosis). The most frequent causes of otogenic meningitis are cocci (strepto-, staphylo-, diplo- and pneumococci); other microbes are far less common, and often no flora can be identified in the cerebrospinal fluid at all.

Most often, the infection in otogenic meningitis spreads by direct contact or by the labyrinthine route. In the first case there are usually significant bony changes, up to a defect in the bone separating the middle ear cavity from the meninges. Leptomeningitis is sometimes preceded by inflammation of the dura mater (an epidural or subdural abscess). In labyrinthogenic meningitis, infection from the affected labyrinth extends through the cochlear and vestibular aqueducts and the internal auditory canal. This route of infection is more common in meningitis than in other intracranial complications. Meningitis usually occurs when mastoiditis complicates acute purulent otitis media, or in chronic purulent otitis (attic disease), especially when complicated by cholesteatoma. In the initial stage of acute otitis media, meningitis is most often a consequence of hematogenous spread of infection (along vascular paths). This so-called fulminant form of meningitis is the most unfavorable.

Pathological anatomy. The initial inflammatory changes in the pia mater (redness, swelling) also extend to the cerebral cortex. The serous exudate in the subarachnoid space later becomes purulent. At the same time, areas of softening and suppuration occasionally appear in the cerebral cortex; thus, every meningitis is essentially a meningoencephalitis. The dura mater also becomes hyperemic and tense. Inflammation of the membranes is often most pronounced at the base of the brain (basal meningitis); sometimes it extends to the cerebral hemispheres or is restricted to them. More rarely, meningitis is observed only over the cerebellum, sometimes extending to the membranes of the spinal cord. In severe cases, purulent inflammation extends to the entire subarachnoid space of the cerebrum, cerebellum and spinal cord.

Bacterial otogenic meningitis symptoms and clinical presentation
The headache of meningitis is often intense and nearly constant, and it appears before other symptoms. Initially the headache may be limited to the neck or forehead, later becoming diffuse. Very often it is accompanied by nausea and vomiting. The temperature rises to 39-40 °C and above and has a "continua" character. The pulse is usually accelerated, but bradycardia is sometimes noted. The general condition of the patient with meningitis is severe: the face is often pale, sallow and haggard, and the tongue is dry. Consciousness is confused, passing into delirium. The patient is apathetic, though motor excitation is often noted and is increased by external stimuli (sound, light). The posture is characteristic, with the head thrown back and the legs bent at the knees to relieve tension on the dura mater. Stiff neck and the Kernig and Brudzinski signs are always pronounced in a patient with meningitis. Pyramidal signs (pathological reflexes such as Babinski, Oppenheim and Gordon) are sometimes noted. Occasionally there are spasms of the limbs. 
In basal meningitis, paralysis of the abducens nerve is often observed, and sometimes of the oculomotor and other cranial nerves. Focal brain symptoms occur rarely, and can prompt an unsuccessful and even unjustified puncture of the brain. Cerebrospinal fluid (CSF) obtained by lumbar puncture usually flows out under significantly increased pressure. It loses its transparency and becomes turbid, sometimes almost pure pus. CSF pleocytosis varies greatly, from a moderate increase to a number of cells too high to count. Neutrophils usually dominate; with a further favorable course a lymphocytic reaction appears, but this relation of cellular elements is not constant, especially during treatment with antibiotics. The protein content is usually increased, often significantly. The sugar and chloride levels, on the contrary, are reduced. In the blood there is high leukocytosis and a significant increase in ESR.

The use of antibiotics has significantly changed the clinical picture of meningitis. Symptoms become blurred, sometimes barely expressed, and the course of the disease acquires a different character. Thus, there are cases of meningitis with subfebrile or normal temperature, mild headache, little-marked meningeal signs, no impairment of consciousness or other cerebral symptoms, and a generally satisfactory or even good condition. Often there are also significant modifications of the cerebrospinal fluid (reduced pleocytosis, a shift of the cellular composition toward lymphocytosis, and so on), and the hemogram likewise masks the true picture of the disease.

With timely surgery, the absence of other complications, and rational use of antibiotics and sulfonamides, the course of meningitis is mostly favorable, and the disease usually ends in recovery after 3-4 weeks. However, a prolonged course (up to several months) is possible, usually of an intermittent nature (so-called recurrent meningitis). This form of meningitis is caused by several factors: purulent foci remaining after surgery on the ear, in the labyrinth or at the apex of the petrous bone; a deep epidural abscess; or walled-off accumulations of pus in the subarachnoid space that are not amenable to antibiotics because of fibrinous-plastic clotting (these can also cause focal brain symptoms). With an intermittent course marked by repeated flare-ups and remissions, the resistance of the bacteria progressively increases. The prognosis for such a course of meningitis is mostly poor. The hematogenous form of meningitis is also frequently fatal; its symptoms may develop so rapidly that at autopsy no visible (macroscopic) changes can be established in the membranes. 
The diagnosis is facilitated by identifying tuberculosis elsewhere in the body. However, this is far from always possible, and the clinical course of tuberculous meningitis itself can be atypical. Besides tuberculous meningitis, otogenic meningitis should be differentiated from epidemic cerebrospinal meningitis. The latter is characterized by a sudden onset and the absence of any prodrome, whereas otogenic meningitis may be preceded by a gradually increasing headache for several days before the full picture of the disease develops. If purulent meningitis is accompanied by a picture of acute otitis media or an exacerbation of chronic purulent otitis media, surgery on the ear should not be delayed. In the vast majority of cases the operative findings confirm the correctness of this decision. During surgery it is usually possible to detect other intracranial complications if the meningitis arose from them. In the absence of such findings, careful monitoring of neurological symptoms and of the dynamics of cerebrospinal fluid changes is necessary, bearing in mind, above all, the possibility of an unrecognized brain abscess. A brain abscess should be suspected when the neurological status fails to improve while the cerebrospinal fluid tends to normalize, or when there is protein-cell dissociation in the cerebrospinal fluid (increased protein content with only slight pleocytosis).

Bacterial otogenic meningitis treatment
Treatment begins with an operation on the temporal bone (simple or radical, depending on the nature of the otitis media) with wide exposure of the dura mater in the middle and posterior cranial fossae. If a concomitant complication is identified (epidural or subdural abscess, brain abscess, thrombophlebitis of the sigmoid sinus), the appropriate intervention is performed. Intervention on the labyrinth or the apex of the petrous pyramid is not always necessary, even when these structures are involved: very often the phenomena of purulent labyrinthitis or petrositis resolve after a conventional operation on the temporal bone and drug treatment. However, if there is no effect, or only an incomplete one, the indicated intervention on the labyrinth or petrous apex should be performed.

Postoperative treatment consists of antibiotics and sulfonamides, combined with oral nystatin to prevent the development of candidiasis (500,000 units 3-4 times a day) and vitamin therapy (ascorbic acid and a complex of B vitamins). To reduce intracranial pressure, dehydration therapy is carried out: 10-15 ml of a 25% solution of magnesium sulfate intramuscularly, 10 ml of a 2.4% solution of aminophylline intravenously by drip, and 1-2 ml of a 2% solution of Lasix intravenously or intramuscularly, or furosemide (1-2 tablets) by mouth. Lumbar punctures are performed (in severe cases every 2-3 days; once the cerebrospinal fluid begins to clear, every 4-5 days), and a moderate amount of cerebrospinal fluid is released. When there is pronounced clinical improvement and the cerebrospinal fluid approaches normal, the punctures are stopped. In severe cases, antibiotics are administered endolumbally after fluid removal. In especially severe forms of meningitis, with a threatening increase in intracranial pressure that does not respond to the treatment described, opening of the lateral cistern of the brain is indicated in addition to the operation already performed. In severe cases of meningoencephalitis, antibiotics are also administered via the carotid arteries.
Millions of tons of aerosol particles are transported to remote oceans and forests each year. These particles, once deposited, provide the ecosystems with an external source of nutrients, such as iron, phosphorus, and nitrogen. This, in turn, stimulates primary production (a plant’s ability to produce complex organic compounds from water, carbon dioxide, and simple nutrients) and enhances carbon uptake and thus indirectly affects the climate. To discuss current scientific knowledge of atmospheric nutrients and their impact on the Earth system, 30 scientists from seven countries met in July 2015 in Leeds, United Kingdom. The aim of the workshop was to identify the most prominent uncertainties in quantifying the effects of atmospheric nutrients on ecosystems and the climate. The workshop focused on two key questions: - What is the flux of atmospheric nutrients to the ecosystems? - What is the impact of atmospheric nutrients on global biogeochemical cycles and the climate? Results presented at the workshop show that mineral dust from the deserts is the dominant source of total atmospheric iron and phosphorus on a global scale, but only a small fraction of these nutrients is bioavailable. However, organic and inorganic acids formed mainly from anthropogenic gaseous emissions can transform insoluble iron and phosphorus from the dust, making these nutrients bioavailable. New results also show that biomass and fossil fuel combustion contribute significantly to the atmospheric deposition of iron because the solubility of iron from these sources is much higher than that from dust. Most recent modeling studies estimated that the deposition of soluble iron and reactive nitrogen into the ecosystems may have increased by more than 100% and more than 400%, respectively, on a global scale since the Industrial Revolution. However, much uncertainty remains regarding soluble iron flux to the oceans in global models, particularly in the nutrient-limited Southern Ocean. On the other hand, flux of reactive nitrogen appears to be well quantified. There is strong evidence that atmospheric deposition affects iron and nitrogen budgets and enhances primary production and nitrogen fixation rates in the open ocean. Atmospheric nutrients may also enhance algae growth in melting glaciers, which decreases the albedo of the glaciers and accelerates their melting. Higher dust and iron deposition in the Southern Ocean during the Last Glacial Maximum may have enhanced the biological pump (the biological processes that remove carbon dioxide from the atmosphere and store it in the deep sea), contributing to the decrease in atmospheric carbon dioxide concentration during this period. More recent studies also suggest that atmospheric phosphorus deposition may help to enhance primary production and carbon uptake in the Amazon rainforests. Meeting participants agreed that there are major uncertainties in the flux of atmospheric nutrients to the ecosystems, particularly those from anthropogenic and biogenic sources. The impact of atmospheric nutrients on ecosystems and the climate is also poorly understood. Close collaborations between experimentalists and modelers are essential to reducing uncertainties in our knowledge of the emission, transport, transformation, and deposition of atmospheric nutrients and in determining the ecosystem’s response to these nutrients—an essential factor in producing better representations of the atmospheric nutrients in Earth system models. 
This workshop was partly funded by the Natural Environment Research Council (NE/I021616/1), the Aerosol Society, and the Institute for Climate and Atmospheric Science at the University of Leeds and by grant RPG 406 from the Leverhulme Trust. We thank all participants for their active contribution to this meeting. —Zongbo Shi, School of Geography, Earth and Environmental Sciences, the University of Birmingham, Birmingham, U.K.; email: [email protected]; and Ross Herbert, School of Earth and Environment, University of Leeds, Leeds, U.K. Citation: Shi, Z., and R. Herbert (2016), The importance of atmospheric nutrients in the Earth system, Eos, 97, doi:10.1029/2016EO044133. Published on 27 January 2016. Text © 2016. The authors. CC BY 3.0
3D printing, also known as additive manufacturing (AM), refers to processes used to synthesize a three-dimensional object in which successive layers of material are formed under computer control to create an object. Three-dimensional printing (or simply 3D printing) really is the stuff of the future. Two-dimensional printing has always had a lot of versatility—from creating small thank you notes to entire trade show banners—but it's been confined to only two dimensions. Three-dimensional printing really does feel like the next step in humanity's ability and freedom to express itself creatively. But how does 3D printing actually work? It's not exactly as simple (or magical) as the replicator machines in Star Trek, but it does do some pretty amazing things. It's part of a process called "additive manufacturing," which involves laying down individual horizontal layers one at a time to create a fully three-dimensional object. But that's just scratching the surface of how 3D printing works. In this article, we'll look at the actual process itself and break down the individual steps of printing a 3D object. Read on to learn more about this amazing and constantly growing technology.

Step 1: Creating an Object Blueprint
The first step, creating a blueprint, takes place on the computer using modeling software. This software is called "Computer Aided Design," or "CAD," and popular options include Blender and Tinkercad. While creating an object in the software, you assign its dimensions as well as its shape, and eventually it will be ready for printing. Another option for creating an object blueprint is to use a 3D scanner, such as the Microsoft Kinect. These scanners capture an existing object and produce a digital copy that is stored as the template on the computer you are using. There are several different types of 3D scanners, including time-of-flight scanners and triangulation scanners, both of which use lasers to measure the object being scanned. There are also 3D scanners that actually touch and probe the object being copied, called "contact scanners." These run a contact probe across the object you want to scan, all the while feeding the data back into the computer until it's ready to print. If you are looking for the best advice related to 3D printing, we suggest finding a 3D scanning service near you.

Step 2: Bioplastic is Heated
Inside many popular 3D printers is a material called "bioplastic" that is looped around inside a compartment. When the information about the desired object is sent from the computer to the printer, the bioplastic is heated as it moves through a tube and begins to melt. This helps it form into the shape of the desired object. The bioplastic is usually based on some type of compound, but the most common kind is a substance known as polylactic acid, or PLA. PLA is a common biodegradable plastic that is made of things like corn starch and can be found in a wide variety of products (though it's most common in food packaging). Other plastics used in 3D printing include acrylonitrile butadiene styrene—which is stronger than PLA—and polyvinyl alcohol—which is more of an adhesive. This isn't to say that 3D printing is confined to plastics. Other materials include nylon, carbon fibers, waxes, and even metals. There are even those who have actually worked with creating certain types of food, including chocolate! 
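To make Step 1 a little more concrete, here is a minimal sketch of what a "blueprint" file can look like under the hood. Many CAD tools can export models as STL files, which are essentially lists of triangles; the short Python script below (an illustrative example only, not tied to any particular printer or CAD package, and with a made-up function name) writes a valid ASCII STL containing a single triangular facet.

```python
# Write a minimal ASCII STL "blueprint": one triangular facet.
# STL models are just lists of triangles; real CAD exports contain thousands.

def write_triangle_stl(path):
    # Three corner points of one triangle, in millimetres.
    vertices = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
    with open(path, "w") as f:
        f.write("solid demo\n")
        f.write("  facet normal 0 0 1\n")   # this triangle faces straight up (+Z)
        f.write("    outer loop\n")
        for x, y, z in vertices:
            f.write(f"      vertex {x} {y} {z}\n")
        f.write("    endloop\n")
        f.write("  endfacet\n")
        f.write("endsolid demo\n")

if __name__ == "__main__":
    write_triangle_stl("demo.stl")  # open the result in any STL viewer or slicer
```

The resulting file can be opened in most slicing or viewing programs, which is a handy way to see that a "blueprint" is really just geometry data describing the object's surface.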
Step 3: Object is Formed and Bioplastic is Cooled
As the layers of heated bioplastic (or whatever other material is being used) are laid down one on top of another, the object cools and solidifies. Depending on the size, shape, and complexity of the object being printed, this whole process can take some time, up to several hours, in fact. Once the object has cooled, more detail can be added, including painting and finishing techniques such as subtractive touch-ups performed by the printer. The variety of objects that can be created with the 3D printing process is quite impressive. While visual art and sculptures have become increasingly popular as the capabilities and availability of 3D printers have grown, the technology is also used in other fields, including crime scene investigation, construction, and even medicine. Scientists are even looking into printing replacements for certain body parts, such as noses and ears. If you'd like to learn more about Acrylonitrile Butadiene Styrene (ABS), check out this blog post.
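Because the printer builds the object layer by layer, slicing software ultimately turns the blueprint into a long list of movement and extrusion commands known as G-code. The sketch below is a simplified, hypothetical illustration of that idea: it emits standard Marlin-style commands to trace a small square perimeter at increasing heights. It is not a complete, printable file (no temperatures, extrusion amounts, or calibration are implied), just a way to visualize "laying down individual horizontal layers one at a time."

```python
# Illustrative only: emit simplified G-code that traces a 20 mm square,
# one 0.2 mm layer at a time. Real slicers also add temperatures, extrusion
# amounts, speeds, retraction, infill and much more.

def square_layers(size_mm=20.0, layer_height=0.2, layers=5):
    lines = ["G21 ; use millimetre units",
             "G90 ; absolute positioning",
             "G28 ; home all axes"]
    corners = [(0, 0), (size_mm, 0), (size_mm, size_mm), (0, size_mm), (0, 0)]
    for n in range(1, layers + 1):
        z = n * layer_height
        lines.append(f"G1 Z{z:.2f} F600 ; move up to layer {n}")
        for x, y in corners:
            lines.append(f"G1 X{x:.1f} Y{y:.1f} F1200 ; trace perimeter")
    return "\n".join(lines)

if __name__ == "__main__":
    print(square_layers())  # prints the layer-by-layer toolpath to the console
```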
Capacitors are passive devices that are used in almost all electrical circuits for rectification, coupling and tuning. Also known as condensers, a capacitor is simply two electrical conductors separated by an insulating layer called a dielectric. The conductors are usually thin layers of aluminum foil, while the dielectric can be made up of many materials including paper, mylar, polypropylene, ceramic, mica, and even air. Electrolytic capacitors have a dielectric of aluminum oxide which is formed through the application of voltage after the capacitor is assembled. Characteristics of different capacitors are determined by not only the material used for the conductors and dielectric, but also by the thickness and physical spacing of the components.
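As a quick illustration of how those geometric factors combine, the standard parallel-plate approximation (a textbook formula, included here for reference) is

\[ C = \varepsilon_0 \varepsilon_r \frac{A}{d} \]

where \(A\) is the overlapping plate area, \(d\) the dielectric thickness, \(\varepsilon_r\) the relative permittivity of the dielectric, and \(\varepsilon_0 \approx 8.85 \times 10^{-12}\ \text{F/m}\). For example, with illustrative values of 0.01 m² of foil overlap separated by a 10-micrometre polypropylene film (\(\varepsilon_r \approx 2.2\)),

\[ C \approx \frac{8.85 \times 10^{-12} \times 2.2 \times 0.01}{10 \times 10^{-6}} \approx 1.9 \times 10^{-8}\ \text{F} \approx 19\ \text{nF}, \]

which is why thin dielectrics and large (often rolled-up) foil areas are needed to reach practical capacitance values.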
Learning music provides so many benefits for children that extend beyond the music classroom. From maths to language skills, even brain growth, incorporating songs into your lessons across the curriculum can have wide-reaching positive effects. While investing in multiple instruments is unachievable for many school budgets, luckily there are so many ways to create, play, and export songs, using only your imagination and versatile edtech. Here are some of our favourite ways to create songs with the ActivPanel and incorporate music into your lesson plan.

Benefits of music in the classroom
Music and songs have been used as tools for creating super-effective mnemonic devices for eons. But only relatively recently have we started interrogating the power that music seems to have when it comes to memory retention. Studies have found listening to and performing music reactivates areas of the brain associated with memory, reasoning, speech, emotion, and reward. As an added plus, music is fun! It's hard to deny the uplifting effect that singing a song can have. It's a great tool for lifting the engagement and energy levels in your classroom while delivering lesson content.

Apps for making music
There are so many fantastic apps and websites at your fingertips if you want to create music with the ActivPanel. Here are some of our favourites for accessibility and ease of use:

We have been huge fans of Chrome Music Lab for a while now. It's super accessible; you can find it through your preferred browser on the ActivPanel, no downloads required. Connect multiple devices for collaboration and explore different components of music through its varied tools, called "experiments". What we love most about Chrome Music Lab is that it not only lets you create music and sound, but also demonstrates scientific elements like soundwave visualisations in experiments such as Spectrogram and Voice Spinner.

This is a more traditional song-making app, allowing you to compose music by combining different melodies and rhythms, then adding your recorded vocals. Modify loops and sounds in the editor, add free beats and let your imagination run free!

Another beat-making app, Groovepad is perfect for beginners, with an extensive library of ready-to-use tunes and music elements to get you started.

Autotune your voice with an app that can make melodies out of your lesson recordings. You can choose between different effects that alter the genre, like electro-funk, rap and hip hop, harmonic folk and more. This is perfect for creating music without too much effort, like easy mnemonic devices or fun educational raps.

Incorporating music into your classroom
Now that you've learnt how to make music with ease, why not incorporate songs into your lesson plans?
- Using Strings on Chrome Music Lab, show the relationships between long and short strings, then have students make their own instruments for a fun art project.
- Check out this in-depth lesson plan for years 3-8. Using Rhythm on Chrome Music Lab, you and your students can compose rhythms in different meters.
- Have students work in groups to take terms and ideas learnt throughout the week and incorporate them into a song for a fun mnemonic device. Display the lyrics on the panel and the class can sing along.
- Play certain music at different points in the day to signify the end of one activity and the start of another. This is a fun way to establish and reinforce your classroom procedures. 
- For older students, why not set up a collaborative playlist that the class can add songs to? You can then play this during appropriate periods such as group work or private study. Kids will appreciate the chance to hear their own music, and the collaborative element gives a sense of class unity and trust. Have you enjoyed any of the above ideas? Sign up to our free professional development site, Learn Promethean for a wide range of online courses, videos, and articles designed to inspire.
This week, a study was published that named a new dicynodont. Let's talk about it! Dicynodonts are extinct herbivores from the Permian Period (298-252 million years ago). They are named for the two tusks that most of them had: di – two, cyno – dog, dont – tooth. Dicynodonts are not dinosaurs, though. They are synapsids. Land animals (mammals, reptiles, turtles, birds, and their extinct relatives) are divided into three main groups based on the openings in their skulls. Every skull needs openings for the eyes, nose, ears, and mouth, but additional openings around the temple can appear for the muscles of the jaw. Anapsids, the turtles, have no temporal openings. Diapsids, the reptiles, have two temporal openings. Synapsids, the mammals, have one temporal opening. Dicynodonts are early synapsids, so they are more closely related to mammals than to other animals, even though they look like reptiles. This new dicynodont is named Bulbasaurus phylloxyron: bulb – for the bulb on its nose, saurus – for lizard, phyllo – for leaf, and xyron – for razor (referring to the edge of the jaw, which was sharp and used for cutting plants). Bulbasaurus had a small skull (13-16 cm long) but adult features, meaning it was an adult even though it was small. It's also the earliest dicynodont of its family, the Geikiids, which allows us to adjust the timing of the evolution of this group.
From Science -- March 24, 2000 Richard A. Kerr, Science Magazine Greenhouse skeptics often point to the relatively modest atmospheric warming of the past few decades as evidence of the climatic impotence of greenhouse gases. Climate modelers respond that much of the heat trapped by greenhouse gases should be going into the ocean, delaying but not preventing some of the atmospheric warming. But oceanographers plumbing the ocean depths have been unable to say who was right, because records of deep-ocean temperature have been too spotty to pick out clear trends. Now, on page 2225 of this issue of Science, physical oceanographers rummaging through piles of neglected data report that they have turned up millions of old, deep-ocean temperature measurements, enough to draw up oceanic fever charts that confirm the climate models' predicted ocean warming. "We've shown that a large part of the 'missing warming' has occurred in the ocean," says oceanographer Sydney Levitus, the lead author of the paper. "The whole-Earth system has gone into a relatively warm state." The international data search-and-rescue effort "adds credibility to the belief that most of the warming in the 20th century is anthropogenic," says climate modeler Jerry D. Mahlman of the National Oceanic and Atmospheric Administration's (NOAA's) Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey. It also suggests that past greenhouse gas emissions guarantee more global warming ahead and that the climate system may be more sensitive to greenhouse gases than some had thought. How could millions of valuable oceanographic measurements go missing for decades? Oceanographers have never had the orchestrated, worldwide network of routine observations that meteorologists enjoy. Instead, 40 or 50 years ago, ocean temperature profiles made by dropping a temperature sensor down through the sea might end up handwritten on paper, captured in a photograph, or recorded in analog form on magnetic tape. Everything from mold to mice was devouring the data. That's why, under the auspices of the United Nations-sponsored Global Oceanographic Data Archeology and Rescue project, data archaeologists like Levitus have spent the past 7 years seeking out ocean temperature data around the world and digitizing them for archiving on modern media. After adding 2 million profiles of ocean temperature to the previously archived 3 million profiles, Levitus and his NOAA colleagues in Silver Spring, Maryland, could see a clear change. Between 1955 and 1995, the world ocean--the Pacific, Atlantic, and Indian basins combined--warmed an average of 0.06ºC between the surface and 3000 meters. That's about 20 × 10²² joules of heat added in 40 years, roughly the same amount the oceans of the Southern Hemisphere gain and lose each year with the change of seasons. Half the warming occurred in the upper 300 meters, half below. The warming wasn't steady, though; heat content rose from a low point in the 1950s, peaked in the late '70s, dropped in the '80s, and rose to a higher peak in the '90s. All three ocean basins followed much the same pattern. These rescued data have oceanographers excited. "I've never seen anything like this before," says physical oceanographer Peter Rhines of the University of Washington, Seattle. "What surprises me is how much [of the warming] is in the deepwater." 
The newly retrieved data "show how active the [deep-ocean] system is," says oceanographer James Carton of the University of Maryland, College Park, "and how it's a part of the climate system on short time scales." The friskiness of the whole-ocean system came as a surprise as well. "There's striking variability from decade to decade," says Rhines. That the heat content tends to rise and fall in concert across all three ocean basins, in both the north and the south, is "quite amazing," he adds. Meteorologists and oceanographers are increasingly recognizing that the atmosphere connects ocean basins (Science, 10 July 1998, p.157), but as to what could be coordinating global swings in heat content, "I really don't know," says Rhines. The most immediate reward for retrieving so much data from the oceanographers' attic seems to be more confidence in climate models. The increased heat content of the world ocean is roughly what climate models have predicted. "That's another validation of the models," says climatologist Tom Wigley of the National Center for Atmospheric Research in Boulder, Colorado. As the models implied, rising ocean temperatures have delayed part of the surface warming, says climate modeler James Hansen of NASA's Goddard Institute for Space Studies in New York City, but that can't continue indefinitely. Even if rising concentrations of greenhouse gases could be stabilized tomorrow, Hansen says, gases that have already accumulated will push surface temperatures up another half-degree or so. The ocean-induced delay in global warming also suggests to some climatologists that future temperature increases will be toward the top end of the models' range of predictions. Mainstream climatologists have long estimated that a doubling of greenhouse gases, expected by the end of the 21st century, would eventually warm the world between 1.5º and 4.5ºC. Some greenhouse contrarians have put that number at 1ºC or even less. Now, the ocean-warming data "imply that climate sensitivity is not at the low end of the spectrum," says Hansen. He, Wigley, and some others now lean toward a climate sensitivity of about 3ºC or a bit higher. But as climatologist Christopher Folland of the Hadley Center for Climate Prediction in Bracknell, United Kingdom, notes, the considerable variability in ocean heat content from decade to decade means scientists will still be hard pressed to find a precise number for climate sensitivity. Getting better numbers for ocean heat content remains a top priority for oceanographers. "There's still a vast amount of data out there that needs digitizing," says Folland. And for future numbers, an international effort called Argo, now under way, will create an ocean-spanning network of 3000 free-floating instrument packages. Linked by satellites, the Argo drifters will create a "weather map" of the ocean down to 1500 meters. At least future oceanographers won't have to rummage through the data detritus of their predecessors to see what the ocean is up to. World's Oceans Warming Up, Could Trigger Large Climate ChangesBy Cat Lazaroff WASHINGTON, DC, March 24, 2000 (ENS) - The oceans of the world have warmed substantially during the past 40 years, the National Oceanic and Atmospheric Administration announced Thursday. NOAA researchers suggest that much of the heat from global warming may have been stored in the oceans, reducing atmospheric temperature increases but leading to potentially huge climate changes in the near future. 
Researchers from NOAA's Ocean Climate Laboratory in Silver Spring, Maryland examined three major ocean basins - the Atlantic, Indian and Pacific. They found the greatest warming has occurred in the upper 300 meters (975 feet) of the ocean waters. This level has warmed an average of 0.56 degrees Fahrenheit. The water in the upper 3,000 meters (9,750 feet) of the world's oceans has warmed on average by 0.11 degrees Fahrenheit. These findings represent the first time scientists have quantified temperature changes in all of the world's oceans from the surface to a depth of 3,000 meters. "Since the 1970s, temperatures at the earth's surface have warmed, Arctic sea ice has decreased in thickness, and now we know that the average temperature of the world's oceans has increased during this same time period," said NOAA Administrator D. James Baker. The ocean and atmosphere interact in complex ways to produce Earth's climate. Owing to its large mass, the ocean acts as the memory of the earth's climate system and can store heat for decades or longer. As a result, it might become possible some day for scientists to use ocean temperature measurements to forecast the earth's climate decades in advance, the researchers said. "It is possible that ocean heat content may be an early indicator of the warming of surface, air and sea surface temperatures more than a decade in advance," said Sydney Levitus, who heads NOAA's Ocean Climate Laboratory. "For example, we found that the increase in subsurface ocean temperatures preceded the observed warming of surface air and sea surface temperatures, which began in the 1970s," Levitus said. "Our results support climate modeling predictions that show increasing atmospheric greenhouse gases will have a relatively large warming influence on the earth's atmosphere," Levitus warned. "One criticism of the models is that they predict more warming of the atmosphere than has been actually observed. Climate modelers have suggested that this ‘missing warming' was probably to be found in the world ocean. The results of our study lend credence to this scenario," he explained. The scientists determined their findings by using data - 5.1 million temperature profiles - from sources around the world, to quantify the variability of the heat content (mean temperature) of the world’s oceans from the surface through 3000 meter depth for the period 1948 to 1996. The researchers looked at temperature changes in the Atlantic, Indian and Pacific oceans. "In each ocean basin substantial temperature changes are occurring at much deeper depths than we previously thought. This is just one more piece of the puzzle to understanding the variability of the earth's climate system," said Baker. The Pacific and Atlantic Oceans have been warming since the 1950s, while the Indian Ocean has warmed since the 1960s. The similar warming patterns of the Pacific and Indian Oceans suggest that the same phenomena is causing the changes to occur in both oceans. The world ocean warming is likely due to a combination of natural variability, such as the Pacific Decadal Oscillation, and human induced effects, the researchers say. The scientists, led by Levitus, report their findings in today’s issue of the journal "Science," in an article titled "Warming of the World Ocean." The NOAA report was made possible in part by an international ocean data management project headed by Levitus that has added more than two million historical temperature profiles to electronic archives during the past seven years. 
"International cooperation in building the global ocean databases required for understanding the role of the ocean as part of the earth's climate system has been excellent," said Levitus. Contributions of subsurface ocean temperature data have come from all countries that make oceanographic measurements including the United States, Russia, the United Kingdom, Germany, France, Canada, Australia, and Japan. Nearly all of the data were gathered by research ships, naval ships, buoys, and merchant ships. Some merchant ships deploy instruments that measure the temperature of the upper ocean as participants in voluntary programs. Understanding the role of the ocean in climate change and making 10 year climate forecasts will soon be greatly enhanced by observations planned as part of an emerging international Global Ocean Observing System. Meanwhile, a recently completed study of climate over the past 100 years suggests that interactions between the atmosphere, ocean and sea ice systems may have played a prominent role in the global warming of the early 20th century, NOAA scientists say. Using climate models run on high performance supercomputers, scientists at NOAA's Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey, conducted six experiments to explore possible causes for the warming in the first half of the century. Their findings were also published in today’s issue of "Science." They linked warming in the early part of the century with a combination of ingredients, including increasing concentrations of greenhouse gas and sulfate aerosols. The supercomputers turned up strong evidence that warming in the latter part of the 20th century was due in large part to human generated greenhouse gases. "The fact that all experiments capture the warming from 1970 on is indicative of a robust response of the climate model to increasing concentrations of greenhouse gases," said Thomas Knutson, a research meteorologist. Researchers Find Ocean Temperature Rising, Even in the Depths By William K. Stevens, The New York Times, March 24, 2000 An important piece of the global-warming picture has come into clearer focus with a confirmation by scientists that the world's oceans have soaked up much of the warming of the last four decades, delaying its full effect on the atmosphere and thus on climate. The warming of the deep oceans had long been predicted, and the consequent delaying effect long thought to exist. But until now the ocean's heat absorption had not been definitively demonstrated, and its magnitude had not been determined. The finding, by scientists at theNational Oceanographic Data Center in Silver Spring, Md., is based on an analysis of 5.1 million measurements, by instruments around the world, of the top two miles of ocean waters from the mid-1950's to the mid-1990's. The analysis, the first on a global scale, is being published in the March 24, 2000 issue of the journal Science. As the earth warms, from either natural or human causes, or both, not all the extra heat goes immediately into the atmosphere, where its effect on climate is most direct. Much of it is absorbed by the oceans, which store it for years or decades before releasing it. This means that to whatever extent the planet is being warmed by emissions of greenhouse gases like carbon dioxide, which are produced by the burning of coal, oil and natural gas, only part of that heating has materialized so far at and above the earth's surface. 
Some experts believe that about half the greenhouse warming is still in the oceanic pipeline and will inevitably percolate to the air in the decades just ahead. The average surface temperature of the globe has risen by about 1 degree Fahrenheit over the last 100 years. Over the last 25 years, the rate of surface warming has accelerated, amounting to the equivalent of about 3.5 degrees a century. By comparison, the world is 5 to 9 degrees warmer now than in the depths of the last ice age, 18,000 to 20,000 years ago. Scientists generally agree that it is unclear how much of the warming is attributable to greenhouse gases and how much to natural causes; many think both are involved. The new study shows that the average warming of the seas over the 40-year study period amounted to about one-tenth of a degree Fahrenheit for the top 1.9 miles of ocean water as a whole, and more than half a degree in about the top 1,000 feet. It is possible that the ocean may now be giving up to the atmosphere some of the heat it stored in the early part of the study period, but this has not been established, said Sydney Levitus, the chief author of the study. He is the director of the Ocean Climate Laboratory, part of the data center at Silver Spring, which in turn is part of the National Oceanic and Atmospheric Administration. Likewise, Mr. Levitus said, it is possible but not established that more frequent appearances of the phenomenon known as El Niño, a semi-periodic warming of the eastern tropical Pacific that disrupts weather around the world, are related to the generally warming ocean. The magnitude of the oceanic warming surprised some experts. One, Dr. Peter Rhines, an oceanographer and atmospheric scientist at the University of Washington in Seattle, said it appeared roughly equivalent to the amount of heat stored by the oceans as a result of seasonal heating in a typical year. "That makes it a big number," he said. Dr. James E. Hansen, a climate expert at the NASA Goddard Institute for Space Studies in New York, said the finding was important because, "in my opinion, the rate of ocean heat storage is the most fundamental number for our understanding of long-term climate change." Three years ago, Dr. Hansen and colleagues used a computer model to calculate the amount of warming that should have been produced up till then by external influences on the climate system like greenhouse gases and solar radiation. They found that because of the storage of heat in the ocean, only about half the surface warming should have appeared. Mr. Levitus and his fellow researchers say in their paper that their findings support the Hansen conclusion. Still, Mr. Levitus said the cause of the oceanic warming was not clear, although "I believe personally that some of it is due to greenhouse gases." Some scientists believe that natural factors like recurring oscillations in ocean surface temperature in various parts of the world may play a role in the last century's warming. For example, studies by Dr. Gerard Bond of Columbia University's Lamont Doherty Earth Observatory found that the climate of the North Atlantic region, at least, had alternated between cooler and warmer every 1,500 years, more or less. The world may be entering one of the natural warming cycles now, say Dr. Bond and Dr. Charles D. Keeling, a climate expert at the Scripps Institution of Oceanography in San Diego. In a study published this week in the online edition of Proceedings of the National Academy of Sciences, Dr. 
Keeling suggested that a natural fluctuation in ocean tides over hundreds of years might contribute to these long-term cycles of warming and cooling. Other possible causes have also been suggested. Oceans may hold answer to global warming riddle, scientists say By H. Josef Hebert, Associated Press, The Boston Globe, March 24, 2000 WASHINGTON - Scientists have discovered a significant, surprising warming of the world's oceans over the past 40 years, providing new evidence that computer models might be on target when they predict the Earth's warming. The broad study of temperature data from the oceans, dating to the 1950s, shows average temperatures have increased more than expected - about half a degree Fahrenheit closer to the surface, and one-tenth of a degree even at depths of up to 10,000 feet. The findings, reported by scientists at the National Oceanic and Atmospheric Administration, also might explain a major puzzle in the global warming debate: why computer models have shown more significant warming than actual temperature data. Global warming skeptics contend that if the computer models exaggerate warming that already has occurred, they should not be trusted to predict future warming. The models have shown higher temperatures than those found in surface and atmospheric readings. But now, the ocean data may explain the difference, scientists said. In the administration study, scientists for the first time have quantified temperature changes in the world's three major ocean basins. "We've known the oceans could absorb heat, transport it to subsurface depths, and isolate it from the atmosphere. Now we see evidence that this is happening," said Sydney Levitus, chief of the agency's Ocean Climate Laboratory and principal author of the study. Levitus and fellow scientists examined temperature data from more than 5 million readings at various depths in the Pacific, Atlantic, and Indian oceans, from 1948 to 1996. They found the Pacific and Atlantic oceans have been warming since the mid-1950s, and the Indian Ocean since the early 1960s, according to the study published today in the journal Science. The greatest warming occurred from the surface to a depth of about 900 feet, where the average temperature increased by 0.56 degrees Fahrenheit. Water as far down as 10,000 feet was found to have gained on average 0.11 degrees. "This is one of the surprising things. We've found half of the warming occurred below 1,000 feet," Levitus said. "It brings the climate debate to a new level." This story ran on page A06 of the Boston Globe on 3/24/2000. Science, March 24, 2000 Sydney Levitus, * John I. Antonov, Timothy P. Boyer, Cathy Stephens We quantify the interannual-to-decadal variability of the heat content (mean temperature) of the world ocean from the surface through 3000-meter depth for the period 1948 to 1998. The heat content of the world ocean increased by ~2 × 10²³ joules between the mid-1950s and mid-1990s, representing a volume mean warming of 0.06°C. This corresponds to a warming rate of 0.3 watt per meter squared (per unit area of Earth's surface). Substantial changes in heat content occurred in the 300- to 1000-meter layers of each ocean and in depths greater than 1000 meters of the North Atlantic. The global volume mean temperature increase for the 0- to 300-meter layer was 0.31°C, corresponding to an increase in heat content for this layer of ~10²³ joules between the mid-1950s and mid-1990s. 
The Atlantic and Pacific Oceans have undergone a net warming since the 1950s and the Indian Ocean has warmed since the mid-1960s, although the warming is not monotonic. National Oceanographic Data Center/National Oceanic and Atmospheric Administration (NODC/NOAA), E/OC5, 1315 East West Highway, Silver Spring, MD, * To whom correspondence should be addressed. E-mail: The Intergovernmental Panel on Climate Change (1), the World Climate Research Program CLIVAR (2), and the U.S. National Research Council (3) have identified the role of the ocean as being critical to understanding the variability of Earth's climate system. Physically, we expect this to be so because of the high density and specific heat of seawater. Water can store and transport large amounts of heat. Simpson (4) conducted the first study of Earth's heat balance, which concluded that the Earth system is not in local radiative balance, and therefore transport of heat from the tropics to the poles is required for the Earth system to be in global radiative balance. Identifying the mechanisms by which heat is transported from the tropics to the poles is one of the central problems of climate research. In addition, Rossby (5) drew attention to the fact that because of its large specific heat capacity and mass, the world ocean could store large amounts of heat and remove this heat from direct contact with the atmosphere for long periods of time. The results of these studies are the subject of this research article. Until recently, little work has been done in systematically identifying ocean subsurface temperature variability on basin and global scales, in large part due to the lack of data [recent studies include (6-8)]. The first step in examining the role of the ocean in climate change is to construct the appropriate databases and analysis fields that can be used to describe ocean variability. About 25 years ago, ship-of-opportunity programs were initiated to provide measurements of subsurface upper ocean temperature. Before the initiation of these programs, subsurface oceanographic data were not reported in real time, as is the case with much meteorological data. During the past 10 years, projects have been initiated (9) that have resulted in a large increase in the amount of historical upper ocean thermal data available to examine the interannual variability of the upper ocean. Using these data, yearly, objectively analyzed, gridded analyses of the existing data were prepared and distributed (7) for individual years for the period 1960 to 1990. We have used the recently published World Ocean Database 1998 (10-13) to prepare yearly and year-season objectively analyzed temperature anomaly fields. Detailed information about the temperature data used in this study can be found in this series. Computation of the anomaly fields was similar to our earlier work (7), but some procedures were changed (7). To estimate changes in heat content at depths greater than 300 m, we prepared objective analyses of running 5-year composites of all historical oceanographic observations of temperature for the period 1948 to 1996 at standard depth levels from the surface through 3000-m depth using the procedures described above. Constructing composites of deep-ocean data by multiyear periods is necessary due to the lack of deep-ocean observations. Most of the data from the deep ocean are from research expeditions. The amount of data at intermediate and deep depths decreases as we go back further in time. 
Both Pacific Ocean basins exhibit quasi-bidecadal changes in upper ocean heat content, with the two basins positively correlated. During 1997 the Pacific achieved its maximum heat content. A decadal-scale oscillation in North Pacific sea surface temperature (Pacific Decadal Oscillation) has been identified (15, 16), but it is not clear if the variability we observe in Pacific Ocean heat content is correlated with this phenomenon or whether there are additional phenomena that contribute to the observed heat content variability. In order to place our results in perspective, we compare the range of upper ocean heat content with the range of the climatological annual cycle of heat content for the Northern Hemisphere and world ocean computed as described by (8) but using a more complete oceanographic database (10-13). There is relatively little contribution to the climatological range of heat content from depths below 300 m. Our results indicate that the decadal variability of the upper ocean heat content in each basin is a significant percentage of the range of the annual cycle for each basin. For example, the climatological range of heat content for the North Atlantic is about 5.6 × 10²² J, and the interdecadal range of heat content is about 3.8 × 10²² J. Parts of the midlatitudes and subtropical regions have also cooled substantially. Maximum warming is associated with the tongue of relatively warm water associated with the Mediterranean Outflow. The changes in salinity at this depth (not shown) in both sets of pentadal differences are positively correlated with the changes in temperature, with the result that these changes in temperature and salinity are at least partially density compensating. Tests of statistical significance (Student's t test) have been performed on these difference fields, and we find (not shown) that the changes over most of the North Atlantic are statistically significant, as was found for the earlier pentadal differences (18). The observed changes are not small and can make an appreciable contribution to Earth's heat balance on decadal time scales, which we quantify in the next section. Cooling occurred throughout the subarctic gyre, with the maximum heat storage exceeding 6 W m⁻² in the Labrador Sea. Warming occurred in the midlatitudes and subtropics, with values exceeding 8 W m⁻² in the midlatitudes of the western North Atlantic. We have computed the contribution to the vertically integrated field shown in Fig. 3B from each 500-m layer of the North Atlantic. The cooling of the Labrador Sea is from each 500-m-thick ocean layer down to 2500-m depth. The warming in the western midlatitudes is due to nearly equal contributions by the 0- to 500- and 500- to 1000-m layers, with some small contributions from deeper layers. The warming associated with the Mediterranean Outflow is mainly due to contributions from the 1000- to 2000-m layer. There is a consistent warming signal in each ocean basin, although the signals are not monotonic. The signals between the Northern and Southern Hemisphere basins of the Pacific and Indian oceans are positively correlated, suggesting the same basin-scale forcings. The temporal variability of the South Atlantic differs significantly from the North Atlantic, which is due to the deep convective processes that occur in the North Atlantic. Before the 1970s, heat content anomalies were generally negative. The Pacific and Atlantic oceans have been warming since the 1950s, and the Indian Ocean has warmed since the 1960s. 
The delayed warming of the Indian Ocean with respect to the other two oceans may be due to the sparsity of data in the Indian Ocean before 1960. The range of heat content for this series is on the order of 20 × 10²² J for the world ocean. Our results demonstrate that a large part of the world ocean has exhibited coherent changes of ocean heat content during the past 50 years, with the world ocean exhibiting a net warming. These results have implications for climate system research and monitoring efforts in several ways. We cannot partition the observed warming into an anthropogenic component or a component associated with natural variability. Modeling studies are required even to be able to attempt such a partition. However, our results support the findings of Hansen et al. (19), who concluded that a planetary radiative disequilibrium of about 0.5 to 0.7 W m⁻² existed for the period 1979 to 1996 (with the Earth system gaining heat) and suggested that the "excess heat must primarily be accumulating in the ocean." Hansen et al. included estimates of the radiative forcings from volcanic aerosols, stratospheric ozone depletion, greenhouse gases, and solar variability. Such information is critical for studies attempting to identify anthropogenic changes in Earth's climate system. This is because coupled air-sea general circulation model experiments that are used to assess the effects of increasing carbon dioxide frequently begin integration with a sudden increase of atmospheric carbon dioxide (e.g., twice the present value) rather than the gradual buildup observed in nature. This is done to minimize computer time required for completion of the time integrations of these numerical experiments. Integration in this manner introduces what is known as a "cold start" error (20, 21). Global sea surface temperature time series (1) for the past 100 years show two distinct warming periods. The first occurred during the period 1920 to 1940 and was followed by a period of cooling; the second warming began during the 1970s. It is important to note that the increase in ocean heat content preceded the observed warming of sea surface temperature. It is not clear what physical mechanisms may be responsible for the observed increase in ocean heat content. The warming could be due to natural variability, anthropogenic effects, or more likely a combination of both. It may seem implausible that subsurface ocean warming preceded the observed global mean warming of surface air and sea surface temperature. This phenomenon is possible because the density of sea water is a function of salinity as well as temperature. Thus, relatively warm and salty water or cold and fresh water can reach subsurface depths from a relatively small region of the sea surface through the processes of convection and/or subduction and can then spread out and warm or freshen a much larger region such as an entire gyre or basin. This is clearly occurring in the North Atlantic Ocean by the mechanism of deep ocean convection (Fig. 2). Lazier (22) has documented the cooling and freshening of the deep Labrador Sea that began with the renewal of deep convection in the early 1970s. Dickson et al. (23) have related the renewal of convection in the Labrador Sea to the North Atlantic Oscillation (NAO) in sea-level pressure. Nerem et al. 
(24) showed for the period 1993 to 1998 that a relative maximum in global mean sea level and sea surface temperature [based on TOPEX/Poseidon altimetric measurements and the Reynolds sea surface temperature analyses (25)] occurred at the beginning of 1998. This was associated with the occurrence of El Niño. Global sea level began decreasing during the rest of 1998. Part of the reason for extreme values in North Atlantic heat content observed during 1998 may be related to the 1997 El Niño, but additional analyses are required to understand the large increase in the North Atlantic heat content between 1997 and 1998. In addition, we emphasize that the extreme warmth of the world ocean during the mid-1990s was in part due to a multidecadal warming of the Atlantic and Indian oceans as well as a positive polarity in a possible bidecadal oscillation of Pacific Ocean heat content. One possible link between the Northern Hemisphere oceans and the atmosphere may be found in recent research culminating in the publication by Thompson and Wallace (26). Their work indicates that the NAO may in fact be part of a hemispheric mode of sea-level pressure termed the Arctic Oscillation. These authors also relate changes at sea level associated with the NAO to changes at the 500-mb height of the atmosphere. Recently, other investigators have related changes in the Northern Hemispheric stratospheric circulation to tropospheric changes related to the NAO pattern (27-31). Dickson et al. (23) have correlated convection in the Labrador Sea with the polarity of the NAO. To the extent that these relations are found to be statistically significant, it may be that changes we observe in global ocean heat content may be related to the hemispheric and/or global modal variability of the atmosphere, from sea level through the stratosphere. Determining such possible links is a major part of understanding the mechanisms that govern the state of Earth's climate. Our final point relates to the large change in Atlantic heat storage from depths exceeding 300 m. Because convection can result in mixing of water through the entire 2000-m depth of the water column in the Labrador Sea, changes in sea surface temperature may remain relatively small in this region despite a large heat flux from ocean to atmosphere. This flux is responsible for the large changes of heat content we have documented at 1750-m depth. This may be an important consideration when comparing the relative role of the tropics and high-latitude convective regions in effecting climate change, whether due to natural or anthropogenic causes. 8 November 1999; accepted 11 February 2000
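The headline numbers in the abstract above are easy to sanity-check. The short Python sketch below redoes the arithmetic using assumed round values that are not given in the paper (Earth's surface area, seconds per year, seawater density and specific heat, and a ~40-year interval); it recovers the quoted ~0.3 W per square metre and an ocean volume of the right order of magnitude.

```python
# Back-of-envelope check of the heat-content figures quoted above.
# Assumed constants (not from the paper): Earth's surface area, seawater
# density and specific heat, and a ~40-year interval (mid-1950s to mid-1990s).

EARTH_SURFACE_AREA_M2 = 5.1e14      # assumed: ~5.1 x 10^14 m^2
SECONDS_PER_YEAR = 3.156e7
RHO_SEAWATER = 1025.0               # kg/m^3, assumed typical value
CP_SEAWATER = 3990.0                # J/(kg K), assumed typical value

heat_gain_joules = 2e23             # ~2 x 10^23 J (from the abstract)
years = 40                          # mid-1950s to mid-1990s

# Average heating rate per unit area of Earth's surface.
watts_per_m2 = heat_gain_joules / (years * SECONDS_PER_YEAR * EARTH_SURFACE_AREA_M2)
print(f"heating rate ~ {watts_per_m2:.2f} W/m^2")   # ~0.31, close to the quoted 0.3

# Ocean volume implied by a 0.06 C volume-mean warming storing 2 x 10^23 J.
implied_volume_m3 = heat_gain_joules / (RHO_SEAWATER * CP_SEAWATER * 0.06)
print(f"implied 0-3000 m ocean volume ~ {implied_volume_m3:.2e} m^3")  # ~8 x 10^17 m^3
```

The implied volume of roughly 8 × 10¹⁷ m³ is consistent with the 0- to 3000-m layer of the world ocean, which is the kind of internal consistency one would hope to see between the heat gain, the warming rate, and the volume-mean temperature change.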
Any machine that transforms some form of energy (chemical, electrical) into mechanical work delivered through part of its body (the motor shaft) is called a motor. The internal combustion engine is the mechanical unit within which combustion occurs; mechanical work is obtained from the heat of combustion. Diesel and gasoline engines operate on the slider-crank principle. What does this mean? The crank mechanism converts the reciprocating movement of the pistons into the rotary movement of the crankshaft. The internal combustion engine has the following systems or components: - mobile mechanism (piston, connecting rod, crankshaft) - fixed parts (engine block, cylinder head, oil sump) - air intake system (intake manifold, turbo compressor, filter) - exhaust system (exhaust manifold, muffler) - gas distribution systems (camshafts, valves) - fuel system (pump, injector, filter) - lubrication system (oil pump, filter) - ignition system for petrol engines (ignition coil, spark plugs) - electrical system (starter, alternator) - auxiliary systems (cooling, air conditioning)
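As a rough illustration of the slider-crank principle described above, the Python sketch below computes the piston position for a few crank angles using the standard slider-crank relation. The crank radius and connecting-rod length are made-up values chosen only for the example.

```python
import math

def piston_position(theta_rad, crank_radius, rod_length):
    """Distance from the crankshaft axis to the piston pin for a crank angle theta.

    Standard slider-crank relation: x = r*cos(theta) + sqrt(l^2 - (r*sin(theta))^2)
    """
    r, l = crank_radius, rod_length
    return r * math.cos(theta_rad) + math.sqrt(l**2 - (r * math.sin(theta_rad))**2)

# Illustrative (made-up) dimensions: 45 mm crank radius, 150 mm connecting rod.
r, l = 0.045, 0.150
for deg in (0, 90, 180, 270, 360):
    x = piston_position(math.radians(deg), r, l)
    print(f"crank angle {deg:3d} deg -> piston at {x*1000:6.1f} mm from crank axis")

# Stroke = top dead centre minus bottom dead centre = 2 * crank radius.
stroke = piston_position(0, r, l) - piston_position(math.pi, r, l)
print(f"stroke = {stroke*1000:.1f} mm")  # 90.0 mm, i.e. twice the crank radius
```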
This week, I am going to talk about what is arguably the most pressing virus that humans are facing today, not entirely because of its lethality, but because of its widespread effects and absence of a cure. Human Immunodeficiency Virus (HIV) has currently infected 37 million individuals worldwide. Shockingly, it is estimated that only 54% of those who are infected with HIV are aware of their infection, meaning that roughly 46% of those infected are unaware that their bodies are fighting a potentially life-threatening disease. Below, I want to present some facts about HIV: - It is estimated that there are roughly 2 million new HIV infections every year - HIV is a virus that infects a person’s immune system, particularly their CD4 T cells - While the immune system is able to fight off most pathogens without a problem, HIV mutates so quickly in the body that the immune system is unable to effectively combat it - 2-4 weeks after being infected with HIV, a person may experience any of the following symptoms: fever, chills, rash, night sweats, muscle aches, fatigue, swollen lymph nodes, and mouth ulcers. Not all people will show these symptoms, and some people may not show symptoms at all for over 10 years. - After a certain amount of time, these symptoms will subside. When the symptoms subside, the virus is lying dormant in the body. During this time, the virus is being suppressed by the body’s immune system, but it is constantly evolving and mutating to find ways to break free. - Eventually, due to the chronic infection, the body’s immune system begins to lose its ability to hold back the virus and the CD4 T cells begin to die off. At this time, the virus rises in its numbers and the patient will progress to AIDS (acquired immunodeficiency syndrome). When a patient has AIDS, they experience many negative symptoms, and eventually succumb to an opportunistic infection because the body’s immune system has been severely degraded. - There is no cure for HIV, and no vaccine to prevent people from contracting it. Fortunately, we have Anti-Retroviral Therapies (ARTs) which can greatly prolong an infected individual’s life by helping the immune system suppress the virus. These treatments, while effective, are also extremely expensive, adding a financial burden to those who are infected. For more information:
Sigtran carries traditional SS7 signaling over an IP network. The protocol standards are defined in IETF specifications. The SS7 network is fast and reliable, and it is a circuit-switched network with dedicated resources. It has links, which act as streams of messages. This allows multiple streams to work in parallel, enables low latency in the SS7 network, and provides multiple paths to a peer node. The user of an SS7 link gets an immediate link status (congestion, link down, etc.). The network deploys nodes in mated pairs for redundancy, and there are separate wires for SS7 links. The corresponding sigtran adaptation-layer specification (M3UA) is RFC 4666. The Internet is another network, one that kept growing faster and faster over time. Growth happened at the hardware level, from metal cables to fiber optics. This enabled fast transfer of IP packets, but other requirements still had to be met before it could become useful for telecom networks. TCP, or Transmission Control Protocol, was the only connection-oriented protocol for setting up a virtual circuit over an IP network. It is reliable, but it was still not fit for telecom because of the following shortcomings. Single streaming: TCP uses a single stream per connection. A stream is a sequence of bytes. If one byte is corrupted, all following bytes are held back until the corrupted byte is retransmitted by the sender. Imagine there are ten concurrent calls going over one TCP stream: a problem in one call will create problems in the remaining nine calls. No message boundaries: a TCP flow is a stream of bytes, so the sender and receiver have to manage message boundaries themselves. The telecom network has coverage across the globe and both ends need a common protocol to talk, so defining message boundaries would have required yet another protocol on top of TCP. No asynchronous state indication: after a TCP/IP connection is set up, if the IP network fails (e.g. a cable is removed), there is no immediate indication to the sender or receiver. The protocol has no path health-check mechanism, so it would be the TCP application's responsibility to check the health of a connection. Single homed: a TCP connection has one pair of IP addresses and ports. If one IP interface or path fails, all communication on the connection fails. To overcome all these shortcomings of TCP for telecom, SCTP, or Stream Control Transmission Protocol, was standardized. This protocol has the following features. Multi-streaming: an SCTP connection may have multiple streams. While sending a message, the user of SCTP can specify the stream to send it on. This enables parallel processing of calls, so the failure of one call does not disturb the others. During connection setup, both endpoints negotiate the numbers of incoming and outgoing streams. Packet oriented: sends packets (in the style of the UDP protocol) in a connection-oriented mode. Heartbeat: the SCTP protocol uses a heartbeat mechanism for monitoring the health of a connection. The user receives the connection status whenever there is a change in connection state. Multi-homing: an SCTP endpoint includes a list of IP addresses and a port. This keeps a connection active even if one IP network fails. With signaling over IP, the sigtran standard defines a new protocol stack. This brings new nodes into the network. The end node in a sigtran network is called an ASP, or application server process. This is similar to an SSP or SCP in an SS7 network. This node runs the actual SS7 application; an HLR is an example of an ASP in M3UA. It terminates the SS7 traffic over the IP network. An ASP may be connected directly to another ASP, or it may connect to a signaling gateway for reaching other nodes. 
The signaling gateway, or SG, is similar to an STP in the SS7 network. It supports both sigtran and SS7 transport. When a new node over M3UA needs to connect to the telecom network, it connects via an SG. Sigtran vs SS7 There are many differences between sigtran and SS7. Each has its own transport and protocol stack. The following table lists the key differences between sigtran and SS7.
|SS7||Sigtran|
|A legacy protocol in telecommunication. Both media and signaling use the SS7 network.||A relatively new protocol in telecommunication. Only signaling uses sigtran.|
|Uses TDM-based E1 or T1 links for transport of SS7 messages.||Uses an IP network for SS7 signaling messages.|
|Special hardware is required for SS7 links: an SS7 card, which implements MTP2 and MTP1. Dialogic, Digium and Adax are a few vendors of SS7 cards.||No special hardware is required. Sigtran uses an Ethernet card, which is available in all computers.|
|Protocol standards defined by ITU-T.||Protocol standards defined by IETF.|
|Complex implementation of the network layer.||Simpler implementation of the network layer.|
|High in cost and requires additional hardware.||Low in cost and uses the traditional IP network.|
Sigtran protocol stack: the sigtran protocol stack has three layers. Adaptation layer (M3UA, SUA, M2UA, M2PA): this layer uses the services of the SCTP protocol and provides services to an SS7 layer. It starts an association with the peer node to set up an SCTP connection and performs the signaling to bring the AS and ASP up at the adaptation layer. SCTP: a transport layer that provides services to the adaptation layer and uses the services of the IP layer. It performs the four-way handshake with the peer SCTP for connection setup. IP: provides services to SCTP and uses the services of the data link layer. It routes IP packets carrying SCTP as payload. User adaptation layers in sigtran: sigtran is only for transport purposes. The user application should not have to change if a node's connectivity changes from SS7 to sigtran. To make this possible, the sigtran protocols include adaptation layers. An SS7 layer is the user of an adaptation layer, and the adaptation layer uses the services of the SCTP protocol. The sigtran layers are standardized on the basis of the SS7 layer they replace. M3UA, or MTP3 User Adaptation Layer: the peer of MTP3 in sigtran is M3UA. The user of M3UA is SCCP. SUA, or SCCP User Adaptation Layer: the peer of SCCP in sigtran is SUA. The user of SUA is TCAP. M2UA, or MTP2 User Adaptation Layer: the peer of MTP2 in sigtran is M2UA. The user of M2UA is MTP3. M2PA, or MTP2 Peer Adaptation Layer: M2PA is the replacement of the MTP2 layer over IP. Two MTP3 layers can connect directly to each other over M2PA, while M2UA uses the services of MTP2 located at the SG.
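As a minimal illustration of SCTP as the transport underneath these adaptation layers (not of M3UA itself), the Python sketch below opens a one-to-one SCTP socket using only the standard library. It assumes a Linux host with kernel SCTP support; the address and port are illustrative, and stream selection, multi-homing and heartbeat tuning require SCTP-specific socket options that the standard library does not expose (third-party bindings such as pysctp are commonly used for that), so this only shows association setup and a message-oriented receive.

```python
import socket

# A one-to-one (TCP-style) SCTP socket using only the standard library.
# Assumes a Linux host with kernel SCTP support (lksctp); on platforms
# without SCTP, socket.IPPROTO_SCTP is missing and this raises AttributeError.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
sock.bind(("0.0.0.0", 2905))   # illustrative; 2905 is commonly associated with M3UA
sock.listen(5)
print("listening for SCTP associations on port 2905")

conn, peer = sock.accept()      # the four-way handshake happens in the kernel
data = conn.recv(4096)          # unlike TCP, SCTP preserves message boundaries
print(f"received {len(data)} bytes from {peer}")
conn.close()
sock.close()
```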
November 14, 2012 This computer-generated perspective view of Nereidum Montes was created using data obtained from the High-Resolution Stereo Camera (HRSC) on ESA’s Mars Express. Centred at around 40°S and 310°E, the image has a ground resolution of about 23 m per pixel. In the foreground near to the crater, a mud or ice-landslide is seen, possibly due to glacial processes. The striations along the slopes of both the right side of the inner crater wall and on the slopes of the range to the left of the crater indicate the presence of ice.
CASSINI CLOSES IN ON THE CENTURIES-OLD MYSTERY OF SATURN'S MOON IAPETUS Extensive analyses and modeling of Cassini imaging and heat-mapping data have confirmed and extended previous ideas that migrating ice, triggered by infalling reddish dust which darkens and warms the surface, may explain the mysterious two-toned "yin-yang" appearance of Saturn's moon Iapetus. The results, published online Dec. 10 in a pair of papers in the journal Science, provide what may be the most plausible explanation to date for the moon's bizarre appearance, which has puzzled astronomers for more than 300 years. Shortly after he discovered Iapetus in 1671, the French-Italian astronomer Giovanni Domenico Cassini noticed that the surface of Iapetus is much darker on its leading side, the side which faces forward in its orbit around Saturn, than on the opposite trailing hemisphere. Voyager and Cassini images have shown that the dark material on the leading side extends onto the trailing side near the equator. The bright material on the trailing side, which consists mostly of water ice and is ten times brighter than the dark material, extends across the north and south poles onto the leading side. One of the papers, led by Tilmann Denk of the Freie Universitat in Berlin, Germany, describes findings made by Cassini's Imaging Science Subsystem (ISS) cameras during the spacecraft's close flyby of Iapetus on September 10, 2007, and on previous encounters. "ISS images show that both the bright and dark materials on Iapetus' leading side are redder than similar material on the trailing side," says Denk, suggesting that the leading side is colored (and slightly darkened) by reddish dust that Iapetus has swept up in its orbit around Saturn. This observation provides new confirmation of an old idea, that Iapetus' leading side has been darkened somewhat by infalling dark dust from an external source, perhaps from one or more of Saturn's outer moons. The dust may be related to the giant ring around Saturn recently discovered by NASA's Spitzer Space Telescope. However, the ISS images show that this infalling dust cannot be the sole cause of the extreme global brightness dichotomy. "It is impossible that the very complicated and sharp boundary between the dark and the bright regions is formed by simple infall of material. Thus, we had to find another mechanism," said Denk. Close-up ISS images provide a clue, showing evidence for "thermal segregation", in which water ice has migrated locally from sunward-facing, and therefore, warmer areas to nearby poleward-facing and therefore colder areas, darkening and warming the former and brightening and cooling the latter. The other paper, by John Spencer of Southwest Research Institute in Boulder, Colo., and Denk, adds runaway global migration of water ice into the picture to explain the global appearance of Iapetus. Their model synthesizes ISS results with thermal observations from Cassini's Composite Infrared Spectrometer (CIRS) and computer models. CIRS observations in 2005 and 2007 found that the dark regions reach temperatures high enough (129 Kelvin or -227 F) to evaporate many meters of ice over billions of years. Iapetus' very long rotation period, 79 days, contributes to these warm temperatures, by giving the sun more time to warm the surface each day than on faster-rotating moons. Spencer and Denk propose that the infalling dust darkens the leading side of Iapetus, which therefore absorbs more sunlight and heats up enough to trigger evaporation of the ice near the equator. 
The evaporating ice recondenses on the colder and brighter poles and on the trailing hemisphere. The loss of ice leaves dark material behind, causing further darkening, warming, and ice evaporation on the leading side and near the equator. Simultaneously, the trailing side and poles continue to brighten and cool due to ice condensation, until Iapetus ends up with extreme contrasts in surface brightness in the pattern we see today. The relatively small size of Iapetus, which is just 1,500 kilometers (900 miles) across, and its correspondingly low gravity, allow the ice to move easily from one hemisphere to another. "Iapetus is the victim of a runaway feedback loop, operating on a global scale," says Spencer. The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory (JPL), a division of the California Institute of Technology in Pasadena, manages the Cassini-Huygens mission for NASA's Science Mission Directorate, Washington. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging team consists of scientists from the U.S., England, France, and Germany. The imaging operations center and team leader (Dr. C. Porco) are based at the Space Science Institute in Boulder, Colo. The Composite Infrared Spectrometer team is based at NASA's Goddard Space Flight Center, Greenbelt, Md., where the instrument was built, with significant hardware contributions from England and France.
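A rough radiative-equilibrium estimate shows why the darkening matters as much as the articles above describe. The Python sketch below uses assumed values (the solar constant, Saturn's distance, illustrative Bond albedos of about 0.04 for the dark terrain and 0.4 for the bright terrain, consistent with the "ten times brighter" figure, and a slowly rotating, non-conducting surface at the subsolar point) and lands close to the ~129 K that CIRS measured for the dark regions.

```python
# Rough radiative-equilibrium estimate of why darker terrain on Iapetus runs warmer.
# Assumed values (not from the articles above): solar constant 1361 W/m^2 at 1 AU,
# Saturn at ~9.6 AU, albedos of ~0.04 (dark terrain) and ~0.4 (bright terrain),
# unit emissivity, and local equilibrium at the subsolar point of a slow rotator.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
S_1AU = 1361.0           # solar constant at 1 AU, W/m^2
SATURN_DIST_AU = 9.6

flux = S_1AU / SATURN_DIST_AU**2     # ~14.8 W/m^2 of sunlight at Saturn

def subsolar_temp(albedo):
    """Equilibrium temperature of a flat, non-conducting patch facing the Sun."""
    return ((1.0 - albedo) * flux / SIGMA) ** 0.25

print(f"dark terrain   (albedo ~0.04): {subsolar_temp(0.04):.0f} K")  # ~126 K
print(f"bright terrain (albedo ~0.4):  {subsolar_temp(0.4):.0f} K")   # ~112 K
```

The ~14 K gap between the two estimates is the feedback lever the papers describe: the warmer, darker terrain loses ice faster, which darkens and warms it further.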
Learning at Home: Watering Plants & Gardening Activity Getting plants for your household is a great way to teach your child about caring for living things and how they grow. What You Need - Hardy houseplants like ivy, aloe vera, or succulents - Small watering can - Plant food or fertilizer - Pen or dry erase marker - Paper or dry erase board Head out to the garden center and pick plants out together (don’t forget to pick up some plant food or fertilizer, too). Be sure to choose plants that are safe for everyone — stay away from the mildly poisonous poinsettia and philodendron and the highly poisonous shamrock and dieffenbachia. When you bring your plants home, talk about why they need water, how much they need, and how often to give it to them. Come up with a watering schedule — you can even make a calendar on paper or a dry erase board. When it comes time to water, have your child measure water into the watering can and pour it into each pot. Let him decide where he thinks the plants will get the most sun. Then, put the plant food or fertilizer into the soil. Ask your child things like - “What do you think the plant food will do to the plant? What does food do for you?” - “What do you think would happen if you didn’t feed or water your plant?” - A couple weeks down the road, “Do you think your plant looks any bigger? Greener? Leafier?” Don’t forget to have your child dust off the plants’ leaves every so often and talk about why it’s necessary. Say things like, “Can your plant get oxygen when it’s covered in dust?” and “What happens when something can’t breathe?” What Your Child Learns Taking care of a living thing builds compassion, respect, and responsibility in children. Your child will also learn to observe, ask questions, consider problems, and find solutions — the beginnings of scientific thinking.
The Realism Art Movement began in the 1840s in France following the 1848 French Revolution. The style of the movement, as its name would imply, was in favor of focusing on depictions of real life and everyday people. Realist painters depicted common labors and ordinary people going about their contemporary life of the period, often being engaged in real activities in everyday surroundings. The style also used simple, basic details which stood in contrast to the pretty and fanciful detail of the previous styles of art. 4. History and Development The art movement began in France during the 1840s following a turbulent half-century with multiple revolutions and leadership changes, starting with the French Revolution (1789-99) and continuing through the Revolutions of 1848 (1848-49) that spread across parts of Europe. It was also during this period that the Industrial Revolution and the Enlightenment had been spreading across Europe, bringing about many cultural, economic and technological changes. It was in this setting that the art movement of realism challenged the previous art movements of Neoclassicism, Romanticism and History Painting, which had been the dominant art forms in the previous decades. Realism responded to this ever-changing political and social upheaval, as well as the changing landscape, by challenging the previous art movements and focusing on a simple representation of everyday people and nature, as opposed to the fanciful, high-class traditional art forms. It is therefore regarded by many as the first modern art movement. It would take a while for the art to become popular outside of France, as it took until the 1860s for it to start developing in countries like Russia, England, and America. 3. Notable Artists Jean Désiré Gustave Courbet (1819-77) was a French artist who was responsible for leading the realist movement of art from the beginning in France during the 1840s. Courbet, like some in the Realism Art Movement, saw the style as a way to visually represent the margins of French society and attack political power and the art institution in France. At his art exhibition at the Salon in Paris in 1851, Courbet showed one of the most important realism works, "A Burial at Ornans." This painting marked the debut of Realism in the European art world and caused controversy with its large-scale funeral depiction of Courbet's grand uncle, which was something that had only been done for royal or religious works. Ilya Yefimovich Repin (1844-1930) was the most celebrated Russian artist of the 19th century. He played a major role in realism art in Russia and brought it into the greater mainstream culture of Europe. His early 1870s work "Barge Haulers on the Volga" called attention to low-class labor and social inequality in Russia by showing a group of poor workers having to pull a ship upstream with their bodies while tied together. Despite the subject matter, Repin was praised by the Russian nobles for showing the strength of the Russian spirit in the average everyday man. At the same time, he was also showing the world the ways in which rural labor was being exploited. 2. Decline and Subsequent Successive Movements In the 1860s, as the movement moved beyond France and to the rest of Europe and America, it became the most influential art movement for most of the second half of the 19th century. However, as the style became more embraced and adopted by the mainstream world of painting, realism became less common and useful in terms of defining a specific artistic style. 
This was also partially due to Impressionism, which appeared in 1860s France, and the later art movements that followed it. All of these movements put much less importance on the precise, illusionistic brushwork that the realism movement had used. By the 1880s, the Realism Art Movement had ended. Despite the Realism Art Movement ending in the 1880s and being surpassed by other styles of art, it has never really gone away. As mentioned before, when the style was adopted by the mainstream, it lost the specific artistic identity that had defined it. However, realism has still been influential beyond its movement due to its subject matter. The style has since influenced a number of later movements, trends, and artists in the art world. Realism influenced them through its use of illusionistic brushwork and, more importantly, through its focus on the depiction of real, everyday people and subjects, which has led to realist subject matter appearing in later art forms.
You’ve probably seen the terms “buffering” and “buffer” thrown around quite a bit in the technology world, but it can be harder to find a buffering definition that is easy to understand. In short, just know that whatever the form of buffering — and there are different kinds — it generally speeds up what you’re trying to do on a computer. Buffering can prevent lag when you’re streaming video or prevent slow performance when you’re playing a graphics-intensive video game on your desktop computer. Buffering involves pre-loading data into a certain area of memory known as a “buffer,” so the data can be accessed more quickly when one of the computer’s processing units — such as a GPU for video games or other forms of graphics, or a CPU for general computer processing — needs the data. Too Much Internet Buffering Could Mean a Slow Network Connection One common form of buffering occurs when your broadband connection is too slow to stream a video in real time. So your computer will buffer the video data — starting playback when there is enough to prevent video lag. If you see this happen often, it might be time to upgrade your broadband speed, or maybe reset your router, if the download rate is lower than advertised by your Internet provider. Buffer Overflow Can Be a Problem Sometimes too much data gets loaded into a buffer, causing a buffer overflow, a flaw that hackers can exploit to take control of a computer or infect it with a virus. Recent advancements in the ways that programming languages handle memory lessen the chance of a buffer overflow happening, but some older programs are still at risk. While buffering in general helps you to enjoy better computing performance, it can also mean that your Internet connection isn’t quite up to snuff (remember: check your performance regularly with a bandwidth speed test from BandwidthPlace.com). Whatever the reason your computer or video game system is using its buffer, you’re better off now that you have a buffering definition.
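A toy simulation makes the streaming case concrete: data is pre-loaded into a buffer, playback starts only once a threshold is reached, and a stall is counted whenever the buffer runs dry. The download pattern and threshold in the Python sketch below are invented purely for illustration.

```python
from collections import deque

# Toy simulation of a streaming playback buffer: chunks are pre-loaded into the
# buffer and playback only starts once enough is queued to ride out slow spells.
buffer = deque()
PRELOAD_CHUNKS = 5            # how much to buffer before playback starts

download_pattern = [2, 2, 0, 1, 3, 0, 0, 2, 2, 2]   # chunks arriving per tick (made up)
playing = False
stalls = 0

for tick, arriving in enumerate(download_pattern):
    buffer.extend(range(arriving))        # the network delivers some chunks
    if not playing and len(buffer) >= PRELOAD_CHUNKS:
        playing = True                    # enough pre-loaded: start playback
    if playing:
        if buffer:
            buffer.popleft()              # playback consumes one chunk per tick
        else:
            stalls += 1                   # buffer ran dry: the dreaded "buffering..."
    print(f"tick {tick}: buffered={len(buffer)} playing={playing}")

print(f"playback stalled {stalls} time(s)")
```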
The process of heating certain metals to form implements has been around for thousands of years. The introduction of metalworking changed daily life, as harder, stronger, sharper, and tougher tools made work easier and more productive. Today’s welders owe their craft to blacksmiths who forged metal tools using little more than heat and hammers. Early Iron Working The first iron working looked quite different from welding today, especially now that technology has automated some of the processes. During the Iron Age, the most common type of metal produced was wrought iron. Wrought iron was formed as blacksmiths heated metal in a furnace and then hammered (worked or “wrought”) it into shape. While some processes were invented to refine other irons like cast iron and pig iron into the more useful wrought iron, the work was still inefficient into the 1800s. 1 Discovery of Coke In the 1700s, converting coal to coke for the smelting of iron became widespread due to deforestation, as making charcoal required burning wood. This process was later adopted in the United States and led to the rise of the successful coke production industry in places like Pennsylvania. With the ability to melt down heavier metals came the proliferation of true welding, as individual blacksmiths could melt metal at specific points rather than breaking down an entire piece. In 1881, scientists discovered how to create heat from an electric arc, and the first patent for electrode arc welding was filed ten years later. This led to the decline of blacksmithing, which was eventually replaced by welding. In 1930, the New York Navy Yard developed stud welding, a technique that allowed the mass construction of ships. The Emergence of Steel As anybody with welding training knows, steel has far greater strength, durability, and corrosion resistance than iron. However, up until the 1800s, steel was difficult and expensive to manufacture. This changed after the Bessemer process was introduced. In 1856, Henry Bessemer invented a converter that used compressed air to remove carbon, allowing molten pig iron to be converted to steel and making steel much cheaper to mass produce. But the Bessemer process was not perfect, and refinements needed to be made. Many other innovations followed, including the open-hearth process, which converts iron into steel in a broad, open-hearth furnace. Steel continued to dominate through the 20th Century. Today, the United States employs nearly 400,000 welders. 2 Welders have moved past making horseshoes or chainmail like the blacksmiths of centuries past did; instead they tackle modern projects like skyscrapers, aircraft, and even amusement park rollercoasters. As blacksmithing once was, welding remains an essential component of a functioning society. 1 – http://www.anselm.edu/homepage/dbanach/h-carnegie-steel.htm 2 – http://www.bls.gov/ooh/production/welders-cutters-solderers-and-brazers.htm#tab-1
1. Prime Numbers In this video, Teacher Jon introduces secondary students to the primes, making use of the sieve of Eratosthenes, and looks … 2. Fibonacci Numbers In this video, Teacher Jon introduces the Fibonacci sequence using the question posed by Fibonacci in 1202, and continues with … 3. Indices – an Introduction Indices: the First 4 Laws (Indices, Powers or Exponents) Introduces students to the first 4 laws of indices with … 4. Indices – Word Problems Indices – Word Problems (Indices, Powers or Exponents) Solving simple word problems by looking for patterns using indices (or exponents) …
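The videos themselves are not code-based, but the first two ideas they introduce (the sieve of Eratosthenes and the Fibonacci sequence) are easy to sketch in a few lines of Python for anyone who wants to experiment alongside them.

```python
def sieve_of_eratosthenes(limit):
    """Return all primes up to and including limit, the way the sieve works:
    repeatedly cross out multiples of each prime that is still standing."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n, prime in enumerate(is_prime) if prime]

print(sieve_of_eratosthenes(50))
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

def fibonacci(count):
    """First `count` Fibonacci numbers, as in Fibonacci's 1202 rabbit problem."""
    sequence = [1, 1]
    while len(sequence) < count:
        sequence.append(sequence[-1] + sequence[-2])
    return sequence[:count]

print(fibonacci(10))   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```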
Unit of Instruction: Integers and Rational Numbers. Learning Targets and Learning Criteria: • describe the relative position of two numbers on a number line when given an inequality. • understand the absolute value of a number as its distance from zero on the number line. • understand that each nonzero integer has an opposite, and integers are opposites if they are on opposite sides of zero and are the same distance from zero on the number line (-4 and 4). • write, interpret, and explain statements of order for rational numbers in the real world. • recognize that if a < b, then -a > -b, because a number and its opposite are equal distances from zero; recognize that moving along the horizontal number line to the right means the numbers are increasing. • write, explain, and justify inequality statements. In class, we complete the MATH workshop each period. This is a blended, rotation model of teaching where your student receives the standards in multiple different settings while completing many different activities. This allows me to have time with your students in a very small group setting to help, when necessary, further develop their understanding of the standards. Wednesdays we do not complete our rotation. We work lessons customized to each student's level on I-Ready. These lessons are customized to each learning level and scaffold the current unit for each student, allowing advanced students to work ahead and helping to remediate struggling students. This week students will be assigned the following to practice class concepts: IXL: (Monday/Tuesday/Wednesday) 7th Grade C7, C9, B3, B5; students are expected to reach at least an 80% on these assignments. Mathspace: (Thursday/Friday) None If additional help is needed, students are encouraged to view a lesson regarding a given topic on Khan Academy. This website is free and provides both video instruction as well as practice problems. Students are able to complete additional practice on both Mathspace and IXL **** Accommodations provided as needed Please contact me as needed: Remind @ih6advance or email: [email protected]
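For students who like to check ideas on a computer, the short Python sketch below tests two of the learning targets above (absolute value as distance from zero, and the fact that ordering flips when you take opposites) on a few arbitrary example values.

```python
# Quick checks of two of the learning targets above: absolute value as distance
# from zero, and "if a < b then -a > -b". The values are arbitrary examples.
pairs = [(-4, 4), (-7, 2), (0, 5), (-3, -1)]

for a, b in pairs:
    assert abs(a) == abs(-a)    # a number and its opposite are the same distance from 0
    if a < b:
        assert -a > -b          # ordering flips when you take opposites

print("abs(-4) =", abs(-4), "-> -4 and 4 are both 4 units from zero")
print("-7 < 2 and -(-7) > -(2):", -(-7) > -(2))   # True: 7 > -2
```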
Heat – whether from flame or hot liquid – electricity and chemicals can all cause burn injuries. House fires are the main cause of deaths from fire burns, but children are more likely to be hospitalized for burns after contact with steam or hot liquids (scalds), including tap water. Children are at high risk for burns because their skin is thinner than an adult’s skin. For instance, young children’s bathwater should be no hotter than 38 °C even though the recommended standard temperature for household hot water is 49 °C. A child’s skin burns four times more quickly and deeply than an adult’s at the same temperature. Burns from fire From 2010 to 2014, an average of 110 Canadians died yearly from burns suffered in a fire, and ten times that many people were hospitalized for fire-related injuries over the same period. (Source: Statistics Canada) Serious burns can have long-term consequences - Burn victims often must have many skin grafts and may have to wear compression garments for up to two years. - Many are left with disfigurement, permanent physical disability and emotional difficulties. - Burns in children have the added complication that, because children continue to grow, they are likely to have scarring and contracting of the skin and underlying tissue as they heal. Safety tips to reduce the likelihood of burns in your home Smoke detectors save lives The risk of fire-related deaths is 50 per cent lower in homes with at least one functioning smoke detector. Read Parachute’s tips on choosing the right kind of detectors for your home and how to maintain them. Install home fire extinguishers While these have limited ability to control a fire, and are not a substitute for calling the fire department, they can extinguish a small fire. Make sure to install your extinguishers: - In plain view - Above the reach of children - Near an exit route - Away from stoves and heating appliances Ideal locations for extinguishers: - In the kitchen - In the workshop - Upstairs if you live in a multi-storey dwelling - At the top of the basement stairwell There are several varieties of extinguishers, each designed to fight a different kind of fire. Learn more about the kinds of extinguishers, how to operate them and their limitations through these tips provided by the City of Toronto. Consider not using candles They are one of the most common causes of household fires. If you do use candles, place them in sturdy holders that aren’t likely to tip and place them away from any flammable materials, such as curtains or tablecloths. Remember to extinguish them when you leave the room. You can also use battery-operated flameless candles. Hazards to manage if you have children in your home Keep your child away from gas fireplaces. The glass barrier on your gas fireplace can heat up to over 200 °C (400 °F) in about six minutes during use. It takes an average of 45 minutes for the fireplace to cool to a safe temperature after the fire is switched off. Place a barrier around your gas fireplace. Install safety gates around the gas fireplace or at doorways to the room that has the fireplace. Young children under five years of age, and especially those under two years, are most at risk. When young children begin walking, they often fall. Hands and fingers are burned on the glass and metal parts of the door as young children raise their arms to stop their fall. Also, young children are attracted to the flames and want to touch them. Supervise your child. 
Never leave a young child alone near a fireplace; they can be burned before, during and after use of the fireplace. Teach children about the dangers of fire, and supervise. Teach your child the dangers of fire, but teaching alone will not prevent your child from being injured. Young children, especially toddlers, may know a safety rule but will not necessarily follow it. Lighters and matches - Keep lighters and matches out of your child’s sight and reach. - Use child-resistant lighters. Protect your child from scalds Hot water or other liquids can burn skin as badly as fire does, and scalding is a particular risk to young children with their thin skin. In fact, young children under the age of five suffer 60 per cent of scald injuries seen in emergency departments. Here’s how to lower the risk of your child being scalded. Keep your child away from hot liquids Spilled tea, coffee, soup and hot tap water are the leading causes of this injury. Take extra care not to spill hot liquids while drinking or carrying them around your children. Reduce the hot water temperature in your home to 49 °C (120 °F) Hot tap water could seriously burn your child. Tap water causes nearly one-third of scald burns requiring hospitalization. Many Canadian homes have hot water set at 60 °C (140 °F). This can cause a third-degree burn on your child’s skin in just one second. For more information and how to check the hot water temperature, visit our page on hot tap water. Keep your child safely out of the way when you are cooking In a matter of seconds, hot liquids could fall on your child and burn them badly. - Put your baby or toddler in a high chair or playpen to keep them away from the food preparation area, especially the stove. - Make sure preschoolers stay seated at the kitchen table or out of the way while you are cooking. - Use a safety gate to keep your children out of the kitchen when cooking. - Cook on the back burners and turn the pot handles toward the back to prevent your child from being able to reach the pots and tip them. Keep cords from your kettle and other appliances out of your child’s reach Your child could pull at the cords hanging over the edge of the counter and scald themselves with hot water from the kettle.
A magnet is an object that has magnetic poles and therefore exerts forces or torques (twists) on other magnets. There are two types of these magnetic poles—called, for historical reasons, north and south. Like poles repel (north repels north and south repels south) while opposite poles attract (north attracts south). Since isolated north and south magnetic poles have never been found in nature, magnets always have equal amounts of north and south magnetic poles, making them magnetically neutral overall. In a permanent magnet, the magnetism originates in the electrons from which the magnet is formed. Electrons are intrinsically magnetic, each with its own north and south magnetic poles, and they give the permanent magnet its overall north and south poles.
Brushing and flossing are just as important for your child as they are for you. Instilling proper oral hygiene techniques at an early age allows for your child to develop the skills they will need to maintain their teeth for a lifetime. Teeth should be brushed twice per day (morning and night) and flossed once per day. For children under 3 years old, use no more than a grain-of-rice sized smear of fluoride toothpaste, and for children ages 3-6 use a pea-sized amount. You may allow your child to try a turn at brushing on their own, but it is critical that you brush his or her teeth until he or she has the skills to do it properly on their own. As a general guideline, a child is not ready to brush by themselves until they can properly tie their own shoes, and in many cases not until ages 8-10. Dr. V, Dr. Megan, and our hygienists will help you determine if and when your child is ready. Sealants are placed to reduce the risk of tooth decay, and are easy to apply in only a few minutes. Placing a sealant is non-invasive, and it can last several years before needing to be replaced. A sealant is a material that is most often applied to the chewing surfaces of the back teeth. These areas of the teeth are at increased risk for developing cavities, as they have pits and grooves that make it difficult for toothbrush bristles to reach and clean. Dr. V and Dr. Megan can discuss whether placing sealants are an option for your child’s teeth, as sealants can save time, money, and the discomfort or anxiety commonly associated with repairing a decayed tooth. Fluoride is commonly referred to as “nature’s cavity fighter,” as it is a naturally occurring mineral that helps prevent cavities by strengthening the enamel layer of the teeth. Fluoride provides benefit to our teeth in different ways. During tooth development, and prior to tooth eruption, the fluoride we ingest from foods, water, and supplements helps to ensure that our teeth form a strong enamel layer that will be more resistant to cavities. Something as simple as drinking fluoridated tap water instead of bottled water can lower your risk of developing cavities, as science has proven that community water fluoridation prevents at least 25% of the cavities in children and adults. Once teeth are erupted, topically applied fluorides from toothpastes, rinses, and in-office varnishes act to rebuild or remineralize weakened tooth enamel and slow or arrest early decay. Dr. V and Dr. Megan can help you decide if a particular fluoride application can benefit your child’s teeth. For more information about fluoride from the ADA, click the PatientSmart box below.
If you are wondering how to find the area of any quadrilateral, check out this quadrilateral calculator. We implemented three quadrilateral area formulas, so you can find the area given the diagonals and the angle between them, the bimedians and the angle between them, or all four sides and two opposite angles. In the default option you can also find the quadrilateral perimeter. If you are looking for a specific quadrilateral shape - as, for example, a rhombus or a kite - check our comprehensive list of area calculators below.
What is a quadrilateral?
A quadrilateral is a polygon with four edges and four vertices. Sometimes it is called a quadrangle or a tetragon, by analogy to three-sided triangles and polygons with more sides (pentagon, hexagon, heptagon, octagon etc.). Quadrilaterals can be:
- simple (not self-intersecting)
- convex - all interior angles < 180°, both diagonals lie inside the quadrilateral
- concave - one interior angle > 180°, one diagonal lies outside the quadrilateral
- crossed, also called complex, butterflies, or bow-ties (self-intersecting)
There are many types of convex quadrilaterals. The basic ones are:
- Irregular quadrilateral (UK) / trapezium (US): no sides are parallel. That's the case in which our quadrilateral area calculator is particularly useful.
- Trapezium (UK) / trapezoid (US): at least one pair of opposite sides are parallel. An isosceles trapezium (UK) / isosceles trapezoid (US) is a special case with equal base angles.
- Parallelogram: has two pairs of parallel sides.
- Rhombus or rhomb: all four sides are of equal length.
- Rectangle: all four angles are right angles.
- Square: all four sides are of equal length (equilateral), and all four angles are right angles.
- Kite: two pairs of adjacent sides are of equal length.
Quadrilateral area formulas
In this calculator, you can find three ways of determining the quadrilateral area:
- Given four sides and two opposite angles. According to Bretschneider's formula, you can calculate the quadrilateral area as:
area = √[(s - a) * (s - b) * (s - c) * (s - d) - a * b * c * d * cos²(0.5 * (angle1 + angle2))]
where a, b, c, d are the quadrilateral sides, s is the semiperimeter (0.5 * (a + b + c + d)), and angle1 and angle2 are two opposite angles.
- Given the diagonals and the angle between them:
area = 0.5 * p * q * sin(angle), where p, q are the diagonals
- Given the bimedians and the angle between them:
area = m * n * sin(angle), where m, n are the bimedians - lines that join the midpoints of opposite sides
How to find the quadrilateral area?
Assume that you want to calculate the area of a parcel. It's in a quadrilateral shape, but not any specific one - it's neither a rectangle nor a trapezoid. The quadrilateral area calculator is the best choice!
- Choose the option with your given parameters. The easiest to measure in the field or on a map is the default option, with 4 sides and 2 opposite angles given. Let's pick that option.
- Enter the given values. For example, a = 350 ft, b = 120 ft, c = 280 ft, d = 140 ft, angle1 = 70°, angle2 = 100°. Remember that you can easily change the units by clicking on the unit name and selecting the one you need.
- The quadrilateral area calculator displays the area, as well as the perimeter. The area is equal to 39,259 ft² and the perimeter to 890 ft in our example. Now you know how much material you need to fence the parcel.
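For readers who prefer to see the arithmetic spelled out, here is a minimal Python sketch of Bretschneider's formula as given above. The function name and the sample values are my own (reusing the parcel example); this is not the calculator's actual source code.

```python
import math

def quadrilateral_area(a, b, c, d, angle1_deg, angle2_deg):
    """Area of a quadrilateral from four sides and two opposite angles
    (Bretschneider's formula). Angles are given in degrees."""
    s = 0.5 * (a + b + c + d)                                   # semiperimeter
    half_angle_sum = math.radians(0.5 * (angle1_deg + angle2_deg))
    term = ((s - a) * (s - b) * (s - c) * (s - d)
            - a * b * c * d * math.cos(half_angle_sum) ** 2)
    return math.sqrt(term)

# The parcel from the example: a = 350 ft, b = 120 ft, c = 280 ft, d = 140 ft,
# with opposite angles of 70 and 100 degrees.
area = quadrilateral_area(350, 120, 280, 140, 70, 100)
perimeter = 350 + 120 + 280 + 140
print(f"area ≈ {area:,.0f} ft², perimeter = {perimeter} ft")   # ≈ 39,259 ft², 890 ft
```

Running this reproduces the figures quoted in the walkthrough, which is a handy way to double-check a hand calculation.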
Definition of Norse in English:
1 [mass noun] The Norwegian language, especially in an ancient or medieval form, or the Scandinavian language group.
- Many of the events are legendary and bear similarities to other Germanic historical and mythological literature in Old English, Norse and German.
- Borrowings from Gaelic, Norse, and Norman French have created a diverse patchwork of regional dialects.
- The inscriptions are in runes and Old Norse, but the personal names (both Norse and Celtic) and the grammatically-confused language suggest a thoroughly mixed community.
2 (as plural noun the Norse) Norwegians or Scandinavians in ancient or medieval times: he spent a lifetime fighting against the Norse
More example sentences
- The maritime supremacy of the Norse, however, was destroyed and surpassed by the cities that belonged to the Hanseatic League.
- He was killed in battle by Malcolm III Canmore, Duncan's son, in alliance with the Norse.
- In particular, the Danes, Norse and Saxons, regularly tattooed themselves with family symbols and crests, and the early Britons used tattoos in ceremonies.
adjective
Relating to ancient or medieval Norway or Scandinavia: Loki was the Norse god of evil; Norse settlements in Ireland; Keld is a Norse word meaning ‘a spring’
More example sentences
- These are rather crude divisions, further complicated from the late C8 onwards by raids and settlement involving Norse peoples from what is now Scandinavia.
- Stories about Inuit with distinct European features - blue eyes, fair hair, beards - living in the central Arctic have their roots in ancient tales of Norse settlements and explorations.
- He named the property Asgaard, the name given to the home of the ancient Norse gods.
Norseman noun (plural Norsemen), pronunciation /ˈnɔːsmən/
Example sentences
- Alfred's dynasty, which had survived Danes, Norsemen, and Danes again, had succumbed at last to foreign invasion.
- Large-scale migrations of Angles, Saxons, Jutes, Danes, and Norsemen, and substantial movements between Ireland, Scotland, and Wales, make estimates very hazardous.
- Iona had meanwhile, in consequence of the occupation of the Western Isles by the Norsemen, been practically cut off from Scotland, and had become ecclesiastically dependent on Ireland.
Words that rhyme with Norse: coarse, corse, course, divorce, endorse (US indorse), enforce, force, gorse, hoarse, horse, morse, perforce, reinforce, sauce, source, torse
How do I find the length of a segment of a circle?
If we know the degree measure, we can use the following "formula":
[degree measure / 360] x [2*pi*r] = s (where s = arc length)
If you have taken Trig, you might be familiar with "radian" measure ("rads"). The "formula" for this is s = rΘ (where Θ is in "rads").
There are two main "slices" of a circle: the sector and the segment. The Quadrant and Semicircle are two special types of Sector: a quarter of a circle is called a Quadrant, and half a circle is called a Semicircle.
Area of a Sector
You can work out the Area of a Sector by comparing its angle to the angle of a full circle. Note: I am using radians for the angles. This is the reasoning:
Area of Sector = ½ × θ × r² (when θ is in radians)
Area of Sector = ½ × (θ × π/180) × r² (when θ is in degrees)
By the same reasoning, the arc length (of a Sector or Segment) is:
L = θ × r (when θ is in radians)
L = (θ × π/180) × r (when θ is in degrees)
Area of Segment
The Area of a Segment is the area of a sector minus the triangular piece (shown in light blue here). There is a lengthy reason, but the result is a slight modification of the Sector formula:
Area of Segment = ½ × (θ - sin θ) × r² (when θ is in radians)
Area of Segment = ½ × ( (θ × π/180) - sin θ ) × r² (when θ is in degrees)
Great pictures here. Thanks anonymous.
I am confused by the question. A segment has an area and it has a circumference but it does not have a length. So what is it that you really wanted?
Ninja, I love all these circle definitions. Do you think that they should make their way into one of your threads? I'm not sure which one. Umm, I guess if you reference the info post directly it can go straight into the reference post. What do you think?
If you click on the title of the post that you want (rather than the thread) then you will reference directly to THAT post.
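To make the formulas above concrete, here is a small Python sketch that evaluates the arc length, sector area and segment area for a given radius and angle in degrees. The function names are mine and purely illustrative.

```python
import math

def arc_length(radius, angle_deg):
    """Arc length: s = (degree measure / 360) * 2*pi*r."""
    return (angle_deg / 360.0) * 2.0 * math.pi * radius

def sector_area(radius, angle_deg):
    """Area of a sector: 1/2 * theta * r^2, with theta in radians."""
    theta = math.radians(angle_deg)
    return 0.5 * theta * radius ** 2

def segment_area(radius, angle_deg):
    """Area of a segment: 1/2 * (theta - sin(theta)) * r^2, theta in radians."""
    theta = math.radians(angle_deg)
    return 0.5 * (theta - math.sin(theta)) * radius ** 2

# A 90-degree slice (a Quadrant) of a circle with radius 10:
print(arc_length(10, 90))    # 15.707... (a quarter of the circumference)
print(sector_area(10, 90))   # 78.539... (a quarter of the circle's area)
print(segment_area(10, 90))  # 28.539... (the quadrant minus the 10x10/2 right triangle)
```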
1. _______________________ communication means communicating with oneself. 2. The type of communication most often characterized by an unequal distribution of speaking time is _______________________. 4. A __________________________ is the method by which a message is conveyed between people. 5. _________________________ is the discernible response of a receiver to a sender's message. 1. _____________________________ is like a mental mirror that reflects how we view ourselves. 2. _____________________________ are personal stories that we and others create to make sense of our personal world. 3. ____________________ is the ability to re-create another person's perspective, to experience the world from the other's point of view. 4. __________________________ is the compassion one may feel for another person's predicament. 3. Fast or slow, small or large, smart or stupid, and short or long are examples of ______________________ words. 4. ________________ ______________ contains words that sound as if they are describing something when they are really announcing the speaker's attitude toward something. _____________________ is the process in which sound waves strike the eardrum and cause vibrations that are transmitted to the brain. 2. ________________________ occurs when the brain interprets sound and gives meaning to the sound. 3. _______________________ listeners respond only to the parts of a speaker's remarks that interest them, rejecting everything else. 4. ___________________ listeners listen carefully, but only because they are collecting information to attack what you have to say. 1. ____________________ are deliberate nonverbal behaviors that have precise meaning known to everyone within a cultural group. 3. ________________ ____________ is/are the combination of two or more expressions showing different emotions. ____________________ is the process of deliberately revealing information about oneself that is significant and that would not be normally known by others. Recent research shows that women often build friendships through shared positive feelings, whereas men often build friendships through _______________ __________________.
An Infectious Idea: Prevention of Communicable Diseases “No sooner had diphtheria been conquered and almost cast into oblivion than this new horror appears on the horizon,” wrote Medical Officer of Health Dr. G.P. Jackson in 1934. He was referring to polio, but he could have been talking about any new disease in history—smallpox, typhoid, scarlet fever, tuberculosis, diphtheria, polio, or AIDS. Every generation has its plague. In the early 20th century, there was resistance to compulsory vaccination, even amongst some doctors. Some people believed that vaccination was unproven scientifically, that it polluted the body, and that catching smallpox was unlikely in any case. Others feared that vaccination might cause them physical harm, and sometimes this did occur. Vaccination then was more dangerous and painful than it is today. For example, smallpox vaccinations required scratching a patch of skin raw, and occasionally this led to sores that did not heal for weeks. With better vaccination techniques and education, public opinion slowly changed, and in time vaccination of school children against a wide variety of diseases, including diphtheria, polio, and measles, became standard. In 1937, Dr. Jackson observed that vaccination is a case in which “individual hazard is improved in direct ratio to the intelligent action of one’s neighbours.” The Department of Public Health has always encouraged people to take charge of their own health. In 1912 Medical Officer of Health Dr. Charles Hastings introduced a “swat the fly” campaign to reduce the spread of typhoid from garbage to food via flies. The three photographs below illustrate how the Department of Public Health used epidemiology (the study of what causes and spreads disease) to decrease risks in such common activities as drinking water. The first photograph shows a drinking fountain with a tin cup chained to it that was used by every person who needed a drink. Horses used the large basin on the opposite side, and dogs drank from the basin near the ground. Dr. Canniff supported the installation of these fountains in the 1880s, saying that they prevented disease and, as a bonus, might deter people from going into saloons to quench their thirsts. Canniff’s successor, Dr. Hastings, believed that this kind of fountain could spread disease and documented scientifically how the common drinking cup could pass bacteria and sickness from one user to the next. Hastings’ solution was the “bubble fountain”—the kind of drinking fountain we use today, where the drinker’s mouth touches only clean water. The fountains below are waiting for installation by the Department of Public Works. The department knew it was important to trace the spread of diseases. Doctors were legally required to report to the medical officer of health when they learned of cases of infectious diseases such as typhoid and scarlet fever. However, many doctors didn’t know of the requirement, or didn’t bother. Tracing people who may have been infected with disease required discretion and an understanding of the need to balance public health with the right to personal privacy. Once a contagious illness was revealed, the department had the right to quarantine a household until the danger of infecting others was past. Stanley Barracks at Exhibition Place was used as civilian housing before and after World War II. Five families were quarantined there because one child had polio. 
The children shown here were pictured in The Globe and Mail newspaper for ignoring the quarantine and playing with other children. As communicable diseases became less common, the need for quarantines declined. However, quarantine was used in 2003 to combat SARS and it is possible that new diseases in the future may bring it back into use.
Would you like to do a simple but slightly dangerous physics demo? It will be fun. You just need a basketball and a tennis ball (or any two bouncy balls of different mass). Hold the basketball above the ground with the tennis ball on top of that. Let go of both at the same time. Here's what should happen. Isn't that awesome? You can get that tennis ball flying off with some pretty serious velocity. Why is this dangerous? Well, if you don't drop them such that they are absolutely vertical, the tennis ball can launch off at an angle. It's possible that that angle gives the tennis ball a trajectory right towards your face. It's happened to me more than once. OK, but what is going on here? It might seem like this is a physics cheat to have the tennis ball bounce so much higher than it started (and that's probably why it's so cool to see it). But in terms of energy, it's all legitimate. Both the basketball and the tennis ball are moving at the same speed right before they hit the ground. This means they both have some amount of kinetic energy, but the basketball has more due to its larger mass. After the collision, the basketball has a very low speed and thus very little kinetic energy. That means the tennis ball gets a bunch of kinetic energy—and with a low mass, you get a high velocity. But how do you make the basketball stop after the collision? Let's answer this question by looking at a slightly simpler problem. Suppose I have two balls of different mass moving towards each other with the same speed. Like this. With different mass balls, the final speeds of the two balls will also be different. How do you find the speeds of the balls after the collision? One way is to consider both the total momentum (product of mass and velocity) and total kinetic energy (one half of mass and velocity squared). For a very bouncy collision, both momentum and kinetic energy should be conserved. It's very straightforward to mathematically solve for the ratio of masses such that one of the balls stops. But there is another way to approach this problem—a way that is more interesting (at least to me). What if we instead make a numerical calculation for two colliding balls? In a numerical calculation, the motion can be broken into many small steps in time. During each small time step, the forces can be considered to be approximately constant to make very many simple physics problems. In this case, there will only be a force pushing the balls apart when they collide (overlap). I can model this collision force as though it were a spring pushing them apart. This is essentially what happens anyway. OK, let's just jump to the numerical model. You can press "play" to run the code and the "pencil" to view or edit it. If you want to play with the code (and you should), you can change the ratio of the masses and run it again. Just to be clear—you need to change the mass ratio so that after the collision the heavy mass is stopped. Since I am already doing this collision with a computer, I can do it a whole bunch of times and create a plot of final velocity as a function of the ratio of masses. Here is the code (it's sloppy) and there is the plot. Here we get the answer. If the mass ratio is 3 to 1, the heavy mass stops. In that case all of the kinetic energy ends up with the lower-mass ball. However, notice that the 3 to 1 ratio doesn't produce the greatest final speed—the final speed of that approaches a maximum value as the mass ratio goes to zero. But the goal was to make the heavier mass stop—it's more fun that way. 
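The article's embedded program is not reproduced here, but the approach it describes — a spring-like force applied whenever the two balls overlap, stepped forward over many small time intervals — can be sketched in a few lines of Python. The masses, spring constant and time step below are illustrative values chosen for this sketch, not the author's.

```python
# Minimal sketch of the head-on collision model described above: two balls
# approach each other at the same speed, and a spring-like force pushes them
# apart whenever they overlap. All names and values here are illustrative.
m1, m2 = 3.0, 1.0          # mass ratio of 3 to 1
r = 0.1                    # ball radius (m)
k = 1e4                    # collision "spring" constant (N/m)
x1, x2 = -0.5, 0.5         # positions (m)
v1, v2 = 1.0, -1.0         # equal and opposite speeds (m/s)
dt = 1e-5                  # small time step (s)

t = 0.0
while t < 1.0:
    overlap = (2 * r) - (x2 - x1)
    F = k * overlap if overlap > 0 else 0.0   # pushes ball 1 left, ball 2 right
    v1 += (-F / m1) * dt
    v2 += (+F / m2) * dt
    x1 += v1 * dt
    x2 += v2 * dt
    t += dt

print(f"final speeds: heavy ball {v1:.3f} m/s, light ball {v2:.3f} m/s")
# Expect roughly 0 m/s for the heavy ball and +2 m/s for the light one,
# which is what conservation of momentum and kinetic energy predict
# for a 3-to-1 mass ratio.
```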
Dropping a tennis ball on top of a basketball might make everyone think you are cool (the ultimate goal in physics). But what about dropping more than 2 balls? Here is a toy you can buy that has FOUR balls of different masses. Watch what happens when I drop it (in slow motion). Of course the goal in physics is to build models. So, let's build a model of multiple dropped balls. Instead of 4 balls, I will just do 3 and let you add more balls if you like. I will still use my same ball collision model (where there is a spring force between them), but this time I am going to have the balls fall and hit a ground just like the actual balls. Here is the basic idea of how this calculation works.
- Make three balls (that part is obvious).
- Calculate the gravitational force on each ball (so they will fall).
- Check to see if any balls overlap. If so, then calculate the "spring" force between them based on the amount of overlap.
- Check the bottom ball to see if it "hits" the floor. If so, just reverse its momentum.
- Repeat forever or until you get bored.
Now for the code. Press "run" to play. If you want to answer the homework questions below, you will need to modify the code. At the very least, you should try changing the masses of the balls (I just pick random starting values). That looks pretty good. Now for some homework. This is gonna be great.
- Check to see if momentum is conserved in the first horizontal collision program. Momentum isn't conserved in the three-ball-drop case since I didn't include the momentum of the Earth.
- Is kinetic energy conserved in the horizontal collision? If it's not, then there is a problem—a big problem.
- What about for the three ball drop? Is total energy conserved?
- How high does the top ball go?
- What happens if you change the collision spring constant?
- Most important question: can you find a ratio of masses that gives the small ball the highest bounce?
- Modify the code so that there are four bouncing balls.
- Based on energy calculations alone, suppose you drop three balls such that the bottom two stay at rest. How high above the ground would you have to drop this so that the top ball makes it into outer space? What about the drop height to get the ball into orbit? Assume no air resistance.
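As with the two-ball case, the embedded program is not included here, but the five numbered steps above translate almost directly into a short Python sketch. All masses, radii and constants below are made-up illustrative values, not the ones used in the article's code.

```python
# A minimal sketch of the three-ball-drop recipe above (steps 1-5).
g = 9.8
k = 5e4                                   # collision "spring" constant
dt = 1e-5
balls = [                                 # bottom, middle, top
    {"m": 0.60, "r": 0.12, "y": 1.00, "v": 0.0},
    {"m": 0.20, "r": 0.08, "y": 1.21, "v": 0.0},
    {"m": 0.05, "r": 0.05, "y": 1.35, "v": 0.0},
]

t, top_max = 0.0, 0.0
while t < 2.0:
    # gravity on every ball
    forces = [-b["m"] * g for b in balls]
    # spring force between overlapping neighbours
    for lower, upper in ((0, 1), (1, 2)):
        gap = balls[upper]["y"] - balls[lower]["y"]
        overlap = (balls[lower]["r"] + balls[upper]["r"]) - gap
        if overlap > 0:
            F = k * overlap
            forces[lower] -= F            # lower ball pushed down
            forces[upper] += F            # upper ball pushed up
    # update velocities and positions, bounce the bottom ball off the floor
    for b, F in zip(balls, forces):
        b["v"] += (F / b["m"]) * dt
        b["y"] += b["v"] * dt
    if balls[0]["y"] - balls[0]["r"] < 0 and balls[0]["v"] < 0:
        balls[0]["v"] = -balls[0]["v"]    # reverse its momentum
    top_max = max(top_max, balls[2]["y"])
    t += dt

print(f"highest point reached by the top ball: {top_max:.2f} m")
```

Changing the three mass values and re-running is an easy way to start on the homework question about which mass ratio gives the small ball the highest bounce.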
Blood Pressure Basics You can’t see your blood pressure or feel it, so you may wonder why this simple reading is so important. The answer is that measuring your blood pressure gives your doctor a peek into the workings of your circulatory system. A high number means that your heart is working overtime to pump blood through your body. This extra work can result in a weaker heart muscle and potential organ damage down the road. Your arteries also suffer when your blood pressure is high. The relentless pounding of the blood against the arterial walls causes them to become hard and narrow, potentially setting you up for stroke, kidney failure, and cardiovascular disease. Having your blood pressure measured is a familiar ritual at most visits to the doctor’s office. The examiner inflates a cuff around your upper arm, listens through a stethoscope, watches a gauge while deflating the cuff, and then scribbles some numbers on your chart. You may be relieved if you learn your blood pressure is normal or alarmed if the examiner says “150 over 100.” But what do these numbers actually mean? Figure 1: Measuring blood pressure Understanding the numbers Blood pressure is recorded as millimeters of mercury (mm Hg) because the traditional measuring device, called a sphygmomanometer, uses a glass column that’s filled with mercury and is marked in millimeters. A rubber tube connects the column to an arm cuff. As the cuff is inflated or deflated, mercury rises and falls within the column (see Figure 1). Although mercury gauges are still considered the gold standard for measuring blood pressure, mercury-free devices are available. Many modern instruments use a spring gauge with a round dial or a digital monitor, but even these are calibrated to give readings in millimeters of mercury. The top number, or systolic pressure, reflects the amount of pressure during the heart’s pumping phase, or systole. As the heart contracts with each beat, pressure in the arteries temporarily increases as blood is forced through them. The bottom number, or diastolic pressure, represents the pressure during the resting phase between heartbeats, or diastole. Hypertension is defined as having a systolic reading of at least 140 mm Hg or a diastolic reading of at least 90 mm Hg, or both (see Table 1). The Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC), a group of physicians and researchers from across the United States, developed guidelines for classifying blood pressure in 2003. The figures are based on extensive reviews of the scientific literature and are updated periodically to keep pace with new research. To classify your blood pressure, a health professional averages two or more readings taken after you have been seated quietly for at least five minutes. For example, a patient with a measurement of 135/85 mm Hg on one occasion and 145/95 mm Hg on another has an average blood pressure of 140/90 mm Hg and is said to have stage 1 hypertension. The JNC guidelines emphasize the importance of tackling escalating blood pressure earlier rather than later, thereby heading off heart disease, stroke, and kidney damage. The consensus among hypertension specialists is that people should do whatever it takes to get their blood pressure numbers down to the healthy range. Whatever works for you is the right strategy to adopt, but it most likely will involve some combination of diet, exercise, stress reduction, and medication. 
Table 1 offers a brief summary of these strategies, depending on which blood pressure category you fall into; the following chapters discuss lifestyle modifications and antihypertensive medications in greater detail (see “Lifestyle changes to lower your blood pressure” and “Medications for treating hypertension”). Table 1: Classifying and treating hypertension |Category*||Systolic blood pressure |Diastolic blood pressure |What you should do| |Normal||Less than 120||Less than 80||Stick with a healthy lifestyle, including following a diet rich in fruits and vegetables and low in salt, using alcohol moderately, and maintaining a healthy weight.| |Prehypertension||120–139||80–89||Change health habits. If you’re heavy, lose weight. Reduce salt in your diet. Eat more fruits and vegetables, and get more exercise. Drink alcohol only in moderation. You do not need medication at this stage if you don’t have other health conditions. If you have diabetes or kidney disease, begin drug therapy if your blood pressure is at or above 130/80.**| |Stage 1 hypertension||140–159||90–99||Change your health habits and take a blood pressure drug. Many people start with one medication, but may need to go to a second or third to find a treatment that works. If you have other health conditions, you may need a different drug or an additional one.| |Stage 2 hypertension||160 or higher||100 or higher||Change your health habits. It’s likely that you’ll need to take at least two blood pressure medications.| |*Note: When systolic and diastolic pressures fall into different categories, physicians rate overall blood pressure by the higher category. For example, 150/85 mm Hg is classified as stage 1 hypertension, not prehypertension.**Because both hypertension and diabetes target the same major organs, drug therapy is generally initiated at an earlier stage in people with diabetes. The goal is to maintain blood pressures below the 130/80 threshold. For people with diabetes or chronic kidney disease, stage 1 hypertension is defined as systolic pressure of 130–159 or diastolic pressure of 80–89.| If your reading is normal If your blood pressure is below 120/80 mm Hg, this is where you want it to stay. If you are already committed to a healthy lifestyle, keep it up. If you’ve managed to keep within the normal range without much thought about your health habits, you might want to think again. Data from the Framingham Heart Study suggest that even if your blood pressure is normal at age 55, you run a 90% risk of developing hypertension within your lifetime. But a combination of exercise, weight loss, limited salt intake, a diet rich in fruits and vegetables, and limits on alcohol consumption can prevent hypertension (see “Lifestyle changes to lower your blood pressure”). You have prehypertension if your systolic blood pressure reading is 120 to 139, your diastolic pressure is 80 to 89, or both. The risk of cardiovascular disease begins climbing at pressures as low as 115/75 mm Hg, and it doubles for every 20-point increase in systolic pressure and each 10-point increase in diastolic pressure. If your blood pressure falls into the prehypertension category and you do not have any other risk factors, lifestyle changes are the recommended treatment at this stage. If you have diabetes or chronic kidney disease, you should begin using antihypertensive medications at pressures of 130/80 mm Hg. Stage 1 hypertension You have stage 1 hypertension if your systolic blood pressure is 140 to 159 or your diastolic pressure is 90 to 99, or both. 
If you don’t have any accompanying conditions such as heart disease, diabetes, kidney disease, or a history of stroke, you will usually start with lifestyle modifications and a single medication. Your doctor may let you try lifestyle modifications alone for two or three months to see if you may be able to avoid medication altogether, but many people find that they need to take some type of medication in order to reduce their blood pressure numbers to healthy levels. You may have to try several drugs to find one that works best. The initial choice of drug may depend on whether you have other health problems — such as diabetes, migraine headaches, or cardiac arrhythmias — in addition to hypertension. The JNC guidelines also recommend that African Americans, who are at a higher-than-average risk for hypertension-related complications, start with a two-drug regimen if blood pressure readings top 145/90 mm Hg. Stage 2 hypertension You have stage 2 hypertension if your systolic pressure is at least 160 mm Hg, your diastolic pressure is at least 100 mm Hg, or both. In addition to lifestyle modifications, you will probably need to take at least two medications. If this course of action fails to bring your blood pressure down to your target level (below 140/90 for most individuals and below 130/80 for those with diabetes or chronic kidney disease), your doctor may add additional drugs to the mix. What does blood pressure measure? Blood pressure reflects both how hard your heart is working and what condition your arteries are in. The formula is as simple as ABC — or actually, C × A = B, that is, cardiac output times arterial resistance equals blood pressure. Cardiac output is the amount of blood your heart pumps per minute. With each beat, your heart propels about 5 ounces of blood into the arteries. That adds up to about 4 to 5 quarts over the course of a minute of normal activity. During strenuous activity, your heart must pump considerably more blood to meet your body’s increased demand for oxygen. Arterial resistance is the pressure the walls of the arteries exert on the flowing blood. As blood pushes into the arteries with each heartbeat, it forces the artery walls to expand, much like an elastic waistband stretches to accommodate your body. When the blood flow ebbs, the vessel returns to its original shape. The less flexible the vessels are, the greater the arterial resistance. Narrowed, tightened, or inflexible vessels result in a higher pressure at any level of flow. As cardiac output or arterial resistance increases, so does blood pressure. Figure 2: How the body regulates blood pressure Like an expert driver, the body constantly adjusts blood pressure in response to small changes in the environment. The central mechanism for regulating blood pressure is the renin-angiotensin-aldosterone system. This hormonal interaction takes place primarily in the circulatory system, the nervous system, and the kidneys. However, the renin-angiotensin system appears to operate independently within other organs, such as the brain and the blood vessels. Natural blood pressure controls Your blood pressure is never constant, nor should it be. Your body continually adjusts cardiac output and arterial resistance to deliver oxygen and nutrients to the tissues and organs that most need them — your muscles during a jog or your digestive system at mealtime, for example. Your blood pressure also varies according to the time of day. It’s highest in the morning and lowest at night during sleep. 
Your body can make dramatic adjustments in blood pressure within seconds. A sprint for the elevator, the sound of breaking glass, or a confrontation with someone may send blood pressure soaring from an idling 110/70 mm Hg to a racing 180/110 mm Hg or higher. These changes occur without conscious thought and are directed by complex interactions among your central nervous system, hormones, and substances produced in your blood vessels. A key player in blood pressure regulation is the layer of cells lining the inner wall of blood vessels. This layer, known as the endothelium, encompasses a surface area approximately equivalent to seven tennis courts. Far from being an inert conduit for blood flow, the endothelial lining secretes dozens of substances that interact with the circulating blood as well as with the cell layer that lies below it. Of particular interest in hypertension research are the vasodilators (nitric oxide and prostacyclin) and the vasoconstrictors (angiotensin II and endothelin-1). These chemical messengers instruct your blood vessels to widen or narrow based on your body’s minute-by-minute blood flow requirements. As long as your blood pressure is in the normal range, healthy vessels tend to be dilated. When blood pressure gets too high (such as during times of stress) or too low (when you’re dehydrated, for example), pressure-sensing nerve cells located throughout your circulatory system relay this information to your autonomic nervous system. The autonomic nervous system manages the involuntary activities of smooth muscles, including those in the intestines, sweat glands, airways, heart, and blood vessels. It responds by setting off a chain of events designed to restore blood pressure to normal levels (see Figure 2).
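To summarize the classification scheme from Table 1 in one place, here is a minimal Python sketch of the JNC categories, including the rule that when the two numbers fall into different categories, the higher category applies. The function name is mine; the cutoffs are exactly those in the table.

```python
def classify_blood_pressure(systolic, diastolic):
    """Classify an average blood pressure reading (mm Hg) using the JNC
    categories in Table 1. When the two numbers fall into different
    categories, the higher category wins."""
    def category(value, cutoffs):
        # cutoffs: thresholds for prehypertension, stage 1, stage 2
        if value >= cutoffs[2]:
            return 3   # stage 2 hypertension
        if value >= cutoffs[1]:
            return 2   # stage 1 hypertension
        if value >= cutoffs[0]:
            return 1   # prehypertension
        return 0       # normal

    labels = ["normal", "prehypertension",
              "stage 1 hypertension", "stage 2 hypertension"]
    rank = max(category(systolic, (120, 140, 160)),
               category(diastolic, (80, 90, 100)))
    return labels[rank]

print(classify_blood_pressure(118, 78))   # normal
print(classify_blood_pressure(135, 85))   # prehypertension
print(classify_blood_pressure(150, 85))   # stage 1 (the higher-category rule)
print(classify_blood_pressure(162, 102))  # stage 2 hypertension
```

Note that this reflects only the general table; as the footnote explains, people with diabetes or chronic kidney disease are treated at lower thresholds.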
The Landscape Approach is a process that facilitates land use planning and management for multiple stakeholders in a defined landscape. This can be a cultural region, a district with legal boundaries or a geographical defined region. Importantly, the Landscape Approach seeks to simultaneously address development and conservation goals at a landscape scale. Important stakeholders may be excluded from the Landscape Approach if landscapes are defined purely by cultural or geographical boundaries. A landscape for a water catchment area, for example, could be defined by the geographical catchment area, or by the local community that lives within the catchment area. Developing a land use plan for a water catchment area is likely to involve stakeholders outside the water catchment area, such as governments or remote stakeholders with water access rights. Landscapes can be defined in many ways, but from a land use planning point of view, landscapes are most usefully defined by stakeholders rather than natural catchments or landforms. Ultimately the Landscape Approach seeks to develop an ongoing planning and management process, which delivers an equitable land use balance for all stakeholders within a particular landscape. The Landscape Approach will often include (WWF 2015): - Engaging all key stakeholders in a participatory process to discuss, design and manage landscape action - Recognising the motivations of all key stakeholders, which often involves analysis of issues, drivers and spatial relationships - Collaborative planning WWF supports the Landscape Approach Promoting the Landscape Approach is a core component of WWF’s Sustainable Land Use Planning and Management Program. The Program seeks to develop new tools and concepts for land use planning and management that will help achieve environmental, social and economic benefits in areas where agriculture, forestry and other productive land use compete with environmental and biodiversity goals. 10 principles of the Landscape Approach A special feature on the Landscape Approach was published in the Proceedings of the National Academy of Science (PNAS) during 2013. The special feature, written by Jeffry Sayer et al, identified ten principles that are commonly applied to the Landscape Approach. These principles include: - Landscape processes are dynamic and landscape change must inform decision-making. - Solutions to problems need to be built on a shared negotiation processes. - Landscape function can be influenced by a range of feedback mechanisms, synergies, interactions, external influences and constraints. Outcomes at any scale are shaped by processes operating at other scales. An awareness of these higher and lower level processes can improve local intervention. - Landscapes and their components have multiple uses, each of which is valued in different ways by different stakeholders. - Developing a Landscape Approach requires a patient iterative process of identifying stakeholders and recognising their concerns and aspirations. - Coordinating activities by diverse stakeholders requires a shared vision and a broad consensus on general goals, challenges, concerns, options and opportunities. Building and maintaining such a consensus is a fundamental goal of a Landscape Approach. - Rules on resource access and land use need to be clear. The rights and responsibilities of different stakeholders need to be clear and accepted by all stakeholders. - Information can be derived from multiple sources. 
To facilitate shared learning, information needs to be widely accessible. - Wholesale unplanned system changes are usually detrimental and undesirable. Planned change must consider the resilience of the landscape. - People require the ability to participate effectively in the Landscape Approach process. Effective participation requires social, cultural and financial skills that can be adapted and applied to new issues as they are raised by the Landscape Approach process.
Problem: If there were no atmosphere, what color would the sky appear?
If there were no atmosphere, there would be no atoms to scatter any of the light from the sun downwards towards us, and the sky would appear black (as it appears on the moon!).
Problem: Use a dimensional argument to show that the proportion of the electric field of a light ray scattered by an atomic oscillator is proportional to λ⁻². Let E_i and E_s be the incident and scattered amplitudes, respectively, with E_s corresponding to a distance r from the scatterer. Assume E_s ∝ E_i and E_s ∝ 1/r, and also that the scattered amplitude is proportional to the volume of the scatterer.
The problem basically gives us that E_s = E_i·V·K/r, where V is the volume of the scatterer and K incorporates anything (including constants) that we might have left out. Clearly the quantity VK/r must be dimensionless, so K must have units of (length)⁻². The only other factor that the scattering might reasonably depend on is the wavelength. So K = f(λ). λ has units of length, so K ∝ λ⁻² and therefore E_s/E_i ∝ λ⁻².
Problem: Derive an expression for the time taken by light to travel through a substance consisting of m layers of material, each with thickness d_i and index of refraction n_i.
The time taken to travel through each layer is given by t_i = d_i/v_i, so the total time is a sum:
t = Σ_i d_i/v_i = (1/c) Σ_i n_i·d_i
(since v_i = c/n_i).
Problem: Propose a simple argument to show that for reflection from a planar surface, Fermat's principle demands that the incident and reflected rays share a common plane with the normal to the surface.
Take two points P and Q above the surface. The light moves from P, reflects off the surface and ends up at Q. A normal to the surface and a line joining P and Q define a plane called the plane of incidence. Choose some point R on the surface, but also in the plane of incidence (these define a line L); we will show a ray must reflect at some such point. Choose another point R' also on the surface, but not in the plane of incidence. For any such point it must be true that we can draw a line from R' to a point R perpendicular to L. The distance PR' is then given by √(PR² + RR'²), which is by definition greater than PR, unless R' lies on L (we have defined it not to, though). Similarly the distance R'Q > RQ, since R'Q = √(RQ² + RR'²). Thus for any point on the surface but not on the plane of incidence we can find a shorter path of the reflecting ray through a point in the plane of incidence. Thus, by Fermat's principle, it should take this shortest path (since we are dealing with a single medium, shortest distance corresponds to shortest time). Since R, P and Q are all in the plane of incidence, PR and RQ must lie in the plane of incidence, which also contains the normal to the surface (we defined it this way).
Problem: Derive the law of reflection θ_i = θ_r using Fermat's principle.
Using the result of the last problem we will assume P, the point from which the light comes, Q, the point to which the light goes, and R, the point from which the light reflects, are all coplanar. Once again, since we are considering a single medium, the path of least time will be the path of least distance. Let the x-axis correspond to the surface. Without loss of generality let both P and Q be a distance y₀ above the surface, with P on the y-axis and Q a distance x₀ away (at (x₀, y₀)). The point R can be anywhere along the x-axis. Call the coordinate of R (x₁, 0). The distance PR is given by √(y₀² + x₁²). The distance RQ is given by √(y₀² + (x₀ − x₁)²).
Thus the total distance PRQ is:
D = √(y₀² + x₁²) + √(y₀² + (x₀ − x₁)²)
Minimizing the distance by setting the derivative with respect to x₁ to zero:
dD/dx₁ = x₁(y₀² + x₁²)^(-1/2) + (x₁ − x₀)(y₀² + (x₀ − x₁)²)^(-1/2) = 0
This rearranges to x₁/√(y₀² + x₁²) = (x₀ − x₁)/√(y₀² + (x₀ − x₁)²), which is just sin θ_i = sin θ_r, so θ_i = θ_r.
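As a quick sanity check of this derivation, the sketch below (not part of the original solution) minimizes D(x₁) numerically for arbitrary values of y₀ and x₀ and confirms that the angles of incidence and reflection come out equal.

```python
import math

# Numerical check of the derivation above: minimize the path length
# D(x1) = sqrt(y0^2 + x1^2) + sqrt(y0^2 + (x0 - x1)^2) over the reflection
# point x1 and confirm that the two angles (measured from the normal) match.
y0, x0 = 2.0, 5.0          # arbitrary height of P and Q, and their separation

def D(x1):
    return math.sqrt(y0**2 + x1**2) + math.sqrt(y0**2 + (x0 - x1)**2)

# crude minimization by scanning candidate reflection points along the surface
best_x1 = min((i * x0 / 100000 for i in range(100001)), key=D)

theta_i = math.degrees(math.atan2(best_x1, y0))        # incident angle
theta_r = math.degrees(math.atan2(x0 - best_x1, y0))   # reflected angle
print(best_x1, theta_i, theta_r)   # best_x1 ≈ x0/2, and theta_i ≈ theta_r
```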
Students learn about literary elements, such as plot and characters, across a number of literary genres. They also do some creative writing. Literary Genres unit contains 11 learning experiences. Learning Experiences (Lessons) in Literary Genres Each learning experience takes about 45 minutes to teach in the device-enabled classroom. Students discuss how they guess what a story will be about. Then, they read the title and first sentences of a historical fictional short story to practice those skills. Next, they read the entire story and look back at their questions and confirm or correct predictions. Finally, they create or find an illustration for the story. What It’s About: Plot and Theme Students discuss how they can find theme in a story. Then, they read “The Necklace,” by Guy de Maupassant, and examine its multiple themes. Next, they compare themes in “The Necklace” and in Edna St. Vincent Millay’s poem “Sorrow.” Finally, they choose a work that they know and discuss its themes. Different Genres of Literature Students begin by sharing kinds of stories they have enjoyed. Then, they learn seven major genres of fiction and identify passages exemplifying each. Next, they form small groups to report on specific works in assigned genres. Finally, they elaborate on what genre of fiction they would most like to write. Students read Jack’s London’s story “To Build a Fire” and note the character’s actions. Then, they explain the outcome of the connection between the character’s motivations and his behaviors. Finally, they choose a different short story and apply the same analytical process to it. Analyzing Plot Elements Students read a non-linear story and analyze the elements of its plot. Next, they find flashback in the story. Finally, they read a linear story and compare its plot to that of the non-linear example. Analyzing the Setting Students identify the setting of a photo of a county fair at night. Then, they read and respond to Nathaniel Hawthorne’s short story “The Minister’s Black Veil.” Next, they explain how setting relates to characters’ values and beliefs. Finally, they apply what they have learned to a story they know and like. Point of View Students look at a photo that prompts them to think about points of view. Then, they read a passage that uses multiple points of view and identify those points of view. Next, they compare and contrast advantages and disadvantages of multiple and single points of view. Finally, they work in small groups to write a brief narrative using multiple points of view. Students view a cartoon and identify the humor in it, which stems from irony. Then, they learn about the three types of irony. Next, they read a classic short story and explain examples of irony in it. Finally, they brainstorm and write scenes that use irony. Students respond to a shape poem. Then they explore in depth the use of graphical devices in poetry. Next, they study figurative language and explain an extended metaphor. Finally, they write a poem using extended metaphor. Drama: Enacting Literature Students begin by showing basic knowledge of what a theater is. Then, they explore major elements of drama, including the structure of acts and scenes. Next, they read an adapted scene from a classic play and explain it in terms of what they have learned. Finally, they write new dialogue and stage directions to extend the scene. Book Report (Fiction) Students learn the elements of writing a book report. Then, they choose a book to read and explain why they chose it. 
Next, they read the book and write their book report. Finally, they rate the book and create a new cover for it.
View lab report - baking soda and vinegar reaction, introduction and theory lab from chemistry sch3u7 at bayview secondary school design lab abstract: in this experiment, baking soda and vinegar. Law of conservation of matter lab: teacher notes 1 describe what happens when the vinegar was poured into the cup of baking soda answers may vary, but students should mention release of a gas. Introduction in this lab, we mixed together the reactants, 005 moles of baking soda and some vinegar into a flask the products were the. Documents similar to experiment baking ssoda part l accuracy and precision - lab report uploaded by arianna mae norem quality control baking soda lab report. Research question: how will the different concentrations of vinegar to water affect the overall reaction with the baking soda hypothesis: if the concentration is higher, the experiment will be more reactive because more surface area is being covered, causing more collision between the molecules to happen. Stoichiometry lab answer key student vinegar g 15 a known amount of vinegar and baking soda to read the stoichiometry lab report directions and. Stoichiometry and baking soda (nahco 3) purposes: 1 calculate theoretical mass of nacl based on a known mass of nahco 3 baking soda lab, percent yield. Add water to test tube and a pinch of baking soda count the bubbles to measure the rate of photosynthesis name lab report include the. Baking soda and vinegar investigations the complete investigating chemistry though inquiry lab manual includes 25 inquiry-based 02 baking soda vinegardoc. Chemistry - limiting reactant lab experiment using baking soda and vinegar - this report aimed to replicate stroop's (1935) experiment. Experiment 8: chemical moles: converting baking soda to table salt what is the purpose of this lab we want to develop a model that shows in a simple way the relationship between the. Make sure the baking soda goes to the bottom of the balloon post lab calculations: answer the following questions and complete the table below [7 marks] 1. View lab report - exp15 formal lab report from chem chem 1220 at university of utah analysis of baking soda: experiment 15 alexa ellis cherame lindley lab section #004 09/29/15 introduction in our. Tear a paper towel into a square that measures about 5 inches by 5 inches put 1 1/2 tablespoons of baking soda in the center of the square, then fold the square as shown in the picture, with the baking soda inside. Decomposition of baking soda each person would be required to submit a lab report containing the items listed below in separate sections with the following. Decomposition of baking soda mass of baking soda and its decomposition products in order to decide what chemistry is taking laboratory report. I believe that the smaller amounts of baking soda such as the five grams and ten grams will react slower than the larger amounts i think this because the larger amounts of sodium bicarbonate will completely “absorb” the. Laboratory report december 6, 2010 title: combining vinegar and baking soda statement of problem: we wanted to observe what kind of reaction would take place when combining white vinegar and sodium bicarbonate (baking soda. An inquiry-based lab investigation from energy foundations for high school chemistry skip navigation energy foundations for high school chemistry menu baking. Baking soda/vinegar report if we change the amount of baking soda used in our mixture, clean up lab area proceed with calculations. 
Any baking soda and vinegar experiment i've ever done with the kids has always been a success adding color to the experiment makes it that much more fun. This is a simple experiment where we mix baking soda and vinegar together we weigh it before and after mixing the ingredients to help show what happens when. Sweets lovers and not, rejoice here comes your new favorite- cranberry-orange cookies these cookies are so heavenly wonderful that couple of years ago i’ve even considered to start my own baking business. Stoichiometry: baking soda and vinegar reactions teacher version in this lab, students will examine the chemical reaction between baking soda and vinegar, and. Have you ever seen the diet coke and mentos experiment that is all over the internet and wondered what makes the reaction work you might think that there is some ingredient in a mentos candy that causes a chemical reaction with the soda pop, like the way baking soda reacts with vinegar but the. Experiment 15: quality control for the athenium baking soda company lab report katrina le kaitlynn jackson lab section 201 wednesday, january 25, 2017. Combining an acid with a base typically results in water and salt in some cases, these reactions release gas and heat as well mixing baking soda, or nahco3, with hydrochloric acid, or hcl, results in table salt, nacl. Stoichiometry: baking soda and vinegar reactions student advanced version in this lab, students will examine the chemical reaction between baking soda and vinegar, and. Report abuse transcript of baking soda and vinegar lab baking soda and vinegar lab purpose put the baking soda into the measuring cup 5. Link to the lab report: baking soda is awesome for cleaning 10 cleaning uses for baking soda (clean my space) - duration: 4:26.
JPG, GIF, TIFF, PNG, BMP. What are they, and how do you choose? These and many other file types are used to encode digital images. The choices are simpler than you might think.
Part of the reason for the plethora of file types is the need for compression. Image files can be quite large, and larger file types mean more disk usage and slower downloads. Compression is a term used to describe ways of cutting the size of the file. Compression schemes can be lossy or lossless.
Another reason for the many file types is that images differ in the number of colours they contain. If an image has few colours, a file type can be designed to exploit this as a way of reducing file size.
You will often hear the terms "lossy" and "lossless" compression. A lossless compression algorithm discards no information. It looks for more efficient ways to represent an image, while making no compromises in accuracy. In contrast, lossy algorithms accept some degradation in the image in order to achieve smaller file size. A lossless algorithm might, for example, look for a recurring pattern in the file, and replace each occurrence with a short abbreviation, thereby cutting the file size. In contrast, a lossy algorithm might store colour information at a lower resolution than the image itself, since the eye is not so sensitive to changes in colour over a small distance.
Images start with differing numbers of colours in them. The simplest images may contain only two colours, such as black and white, and will need only 1 bit to represent each pixel. Many early PC video cards would support only 16 fixed colours. Later cards would display 256 simultaneously, any of which could be chosen from a pool of 2²⁴, or 16 million, colours. New cards devote 24 bits to each pixel, and are therefore capable of displaying 2²⁴, or 16 million, colours without restriction. A few display even more. Since the eye has trouble distinguishing between similar colours, 24 bit or 16 million colours is often called TrueColour.
TIFF is, in principle, a very flexible format that can be lossless or lossy. The details of the image storage algorithm are included as part of the file. In practice, TIFF is used almost exclusively as a lossless image storage format that uses no compression at all. Most graphics programs that use TIFF do not use compression. Consequently, file sizes are quite big. (Sometimes a lossless compression algorithm called LZW is used, but it is not universally supported.)
This is usually the best quality output from a digital camera. Digital cameras often offer around three JPG quality settings plus TIFF. Since JPG always means at least some loss of quality, TIFF means better quality. However, the file size is huge compared to even the best JPG setting, and the advantages may not be noticeable. A more important use of TIFF is as the working storage format as you edit and manipulate digital images. You do not want to go through several load, edit, save cycles with JPG storage, as the degradation accumulates with each new save. One or two JPG saves at high quality may not be noticeable, but the tenth certainly will be. TIFF is lossless, so there is no degradation associated with saving a TIFF file. Do NOT use TIFF for web images. They produce big files, and more importantly, most web browsers will not display TIFFs.
JPG is optimised for photographs and similar continuous tone images that contain many, many colours. It can achieve astounding compression ratios even while maintaining very high image quality. GIF compression is unkind to such images.
JPG works by analysing images and discarding kinds of information that the eye is least likely to notice. It stores information as 24 bit colour. Important: the degree of compression of JPG is adjustable. At moderate compression levels of photographic images, it is very difficult for the eye to discern any difference from the original, even at extreme magnification. Compression factors of more than 20 are often quite acceptable. Better graphics programs, such as Paint Shop Pro and Photoshop, allow you to view the image quality and file size as a function of compression level, so that you can conveniently choose the balance between quality and file size. This is the format of choice for nearly all photographs on the web. You can achieve excellent quality even at rather high compression settings. I also use JPG as the ultimate format for all my digital photographs. If I edit a photo, I will use my software's proprietary format until finished, and then save the result as a JPG. Digital cameras save in a JPG format by default. Switching to TIFF or RAW improves quality in principle, but the difference is difficult to see. Shooting in TIFF has two disadvantages compared to JPG: fewer photos per memory card, and a longer wait between photographs as the image transfers to the card. Never use JPG for line art. On images such as these with areas of uniform colour with sharp edges, JPG does a poor job. These are tasks for which GIF and PNG are well suited. |PNG is also a lossless storage format. However, in contrast with common TIFF usage, it looks for patterns in the image that it can use to compress file size. The compression is exactly reversible, so the image is recovered exactly. PNG is of principal value in two applications: PNG will eventually replace GIF, but GIF is still more widely used on the web, since even old web browsers support it. |GIF creates a table of up to 256 colours from a pool of 16 million. If the image has fewer than 256 colours, GIF can render the image exactly. When the image contains many colours, software that creates the GIF uses any of several algorithms to approximate the colours in the image with the limited palette of 256 colours available. Better algorithms search the image to find an optimum set of 256 colours. Sometimes GIF uses the nearest colour to represent each pixel, and sometimes it uses "error diffusion" to adjust the colour of nearby pixels to correct for the error in each pixel. GIF achieves compression in two ways. First, it reduces the number of colours of colour-rich images, thereby reducing the number of bits needed per pixel, as just described. Second, it replaces commonly occurring patterns (especially large areas of uniform colour) with a short abbreviation: instead of storing "white, white, white, white, white," it stores "5 white." Thus, GIF is "lossless" only for images with 256 colours or less. For a rich, true colour image, GIF may "lose" 99.998% of the colours. If your image has fewer than 256 colours and contains large areas of uniform colour, GIF is your choice. The files will be small yet perfect. Here is an example of an image well-suited for GIF: Do NOT use GIF for photographic images, since it can contain only 256 colours per image. |RAW is an image output option available on some digital cameras. Though lossless, it is a factor of three of four smaller than TIFF files of the same image. The disadvantage is that there is a different RAW format for each manufacturer, and so you must use the manufacturer's software to view the images. 
It is reasonable to use this to store images in the camera, but be sure to convert to TIFF, PNG, or JPG immediately after transferring the image to your PC. Use RAW only for in-camera storage, and convert to TIFF, PNG, or JPG as soon as you transfer to your PC. You do not want your image archives to be in a proprietary format. Will you be able to read a RAW file in five years? JPG is the format most likely to be readable in 50 years.
BMP is an uncompressed proprietary format invented by Microsoft. There is really no reason to ever use this format.
PSD, PSP, etc., are proprietary formats used by graphics programs. Photoshop's files have the PSD extension, while Paint Shop Pro files use PSP. These are the preferred working formats as you edit images in the software, because only the proprietary formats retain all the editing power of the programs. These packages use layers, for example, to build complex images, and layer information may be lost in the non-proprietary formats such as TIFF and JPG. However, be sure to save your end result as a standard TIFF or JPG, or you may not be able to view it in a few years when your software has changed.
ZIP is a de facto standard for compressing all sorts of computer files, both images and programs. It works by creating a separate archive file, usually called a ZIP file, and then inserting other files into the archive. As it is used for all types of files, it is a lossless compression method, but you usually need a separate program to handle the archive files (WinZip being the most common, although there are others). Its best use is to collect a whole series of files into one compressed file; it is also possible to create an archive that can 'span' several disks, so it can be used to copy large files over several floppy disks.
Currently, GIF and JPG are the formats used for nearly all web images. PNG is supported by most of the latest generation browsers. TIFF is not widely supported by web browsers, and should be avoided for web use. PNG does everything GIF does, and better, so expect to see PNG replace GIF in the future. PNG will not replace JPG, since JPG is capable of much greater compression of photographic images, even when set for quite minimal loss of quality.
Beware of the double compression trap. Once a file is compressed it is usually impossible to compress it further; in fact, trying can make the file larger. This is especially common with lossless compression. It does depend on the compression techniques used, but the classic problem is with zipping GIF files, as the GIF needs to create the table of 256 colours and then map all the colours into that 256. ZIP compression again uses a header which takes storage, but will be unable to compress the body enough to save the space taken up by the header.
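If you want to see the lossless-versus-lossy trade-off for yourself, the following sketch uses the Pillow imaging library (an assumption — any image library with an adjustable JPG quality setting would do) to save the same image as PNG and as JPG at two quality settings, then compare the resulting file sizes. Exact sizes will vary with the image content.

```python
# A small demonstration of lossless vs. lossy saving with Pillow
# (pip install Pillow). File names and values are illustrative.
import os
from PIL import Image

# Build a small continuous-tone test image (a colour gradient).
img = Image.new("RGB", (256, 256))
for x in range(256):
    for y in range(256):
        img.putpixel((x, y), (x, y, (x + y) // 2))

img.save("test.png")                      # lossless
img.save("test_q90.jpg", quality=90)      # mild JPG compression
img.save("test_q30.jpg", quality=30)      # aggressive JPG compression

for name in ("test.png", "test_q90.jpg", "test_q30.jpg"):
    print(name, os.path.getsize(name), "bytes")

# Re-opening the PNG gives back every pixel unchanged; the JPGs do not.
print(list(Image.open("test.png").getdata()) == list(img.getdata()))   # True
```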
This situation would change. Seemingly out of nowhere, and in a very short period of time, the federal courts transformed the concept of civil rights, taking it in a new and expansive direction almost impossible to predict a mere decade before. Reinterpreting a mix of government laws, regulations and past judicial orders, the courts, along with other branches of the federal government, began to reallocate social and economic resources such as access to education, jobs, political power and housing away from the majority toward the social margins. By 1974, a system of government-ordered, race- and gender-based, redistributive remedies to the problems of the past was in place. The years immediately following saw a maturation of this system. The result transformed American society and politics as group affiliation, rather than individual worth, became the defining standard in public life. One should not underestimate the impact of this shift in public policy after 1968. While the civil rights movement's traditional dream of a color-blind Constitution had often been just that -- a dream -- the formal emphasis prior to 1968 had been on protecting individual rights through the medium of a generally status-blind access to law. The goal was the implementation of equality through the removal of race as an issue of public consideration, and most civil rights laws and decisions were formulated -- at least technically -- to achieve that end.

Zelden, Charles L. "From Rights to Resources: The Southern Federal District Courts and the Transformation of Civil Rights in Education, 1968-1974," Akron Law Review: Vol. 32, Article 2. Available at: https://ideaexchange.uakron.edu/akronlawreview/vol32/iss3/2
A battery cell is fundamental to any battery pack. You may wonder: what exactly is a battery cell? A battery cell is the basic "building block" of a battery; within the battery it is the power source. A cell is a single energy- or charge-storing unit within a pack of cells that forms the battery. Each cell has a voltage and capacity rating that is combined with the other cells to form the overall battery voltage and capacity rating. We will explore different types of batteries and the cell packs within them.

In reference to laptop batteries, there are usually two types available for your laptop: the standard battery and an extended battery. The extended battery will provide you with more run time. In order to achieve the higher capacity, additional cells must be added to the battery pack. To accommodate the additional cells, the battery will either protrude from the bottom of your laptop (setting your laptop at a slight incline on a flat surface) or protrude from the back of the laptop (pictured). We have additional resources available for information on the Anatomy of a Laptop Battery, How to Choose a Laptop Battery (which explains in detail the differences in battery capacities and cells) and more on our Laptop Battery Guides page. Pictured right are the cells inside a laptop battery. The anatomy of a laptop battery explains the inside and how the cells are connected to achieve the voltage and capacity needed to power the laptop.

In our Battery Times article, Battery Hacks, you can see videos on "batteries within batteries." For example, there are 6 AAAA batteries within a 9V battery. Each AAAA battery is 1.5V; the 6 AAAA batteries (pictured left) connected in series add up to 9V. Please note that we never recommend opening battery packs or cells. Injury, fire or damage to the battery may occur. These articles are for informational purposes.

Rechargeable power tool batteries are typically a group of individual cells connected together to achieve the appropriate voltage. Depending on the type of battery (NiMH, NiCD or Li-Ion), the voltage of the individual cells will vary. Pictured is an example of the inside of a power tool battery. The battery chemistry will impact the performance of your power tool, so before selecting your power tool, ask yourself "how am I going to use this tool?"

NiCD battery chemistry for power tools is tough, sturdy and inexpensive. This type is good for smaller jobs, as it does not have as much power or capacity as NiMH and Li-Ion. As it tends to be heavier than NiMH or Li-Ion batteries, it may be uncomfortable to use in one long sitting.

NiMH battery chemistry for power tools generally has a higher capacity than NiCD, which means you usually get a longer run time on a single charge. It is suited to moderate to heavy jobs; these batteries tend to be higher in price.

Li-Ion battery chemistry for power tools is the lightest compared to NiCD and NiMH. This type of power tool battery recharges quickly and has a high capacity. The Li-Ion power tool battery can be designed in almost any shape, which allows for better tool balance. This battery is usually the most expensive, and it is great for moderate to heavy jobs.

If you take out your cordless phone battery, you can see the battery cells basically shrink-wrapped in a green or colored plastic casing. There are different cell sizes depending on the type or model of cordless phone. Pictured is an example of three AA NiMH rechargeable batteries.
Each battery is 1.2V; connected in series, the three cells add up to 3.6V, which is the most common voltage for cordless phone batteries. Once the batteries are soldered together, leads are attached to the battery and it is then wrapped to look like the assembled cordless phone battery, also pictured.

So you can see how important cell quantity is in most battery packs. It can affect the overall performance, longevity and power of the device. You can most certainly configure a battery in almost any shape to accommodate the device it is being used in. These are just a few examples; the possibilities are endless. Again, we never recommend opening batteries or cells, or assembling battery packs on your own; it is best to leave that to a battery expert.
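The arithmetic behind these packs is simple enough to sketch in a few lines of Python (an illustration of mine, not AtBatt's method, and the capacity figures are made up): cells wired in series add their voltages, while strings wired in parallel add their capacities.

```python
# Illustrative sketch: rating of a battery pack built from identical cells.
# Series connections add voltage; parallel strings add capacity (mAh).
def pack_rating(cell_voltage, cell_capacity_mah, series, parallel=1):
    """Return (pack voltage, pack capacity in mAh) for a series-by-parallel layout."""
    return cell_voltage * series, cell_capacity_mah * parallel

# Six 1.5 V AAAA cells in series, as inside a 9 V battery
# (600 mAh is a made-up capacity, purely for illustration).
v, cap = pack_rating(1.5, 600, series=6)
print(f"{v:.1f} V, {cap} mAh")    # 9.0 V, 600 mAh

# Three 1.2 V NiMH AA cells in series, as in a 3.6 V cordless phone pack
# (2000 mAh is again a made-up figure).
v, cap = pack_rating(1.2, 2000, series=3)
print(f"{v:.1f} V, {cap} mAh")    # 3.6 V, 2000 mAh
```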
morpheme was our Word of the Day on 09/25/2015.

Example of morpheme in a sentence: The word "pins" contains two morphemes: "pin" and the plural suffix "-s." A recent example from the web: "Each of the, say, two hundred and fifty passengers on each flight hanging unwittingly on each morpheme."

Did You Know? Morphemes are the indivisible basic units of language, much like the atoms which physicists once assumed were the indivisible units of matter. English speakers borrowed morpheme from French morphème, which was itself created from the Greek root morphē, meaning "form." The French borrowed -ème from their word phonème, which, like English phoneme, means "the smallest unit of speech that can be used to make one word different from another word." The French suffix and its English equivalent -eme are used to create words that refer to distinctive units of language structure. Words formed from -eme include lexeme ("a meaningful linguistic unit that is an item in the vocabulary of a language"), grapheme ("a unit of a writing system"), and toneme ("a unit of intonation in a language in which variations in tone distinguish meaning").

Origin and etymology of morpheme: first known use 1896.
I allowed myself to become rather distracted by my second years last week as the class was finishing. They were talking about an episode of Horizon that discussed General Relativity and theories of Quantum Gravity. What followed was a free-ranging discussion on the nature of infinity, mentioned briefly in the program. But we also talked about the nature of a black hole and its size. It's surprisingly easy to calculate this with reasonably elementary maths and physics. I first did this when I was about 17 (how very sad) using classical physics equations, and was astounded to discover that even so, the answer was correct (I checked it in the Encyclopedia Britannica in the library at the time).

Here is Newton's universal law of gravitation between two bodies. It describes the force F between two bodies that are r metres apart:

F = GMm / r²

Let's take the one with mass M to be the black hole. G is a small (though mysterious) constant. You can work out the energy needed to escape the black hole using the old stand-by equation that work done is the force times distance traveled against that force, but that only works with a constant force; this force will change as we move, so we need to use the big daddy of multiplication, integration. Specifically, we will work out the energy needed to escape from the event horizon, the surface at which the escape velocity is the speed of light, c (299,792,458 m/s). So the energy will be given by moving my little mass m from the radius of the event horizon, let's call it R, out to infinity, to show we have broken away:

E = ∫_R^∞ (GMm / r²) dr = GMm / R

Now, this should just balance the kinetic energy possessed by my little mass m traveling at the speed of light:

½mc² = GMm / R

and if we rearrange for R we get

R = 2GM / c²

In other words, the radius of the event horizon, the bit we think of as the "hole", is dependent entirely upon the mass of the object. Please note this is based on a very simple model of a non-rotating black hole. Nevertheless we can do some nice calculations from this. The Sun would have to be compressed from its radius of about 700 thousand kilometres into a radius of just under 3 kilometres. The Earth's mass would need to be compressed so much to form a black hole that you would need to squeeze its radius of over 6 thousand kilometres into a radius of around 9 millimetres. That's how dense we're talking here.

We can also consider the radius as described by the contained energy of the black hole, since we know that

E = Mc²

and so, replacing our M in the above equation, we get

R = 2GE / c⁴

Wow. Remember c is a big number; taking it to the power of four is a lot.

So why do this? There's been a lot of speculation about the possibility that the Large Hadron Collider (LHC) could create a black hole. This has caused a fair degree of panic, and at least one suicide. It's a physicist's dream that a black hole might be created. I just looked up the "high" energies used by the LHC, and high is a relative term. It plans to bash protons together with 7 TeV (tera electron volts) of energy each, or lead nuclei with 574 TeV each; let's take the latter. Just how much energy is that in a collision? Well, doubling and converting to good old Joules gives 184 microjoules. That's really not a lot, 184 millionths of a Joule. A 100 W light bulb uses 100 Joules each and every second. How big would the radius of such a black hole that might form be, from that energy? Check the maths, because so far I haven't, but I get

R = 2GE / c⁴ ≈ 3 × 10⁻⁴⁸ m,
which is 0.000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,003 metres wide (I hope; I didn't double-check the zeros), which is probably not the planet swallower of people's imagination (but this is fun). The problem is that people think this tiddler will grow very rapidly, but that's because they don't know about Hawking Radiation. This is an interesting quantum effect that means black holes aren't really black; they do emit a little radiation. Large holes would gather surrounding matter faster than they lose it through their low radiation rate, but small holes have the opposite situation: they radiate more rapidly. The maths for all that is pretty complex, and you need to make lots of assumptions, but the time taken for our little black hole to "evaporate" is (hurriedly calculated by me) a tiny, tiny fraction of a second. Even allowing for the ambient temperature and some fall-in of matter, this little baby is not in equilibrium; it's not gaining mass fast enough to accumulate more. It's safe*.

* All disclaimers apply. No liability is assumed for foolish unvalidated experiments done by you or other members of your species. Do not attempt to create black holes in your garage. Any subsequent destruction of your civilization, planet or solar system is at your own risk, and any "EPIC FAIL" signs placed by aliens on the remains are not due to me or my calculation. No calculations have been done on the matter of strange matter either. If you break the planet, system, galaxy or universe, you own all the parts.
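For anyone who does want to check the maths, here is a short Python sketch (mine, not the original author's) that plugs standard rounded values of G, c, the electron-volt conversion and the textbook masses of the Sun and Earth into R = 2GM/c² and R = 2GE/c⁴.

```python
# Quick numerical check of the figures in the post (constants rounded).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458        # speed of light, m/s
EV = 1.602e-19         # joules per electron volt

def radius_from_mass(mass_kg):
    """Event horizon radius R = 2GM / c^2 for a non-rotating mass."""
    return 2 * G * mass_kg / c**2

def radius_from_energy(energy_j):
    """The same radius written in terms of energy, R = 2GE / c^4."""
    return 2 * G * energy_j / c**4

print(radius_from_mass(1.989e30))   # Sun (~2.0e30 kg): about 2.95e3 m, just under 3 km
print(radius_from_mass(5.972e24))   # Earth (~6.0e24 kg): about 8.9e-3 m, roughly 9 mm

collision_energy = 2 * 574e12 * EV  # two lead nuclei at 574 TeV each, head on
print(collision_energy)             # about 1.84e-4 J, i.e. 184 microjoules
print(radius_from_energy(collision_energy))  # about 3e-48 m
```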