heic1314 - Science Release

Hubble finds source of Magellanic Stream

Astronomers explore origin of gas ribbon wrapped around our galaxy

8 August 2013

Astronomers using the NASA/ESA Hubble Space Telescope have solved the 40-year-old mystery of the origin of the Magellanic Stream, a long ribbon of gas stretching nearly halfway around the Milky Way. New Hubble observations reveal that most of this stream was stripped from the Small Magellanic Cloud some two billion years ago, with a smaller portion originating more recently from its larger neighbour.

The Magellanic Clouds, two dwarf galaxies orbiting our galaxy, are at the head of a huge gaseous filament known as the Magellanic Stream. Since the Stream's discovery in the early 1970s, astronomers have wondered whether this gas comes from one or both of the satellite galaxies. Now, new Hubble observations show that most of the gas was stripped from the Small Magellanic Cloud about two billion years ago — but surprisingly, a second region of the stream was formed more recently from the Large Magellanic Cloud.

A team of astronomers determined the source of the gas filament by using Hubble's Cosmic Origins Spectrograph (COS), along with observations from ESO's Very Large Telescope, to measure the abundances of heavy elements, such as oxygen and sulphur, at six locations along the Magellanic Stream. COS detected these elements from the way they absorb the ultraviolet light released by faraway quasars as it passes through the foreground Stream. Quasars are the brilliant cores of active galaxies.

The team found low abundances of oxygen and sulphur along most of the stream, matching the levels in the Small Magellanic Cloud about two billion years ago, when the gaseous ribbon was thought to have been formed. In a surprising twist, the team discovered a much higher level of sulphur in a region closer to the Magellanic Clouds. "We're finding a consistent amount of heavy elements in the stream until we get very close to the Magellanic Clouds, and then the heavy element levels go up," says Andrew Fox, a staff member supported by ESA at the Space Telescope Science Institute, USA, and lead author of one of two new papers reporting these results. "This inner region is very similar in composition to the Large Magellanic Cloud, suggesting it was ripped out of that galaxy more recently."

This discovery was unexpected; computer models of the Stream predicted that the gas came entirely out of the Small Magellanic Cloud, which has a weaker gravitational pull than its more massive cousin.

"As Earth's atmosphere absorbs ultraviolet light, it's hard to measure the amounts of these elements accurately, as you need to look in the ultraviolet part of the spectrum to see them," says Philipp Richter of the University of Potsdam, Germany, and lead author on the second of the two papers. "So you have to go to space. Only Hubble is capable of taking measurements like these."

All of the Milky Way's nearby satellite galaxies have lost most of their gas content — except the Magellanic Clouds. As they are more massive than these other satellites they can cling on to this gas, using it to form new stars. However, these Clouds are approaching the Milky Way and its halo of hot gas. As they drift closer to us, the pressure of this hot halo pushes their gas out into space. This process, together with the gravitational tug-of-war between the two Magellanic Clouds, is thought to have formed the Magellanic Stream.
"Exploring the origin of such a large stream of gas so close to the Milky Way is important," adds Fox. "We now know which of our famous neighbours, the Magellanic Clouds, created this gas ribbon, which may eventually fall onto our own galaxy and spark new star formation. It's an important step forward in figuring out how galaxies obtain gas and form new stars." The Magellanic Clouds can be used as a good testing ground for theories on how galaxies strip gas from one another and form new stars. This process seems episodic rather than smooth, without a continuous, slow stream of gas being stripped away from a small galaxy by a larger one. As both of the Magellanic Clouds are approaching our own galaxy, the Milky Way, they can be used to explore the dynamics of this process. Notes for editors The Hubble Space Telescope is a project of international cooperation between ESA and NASA. These results are presented in a set of two papers, both published in the 1 August issue of The Astrophysical Journal. The first of these papers is entitled "The COS/UVES absorption survey of the Magellanic Stream: I. One-tenth solar abundances along the body of the stream". The international team of astronomers in this study consists of A. J. Fox (STScI, USA; ESA), P. Richter (University of Potsdam; Leibniz Institute for Astrophysics, Potsdam, Germany), B. P. Wakker (University of Wisconsin-Madison, USA), N. Lehner (University of Notre Dame, USA), J. C. Howk (University of Notre Dame, USA), N. B. Bekhti (University of Bonn, Germany), J. Bland-Hawthorn (University of Sydney, Australia), S. Lucas (University College London, UK). The second of these papers is entitled "The COS/UVES absorption survey of the Magellanic Stream: II. Evidence for a complex enrichment history of the stream from the Fairall 9 sightline". The international team of astronomers in this study consists of P. Richter (University of Potsdam; Leibniz Institute for Astrophysics, Potsdam, Germany), A. J. Fox (STScI, USA; ESA), B. P. Wakker (University of Wisconsin-Madison, USA), N. Lehner (University of Notre Dame, USA), J. C. Howk (University of Notre Dame, USA), J. Bland-Hawthorn (University of Sydney, Australia), N. B. Bekhti (University of Bonn, Germany), C. Fechner (University of Potsdam, Germany). Image credit: David L. Nidever, et al., NRAO/AUI/NSF and Mellinger, LAB Survey, Parkes Observatory, Westerbork Observatory, and Arecibo Observatory. University of Potsdam Space Telescope Science Institute & ESA Garching bei München, Germany About the Release |Name:||Magellanic Stream, Milky Way| |Type:||• Milky Way| • X - Galaxies Images/Videos |Facility:||Hubble Space Telescope|
The word depression is used to describe a range of moods - from low spirits to a severe problem that interferes with everyday life. If you are experiencing severe or ‘clinical’ depression you are not just sad or upset. The experience of depression is an overwhelming feeling which can make you feel quite unable to cope, and hopeless about the future. If you are depressed your appetite may change and you may have difficulty sleeping or getting up. You may feel overwhelmed by guilt, and may even find yourself thinking about death or suicide. There is often an overlap between anxiety and depression, in that if you are depressed you may also become anxious or agitated.

Sometimes it is difficult to decide whether you are responding normally to difficult times, or have become clinically depressed. A rough guide in this situation is that if your low mood or loss of interest significantly interferes with your life (home, work, family, social activities), lasts for two weeks or more, or brings you to the point of thinking about suicide, then you may be experiencing clinical depression and you should seek some kind of help.

What Causes Depression?

Feeling sad, or what we call “depressed”, happens to all of us. The sensation usually passes after a while. However, a person with a depressive disorder - clinical depression - finds that his state interferes with his daily life. His normal functioning is undermined to such an extent that both he and those who care about him are affected by it. According to MediLexicon’s Medical Dictionary, depression is “a mental state or chronic mental disorder characterized by feelings of sadness, loneliness, despair, low self-esteem, and self-reproach; accompanying signs include psychomotor retardation (or less frequently agitation), withdrawal from social contact, and vegetative states such as loss of appetite and insomnia.”

What are the different forms of depression?

There are several forms of depression (depressive disorders). Major depressive disorder and dysthymic disorder are the most common.

- Major depressive disorder (major depression)
Major depressive disorder is also known as major depression. The patient suffers from a combination of symptoms that undermine his ability to sleep, study, work, eat, and enjoy activities he used to find pleasurable. Experts say that major depressive disorder can be very disabling, preventing the patient from functioning normally. Some people experience only one episode, while others have recurrences.

- Dysthymic disorder (dysthymia)
Dysthymic disorder is also known as dysthymia, or mild chronic depression. The patient will suffer symptoms for a long time, perhaps as long as a couple of years, and often longer. However, the symptoms are not as severe as in major depression, and the patient is not disabled by it. However, he may find it hard to function normally and feel well. Some people experience only one episode during their lifetime, while others may have recurrences. A person with dysthymia might also experience major depression, once, twice, or more often during his lifetime. Dysthymia can sometimes come with other symptoms. When they do, it is possible that other forms of depression are diagnosed.

- Psychotic depression
When severe depressive illness includes hallucinations, delusions, and/or withdrawing from reality, the patient may be diagnosed with psychotic depression.

- Postpartum depression (postnatal depression)
Postpartum depression is also known as postnatal depression or PND.
This is not to be confused with ‘baby blues’, which a mother may feel for a very short period after giving birth. If a mother develops a major depressive episode within a few weeks of giving birth it is most likely she has developed PND. Experts believe that about 10% to 15% of all women experience PND after giving birth. Sadly, many of them go undiagnosed and suffer for long periods without treatment and support.

- SAD (seasonal affective disorder)
SAD is much more common the further from the equator you go. In countries far from the equator the end of summer means the beginning of less sunlight and more dark hours. A person who develops a depressive illness during the winter months might have SAD. The symptoms go away during spring and/or summer. In Scandinavia, where winter can be very dark for many months, patients commonly undergo light therapy - they sit in front of a special light. Light therapy works for about half of all SAD patients. In addition to light therapy, some people may need antidepressants, psychotherapy, or both. Light therapy is becoming more popular in other northern countries, such as Canada and the United Kingdom.

- Bipolar disorder (manic-depressive illness)
Bipolar disorder is also known as manic-depressive illness. It used to be known as manic depression. It is not as common as major depression or dysthymia. A patient with bipolar disorder experiences moments of extreme highs and extreme lows. The extreme highs are known as manias.
Pathophysiology: Little is known about the route and the source of transmission of the virus. VZV is certainly transmissible through the airborne route and does not require close personal contact. The skin lesions are certainly full of infectious virus particles, whilst in contrast it is almost impossible to isolate virus from the upper respiratory tract. It is possible that aerial transmission originates from symptomless oral lesions.

Disease statistics: In terms of overall population incidence there have been quite dramatic changes in the past decade due to the introduction of the varicella vaccine (chicken pox vaccine). Before the vaccine was introduced, 83% of all 10–14 year olds in Australia were estimated to have contracted chicken pox at some stage of their lives, further increasing to 95.5% of all 40 year olds.

Treatment: Several studies indicate that antiviral medications decrease the duration of symptoms and the likelihood of postherpetic neuralgia, especially when initiated within 2 days of the onset of rash. In typical cases that involve individuals who are otherwise healthy, oral acyclovir may be prescribed. An important study by Kubeyinje (1997) suggested that the use of acyclovir in healthy young adults with zoster is not clearly justified, especially in situations of limited economic resources.

Research: DNA techniques have made it possible to diagnose "mild" cases, caused by VZV or HSV, in which the symptoms include fever, headache, and altered mental status. Mortality rates in treated patients are decreasing.
Why is sensory play such as finger-painting, play dough, sand, and mud important to a child’s development? When engaged in sensory play children use all of their senses. It promotes sensory integration, which is the ability of the body to integrate and process all of the information it receives via the sensory modalities of touch, taste, smell, hearing, and vision. As children pour, dump, build, scoop, and explore they are learning about spatial concepts (full, empty). They learn pre-math concepts along with language and vocabulary. Messy play can be calming to children. It is not just about making a mess and getting dirty; it is an essential component of learning that encourages exploration and discovery through play. You can easily create simple sensory (messy) activities for your preschooler:

- Make mud and sand pies
- Sift, pour, and stir sand, water, and dirt
- Drive toy trucks and cars through dirt, mud, and water
- Play with vinegar, baking soda and color (from Play Counts)
- Play in the water hose and with sprinklers this summer
- Play with play dough, including add-ins such as glitter, sequins, flavors, and scents, such as chocolate play dough cupcakes from NurtureStore.
- You can also put similar add-ins into your painting projects
- Jump into puddles after a rain
- Blow bubbles (Preschool Projects)
- Squeeze colored water from eye droppers and turkey basters
- Play with colored ice and mix the colors.
- Paint with not only paint but with water, glue, and shaving cream; use hands and fingers!
- Create sensory boxes using beans, rice, packing peanuts, coffee grounds, and just about any materials you feel are safe and interesting to your child.

Here are some edible options that both preschoolers and toddlers will enjoy. It’s not that we want a child to eat these recipes; they are just non-toxic options in case materials are placed in the mouth:

Home-made finger paint recipe from I Can Teach My Child (use food coloring)

Create a sensory box using giant pasta shells from Plain Vanilla Mom

If your child doesn’t like to get their hands dirty, play at their pace but encourage them to try the activity. You might begin by using brushes and utensils, then move to using fingers and hands.

Ick. The Mess

If you are like me, you struggle facing the mess and clean up. Sometimes that alone would deter me from having messy play. Try using a tray to place materials on, as this will help contain the mess and set boundaries. Another way to create a simple, but natural boundary is to use a vinyl table cloth to create a play space on the floor. I use one that is at least 80 inches in length, then fold it in half with the vinyl out. This keeps dirt, lint, and other goodies from sticking to the cloth back. I have found this to be very successful on home visits. It creates a play space that the children seem to naturally stay on. Of course it also helps contain materials and makes clean up easier. From the beginning, have your child help with clean up. It will likely be easier to do it yourself, but it is important to teach them this life skill and responsibility. Have a great time getting messy together!
A short introduction to the pi-calculus

This is a short introduction to the pi-calculus for readers unfamiliar with process algebra. It informally introduces the basic concepts of the pi-calculus and allows for getting a grasp of the formulas.

Agents and Names

The pi-calculus is based on the concepts of names and agents. A name is a collective term for things like links, pointers, references, identifiers and so on. These names can be used to synchronize actions between a community of concurrent agents. Each agent represents a process; a description of a timely and logical sequence of things to do. When one agent wants to talk to another agent they use a common name for communication (for the sake of simplicity, we omit one important thing right now).

The definitions given here consist of two agents, TIM and TOM (both are sketched in conventional notation below). The identifiers of agents are always written in uppercase letters to distinguish them from names, which are always written in lowercase letters. Both agents, TIM and TOM, define a process of how they can communicate. This definition is written behind the equals sign. As can be seen, both agents have the common name talk. Agent TIM has an overlined talk. This denotes that he is a sender on the name talk. A corresponding receiver is written without an overline, like the talk in agent TOM. The communication is always synchronous. This means that agent TIM blocks the execution until some other agent is ready to receive his message, and TOM blocks until someone has a message for him. The actual message is represented through another name, message. It is written in a different kind of bracket behind the name used for communication. Both agents can communicate and thereby "consume" their names talk. The next step is marked with a dot (.); it divides sequences of actions. The next action for agent TIM is 0. This symbol stands for actually ending the process right now. The agent TIM can't do anything else anymore. Agent TOM has another symbol after he received the message. It is a tau with a subscripted TOM. The symbol tau denotes an unobservable action; it is something that TOM does with the message that we cannot observe. Afterwards TOM also stops execution.

We give the message a bit more meaning and add a third agent that represents a printer. Yes, everything could be an agent, even a printer. The printer agent has its own process, like accepting a print job and printing it. To make the situation worse, only agent TIM has access to the printer. The access is represented by the knowledge of the name that is used for communication with the printer agent. The agents are defined as follows (we again omit one important thing): the agent TIM now sends the name print to agent TOM. TOM is now able to use the received name print as an outgoing communication channel to another agent PRINTER. This is a very important point! There exists no distinction between names for communication and names for the parameters of the communication. As stated, a name could be used for anything! TOM uses the name he received from TIM to send the name file along the name print to an agent named PRINTER. Forget for a minute about the exclamation mark at the beginning of PRINTER. The agent PRINTER then receives a name file on the name print and processes the file somehow inside of tau subscripted with PRINT. The file is just a name; however it could point to another agent that represents a data-structure. This goes a bit beyond this introduction; right now it is enough to know that the name file somehow represents a file.
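In conventional ASCII pi-calculus notation - writing 'x<y> for an output of y on an overlined name x, x(y) for the corresponding input, and tau_N for agent N's unobservable action - the two examples described above would read roughly as follows (a sketch based on the prose description, not the original formulas):

    TIM = 'talk<message>.0
    TOM = talk(message).tau_TOM.0

and, for the printer example:

    TIM     = 'talk<print>.0
    TOM     = talk(print).'print<file>.0
    PRINTER = !print(file).tau_PRINT.0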
The agent PRINTER represents a printer. But wouldn't it be nice if several agents could send their documents in parallel and the printer queues and prints them one after the other? We do not want to implement a queue in the agent PRINTER; we just would like to state that he can accept several documents in parallel. This is done through the exclamation mark at the beginning of the process definition of agent PRINTER. He can accept as many files on the name print as desired. Whenever PRINTER receives a new file it creates a copy of itself. This is called replication.

Choice, Parallelism, and all the Rest

Another important aspect in a process algebra is choice. When we want to specify that agent TIM could talk to TOM - or to another agent called TIL - we could write this as follows (sketches of these operators are collected at the end of this section): The summation operator (+) is used to denote the exclusive choice. Either the left or the right part of the operator is chosen for execution. So either the name talk subscripted with TOM or with TIL will be used for communication. The agent TIM could model the decision of which to choose explicitly with a match operator: The match operator [x=y] continues the execution if a name x equals another name y. Agent TIM tests a name x for equality, and if [x=true] evaluates to true he contacts agent TOM. If the expression [x=false] evaluates to true, he contacts agent TIL. It is also possible to send the message from agent TIM to both other agents, TOM and TIL, in parallel: The composition operator (|) executes the left and right part of the operator in parallel. Having introduced this one, we can hint that the TIM, TOM (and PRINTER) agents need to be placed in parallel to actually interact together (that's the important thing we left out earlier). So far we have seen definitions for agents that already contain names; but what if we want to dynamically create names inside an agent? This is done with the special operator v (the Greek letter nu). We define an agent that generates new names on request: The agent GENERATOR creates a new - and yet unknown - name x, where x is just a placeholder for a new unique name. The agent is able to communicate the new name via the name get. All the while, the generator agent replicates itself, so that it can generate a multiplicity of new names. What we have considered is only a very basic introduction to how the pi-calculus actually works. There are restrictions on how the operators can be combined, as well as much more semantics than explained right now.
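In the same ASCII notation, sketches of the remaining operators described in this section might look as follows (again reconstructed from the prose; talk_TOM and talk_TIL stand for the two subscripted talk names, and v stands for the restriction operator nu):

    TIM = 'talk_TOM<message>.0 + 'talk_TIL<message>.0                    (choice)
    TIM = [x=true]'talk_TOM<message>.0 + [x=false]'talk_TIL<message>.0   (match)
    TIM = 'talk_TOM<message>.0 | 'talk_TIL<message>.0                    (parallel)
    GENERATOR = !(v x)'get<x>.0                                          (new names)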
The description for this assignment is as follows: “Given a non-normally distributed population such as the bimodal population which is pictured in figure 6-8, discuss and explain how such a population can have a frequency distribution of sample x-bars as shown in figure 6-9. How does Figure 6-8 relate to Figure 6-9 and then how does figure 6-9 relate to 6-10? Explain what concept is being demonstrated. In short write an explanation of how we move from figure 6-8 to 6-9 to 6-10.”

This assignment was the second written response assignment for the Statistics II course (Quantitative Tools for Management) during the Spring 2007 semester at the University of Massachusetts at Amherst’s online program; I received a 5/5 for the below answer:

Looking at figure 6-8*, we see that the x values with the highest frequency are 10 and 18, the lower and upper limits of the x values, respectively. This population is not exactly symmetrical but is close to being so, as it closely resembles a “U” shape. Figure 6-9* shows the distribution of the sample mean of 3 x values chosen at random, repeated 3,000 times. Even though 10 and 18 are the most common x values in the population, there are only a few sample means at those values in the distribution in figure 6-9. In order to have a sample mean of 10 or 18, all three x values in a random sample would have to be all 10 or all 18. The probability of choosing three 10’s or three 18’s in a random sample is quite low, which is why the distribution for 10 and 18 is so low in figure 6-9. Moving to the middle of figure 6-9, we see a rise in the number of occurrences of the sample means ranging from 13 to 16. Again, this makes sense because there are many more ways a sample mean could be in the 13 to 16 range, and thus such a mean would be more commonly produced by a random sample. Since an increasing amount of sample means will lie in the middle of the range, the standard deviation will be lower than that of the total population, as a higher proportion of the values will be closer to each other; whereas a high proportion of the values for the population in figure 6-8 lie at the upper and lower limits of the range, thus increasing the probability that the deviation between any two randomly chosen values will be higher. Looking at figure 6-8, an eyeball estimate would lead me to say the median for this population lies somewhere between 13 and 15. Figure 6-9 is showing that for 3,000 random samples of size 3, it is more likely the sample mean will be close to the median than at the upper or lower limits [in this example, the median is equal to the mean of the total population; this is not always the case, and when the median and mean are not equal the middle (and highest point for a high sample size) of the distribution in figures 6-9 and 6-10 will approach the mean]. By increasing the sample size to 10, and thus increasing the reliability and accuracy of the results, figure 6-10* is showing that as more and more x values are included in the sample, the sample mean will approach the population mean, because the chance of picking ten x values that average out to be similar to the population mean is higher than in a sample of three. Since the likelihood of ten random values averaging out to the population mean is higher in figure 6-10, the population mean is the value most often represented in the 3,000 sample means.
Likewise, since the likelihood of the mean of ten random values being equal to or close to the upper or lower limits is low, these values are either not represented or much less so than the population mean. The principle behind figure 6-10 is that if 3,000 random samples were taken with a sample size close to or equal to the population size, the sample means would all come out close to or equal to the population mean. The proximity of the sample size to the population size determines the range of the distributions we see in figures 6-9 and 6-10, and increases (if the sample size is not close to the population size) or decreases (if the population size and sample size are close or equal) the deviation between values. If the sample size were close to or equal to the population size, the standard deviation would be close to or equal to zero (as most of the sample means would be equal to the population mean). The idea being shown through these three figures is the Central Limit Theorem, which in essence states that as a sample size increases, so does the resemblance of the distribution of the sample means to a normal distribution (e.g. the shape shown in figure 6-10).
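The behaviour described across the three figures is easy to reproduce numerically. Below is a minimal sketch in Python; the U-shaped population is made up for illustration (the actual data behind figure 6-8 is not given in the text), but it has the same 10-18 range and bimodal shape:

    import random
    import statistics

    # Hypothetical U-shaped population on 10..18: values pile up at the
    # limits 10 and 18 (as in figure 6-8), with few values in the middle.
    population = ([10] * 30 + [11] * 10 + [12] * 5 + [13] * 3 + [14] * 2 +
                  [15] * 3 + [16] * 5 + [17] * 10 + [18] * 30)

    print("population mean:", statistics.mean(population))

    for n in (3, 10):  # the sample sizes behind figures 6-9 and 6-10
        means = [statistics.mean(random.sample(population, n)) for _ in range(3000)]
        print(f"n={n:2d}: mean of sample means = {statistics.mean(means):.2f}, "
              f"std dev of sample means = {statistics.stdev(means):.2f}")

Both runs centre near the population mean of 14, and the spread of the sample means shrinks as the sample size grows - the Central Limit Theorem behaviour the figures illustrate.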
Read, Read, Read to Succeed

Growing Independence and Fluency Lesson

By: Rebekah Beason

Rationale: Good readers read fluently. When a person reads fluently, they automatically identify words as they read. A fluent reader reads with ease and expression. This is the ultimate goal of reading instruction. One way to grow to be a fluent reader is to do repeated readings. By reading and re-reading a text, unfamiliar words become easier to read, thus making your reading quicker. This lesson is made to further develop students to be fluent readers by improving their speed of reading and teaching them to change their tone as well as expression as they read.

Materials:
- A Speed Record Sheet for every student
- A Fluency Literacy Rubric for every student
- Pencils for every student
- Book: a copy of Silly Tilly
- Book: a copy of "The Crash in the Shed" for every student

1. Give an explanation of fluency and why good readers read fluently. Say: "Good morning friends!! To become great readers, we have to learn to read fluently. This is done when you read quickly and correctly with no pausing. When you read fluently, you also show feelings; your voice changes and you show feeling as you read. Today, to become fluent readers we are going to read and re-read the book "The Crash in the Shed." When we read and re-read a passage it helps us to become familiar with words we did not know. It is completely ok if you come to a word that you do not know; just use your cover-up critter or crosscheck, which means that when you come to an unknown word you read the rest of the sentence and go back to the word to figure out the unknown word. The next time you read the passage I know you will remember that new word."

2. Say: "Before you do any reading, I am going to model what reading fluently looks and sounds like by reading Silly Tilly. While I read, listen for changes in my voice and my speed. This book is about a silly goose that gets into some crazy situations on the farm. Let's find out what the other animals think about Tilly's pranks."

3. Now, it is the students' turn to practice reading fluently. Then, I will give a brief booktalk on "The Crash in the Shed": "In this story, Ben and Jess can't make up their minds whether to fish or collect shells. Suddenly they hear a crash in the shed. Sounds like trouble! You will have to read to find out what happens."

4. Before dividing into groups, pass out the books and model how to use the Speed Record Sheet and Fluency Literacy Rubric. Divide the students into partner groups. In these partner groups, each child should get a Speed Record Sheet and a Fluency Literacy Rubric. Model how the students will take turns being the "reader" and how the "reader" will read the whole text. Then, explain that they will read again to see positive changes in their reading, and that the other partner will be the "recorder", whose job is to record the number of words that were just read on the Speed Record Sheet. This process that I demonstrated should be repeated two more times. At the end of the third time, the "recorder" should fill out the Fluency Literacy Rubric by shading the circle that best describes the reading they just heard. When this is done, the "reader" will become the "recorder" and the "recorder" will become the "reader". Say: "Remember we are becoming good readers by reading fluently, so you should read more words each time and show emotion by changing your voice with each character of the story as you read."

5.
To end our lesson, I will assess the students by asking them to come to my desk one by one to have them read as much of the story as they can. After reading, I will ask them to recall everything they remember from the story to assess their comprehension of the text. Another assessment would be for the students to summarize Silly Tilly in their own words and share it with a partner.

References:
Murray, Geri. "The Crash in the Shed." Reading Genie: http://www.auburn.edu/academic/education/reading_genie/
Spinelli, Eileen. Silly Tilly. Marshall Cavendish Corporation, 2009.
Vanhooser, Holly. http://www.auburn.edu/academic/education/reading_genie/invitations/vanhoosergf.htm
Bushfires not only have the power to destroy crops, native bush, livestock and homes, they can also affect the water quality in our creeks and rivers. These impacts can range from short-term changes noticed immediately after the fire to long-term impacts that can last for many years. Fire can result in an increase in nutrients and sediment in rivers. Nutrients can be released from sediment or debris from burnt vegetation, or come from ash and smoke that can be carried to the water by wind or through runoff following rain. The volume of runoff from a catchment can increase after a fire and this can lead to increases in the amount of sediment entering the river. This excess runoff has the capacity to change the channel structure and flow, through bank erosion and sediment deposition. In some cases this alteration may be beneficial, such as providing additional habitats or refuge pools in the river, particularly for fish. The EPA monitored the recovery of the Tod River on the Eyre Peninsula for one year following the bushfire on 11 January 2005 in which more than 80,000 hectares of land north of Port Lincoln were burnt. The impacts of the fire on the water quality were found to be minor and very short-lived. As the aquatic macroinvertebrates in the Tod River are quite tolerant and able to withstand the brackish and ephemeral nature of this river, they were not affected by the minor changes in water quality due to the bushfire. More long-term changes to the river will become apparent in the future. Additional monitoring of the Tod River, through the EPA's Ambient Monitoring Program, will enable us to determine if there are long-term changes.
Strength training is a component of every athlete's training regimen. Strength training also places additional nutritional demands on the body; the extent of those demands will depend on the intensity and the volume of the strength training program. Strength training has three essential aspects: the development of maximum strength, the ability to generate the greatest possible force in a single repetition; the building of elastic strength, the ability to direct the muscles to respond quickly and dynamically; and endurance strength, which involves the promotion of both cardiovascular and muscular endurance.

Proper nutrition, the nourishment of the body through foods and dietary supplements, is essential to general health and well-being. Athletes must pay particular attention to nutrition, given that the body requires a steady, properly proportioned supply of macronutrients, including carbohydrates, proteins, and fats, as well as numerous micronutrients, substances essential to the function of many human systems, including all vitamins and most of the minerals absorbed into the body. The maintenance of proper fluid levels, dependent on the mineral electrolyte sodium, and the absorption of sufficient calcium and vitamin D for bone maintenance are two examples of areas where a nutritional deficit will have a negative impact on strength training.

Strength training imposes stresses and impacts on the body that must be addressed through careful attention to nutrition. The most fundamental of these impacts is the need for additional energy to participate in the training itself. Whether strength training is the only form of athletic activity undertaken by a person, or when it is supplemental to other sports training, strength training carries with it the need to ensure that the body has sufficient energy to train and to properly recover. Most persons will obtain sufficient energy from a diet that is proportioned as 60-65% carbohydrates, 12-15% proteins, and less than 30% fats.

Protein consumption is another aspect of nutrition that is of particular interest in strength training, as dietary proteins, and their constituent amino acids, are the building blocks of muscle. There are 20 different amino acids, 10 of which are produced within the body and 10 of which must be obtained through diet. These dietary amino acids are also known as the essential amino acids. Unlike carbohydrates, stored as glycogen, and fats, stored as triglycerides, amino acids cannot be stored within the body and must be replenished on a daily basis. Myoblasts, the muscle cells that repair the cellular damage caused by training, are created from amino acids. Conventional sports science wisdom once held that extra protein consumption would speed muscle development; modern science supports the view that while there may be circumstances that justify short-term increases in protein consumption for an individual athlete, these are exceptional, and the typical strength training athlete requires only fractionally more protein than a healthy non-athlete.

Not all protein sources provide equal protein value when ingested. The amino acid pattern in an egg is the standard for the measurement of protein quality in all foods. Plant proteins are generally inferior in amino acid quality to dairy, soya, and meat products.
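As a rough illustration of how the 60-65% / 12-15% / under-30% split quoted above translates into grams of food, here is a minimal sketch in Python; the 3,000 kcal daily total is a made-up example figure, while the 4/4/9 kcal-per-gram energy densities are the standard values for carbohydrate, protein, and fat:

    # Hypothetical daily energy intake; illustrative only, not from the text.
    CALORIES = 3000  # kcal

    # Standard energy densities (kcal per gram) and the midpoints of the
    # dietary proportions quoted above.
    KCAL_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9}
    SHARE = {"carbohydrate": 0.625, "protein": 0.135, "fat": 0.24}

    for nutrient, share in SHARE.items():
        grams = CALORIES * share / KCAL_PER_GRAM[nutrient]
        print(f"{nutrient:12s} {share:6.1%} of energy -> {grams:4.0f} g per day")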
The healthiest and the surest way to ensure that the optimal amount of micronutrients is absorbed into the body is through diet; dietary supplements are a second choice, to be utilized when a dietary source is not available. The exceptions to this rule are with regard to the consumption of sport drinks and creatine supplementation. Sport drinks are useful to assist a strength training athlete to maintain carbohydrate, sodium, and potassium levels, especially as a recovery tool after a hard workout. Creatine is a supplement that attracted wide-ranging attention in the 1990s when notable professional athletes, including baseball home run hitter Mark McGwire and British sprinter Linford Christie, were adding the substance to their training diets. Creatine is a naturally occurring chemical, found in every cell in the body as creatine phosphate, or phosphocreatine. Creatine is essential to the production of energy through an electrochemical reaction involving the creation and reduction of adenosine triphosphate (ATP). Creatine supplementation, when conducted according to manufacturer specification, has been proven to assist athletes in the maintenance of the short-term energy stores essential for the explosive movements in strength training. Excess amounts of creatine are not known to produce any toxic effect on the body, as creatine is processed and excreted through the urine by way of the kidneys. Creatine does not contribute to the building of muscle, as might an anabolic steroid or human growth hormone; creatine supplementation is directed to the production and storage of the fuel the body needs for anaerobic activity such as strength training.
Although each is focused on a specific sub-skill needed for each test, they can be adapted to provide more general practice. The activities are as follows: - In activity 1 learners practise the different functions of Part 3 of the IELTS test - In activity 2 learners practise comparing and contrasting things for Part 2 of the FCE test - In activity 3 learners practise speaking for short turns in similar conditions to those of the TOEFL test. Activity 1 - IELTS This activity focuses on the functions needed for Part 3 of the test. This kind of activity (learners generating questions) also helps raise learner awareness of the task type and demands. 1. Explain to the learners that in Part 3 of the IELTS exam the candidate and examiner have a discussion relating to the subject the candidate has spoken about in Part 2. 2. Introduce these functions and elicit examples of language we might use for each: f. Express preferences g. Provide analysis 3. Put the learners in pairs or small groups and give them Handout A (see Attachment box below). Ask them to complete the prompts to form questions; the topic on the handout is education but this can be adapted to your context. 4. Regroup the learners in different pairs or small groups and tell them to ask each other the questions. If there are three in a group, one learner can act as an examiner and give feedback. 5. Some things to remind the learners: a. The answers they give should be longer than in other parts of the test. b. They should really try to show how good their English is at this stage, as the examiner uses this part of the test to see what their limits are. c. They should focus on ideas as well as language. d. You can also encourage learners to use strategic language such as ‘That's an interesting question - can I have a moment to think about it?' Activity 2 - FCE This activity focuses on the functions needed to compare and contrast in part 2 of the test. The interaction pattern is open class and so it can be done as a competition. 1. Before you do this activity make a collection of photographs. You need 12. These can be of almost any subject but should have enough detail to talk about. Pictures of people doing things are a common theme. Separate these pictures into two groups of 6 and number them. You also need some dice. 2. Explain to the learners that in Part 2 of the FCE the interlocutor gives each candidate two photographs and asks them to compare and contrast the two. The candidate needs to speak for one minute. 3. If you haven't practised this before, elicit examples of the kind of language we use to do this, e.g. ‘The first picture shows... but the second...' and ‘The main difference is...' 4. Place the two groups of photos on a table, face up, and ask the learners to look at them. They can discuss them and check vocabulary with you. 5. In turn, each learner throws the dice twice. The dice roll tells them which photos they have to compare and contrast, for example if they roll a 4 and a 1 they have to use photo 4 from the first group and photo 1 from the second. 6. Learners talk about their two pictures; others listen and then give feedback. 7. Ways to make this more challenging (and authentic): a. Keep the photographs face down, so learners don't get a chance to prepare what they are going to say. b. Vary the time learners need to speak for, from short turns to 1 minute. c. Learners speak in pairs, and then alone. 
Activity 3 - TOEFL This is more a way of setting up speaking activities than an activity in itself, but it is a useful way to recreate the challenging conditions of the TOEFL test. The procedure is as follows: 1. Explain to the learners - or remind them - that in the TOEFL test they are working with a computer, not other people. 2. Elicit ideas about why this is difficult, for example because you can't see the other speaker, or get any feedback on what you are saying. 3. Explain that your speaking activity is going to help learners with this problem. 4. Set up your pair or small group speaking activity in the following way: a. Put learners into pairs or small groups. b. Sit them back to back, but close to each other. c. Give them the speaking task. d. Ask the unseen partner to listen and then give feedback, but not to interact during the exercise. Almost all stages of the TOEFL test are suitable for this kind of interaction. For example, you could practise Task 1 of the test by giving learners questions on common topics such as their studies, or do Task 6 by playing an extract to the group and then asking them to give an opinion to their unseen partner. Written by Paul Kaye, Freelance Writer, Teacher, Trainer
Posted: June 22, 2006

When gold becomes a catalyst

(Nanowerk News) Gold has always been perceived as a precious material: you win a gold medal when you prove to be the best in a competition; you only get a Gold credit card when you are a preferential customer; and jewelry made of this material is amongst the most valuable. However, gold also has unexpected properties: it can act as a catalyst and transform carbon monoxide (CO) to carbon dioxide (CO2) when it comes in the form of tiny pieces, called nano-particles. Gold suddenly enhances desired chemical reactions as a catalyst, for example in the removal of odours and toxins or to clean automotive exhaust gases. Researchers from Switzerland, the UK, the USA and the ESRF (Grenoble) have monitored the catalytic process and proposed an explanation for the high catalytic activity of gold. They publish their results today in the journal Angewandte Chemie online.

The team used nano-particles of gold instead of bulk gold. The catalyst structure looks as if someone had pulverized a piece of gold and spread the tiny nano-sized pieces over an aluminium oxide support. The properties of the nano-particles are very different from those of bulk gold. Only when the gold atoms are confined to the size of just a few millionths of a millimetre do they start showing the desired catalytic behaviour. Scientists already knew that gold nano-particles react in this kind of setup and catalyse CO with oxygen (O2) into CO2. What they did not know was how the oxygen is activated on the catalyst. In order to find that out, they set up a cell where they could carry out the reaction and, in situ, perform an X-ray experiment with the ESRF beam.

The researchers first applied a flow of oxygen over the gold nano-particles and observed how the oxygen becomes chemically active when bound on the gold nano-particles, using high-energy resolution X-ray absorption spectroscopy. While constantly monitoring the samples, they switched to a flow of toxic carbon monoxide and found that the oxygen bound to the gold reacted with the carbon monoxide to form carbon dioxide. Without the gold nano-particles, this reaction does not take place.

"We knew beforehand that the small gold particles were active, but not how they did the reaction. The nice thing is that we have been able to observe, for the first time, the steps and path of the reaction. The results followed almost perfectly our original hypotheses. Isn't it beautiful that the most inert bulk metal is so reactive when finely dispersed?" comments Jeroen A. van Bokhoven, the corresponding author of the paper.

The possible applications of this research could involve pollution control such as air cleaning, or purification of hydrogen streams used for fuel cells. "Regarding the technique we used, the exceptionally high structural detail that can be obtained with it could be used to study other catalytic systems, with the aim of making them more stable and perform better", says van Bokhoven.

One of the great advantages of this experiment is the nature of catalysis. The fact that once the material has reacted it goes back to its initial state has made the experiments easier. Nevertheless, in technological terms, it has been very demanding: "We combined the unique properties of our beamline with an interesting and strongly debated question in catalysis.
Some extra time was needed to adapt the beamline to the special requirements of this experiment," explains Pieter Glatzel, scientist in charge of the ID26 beamline, where the experiments were carried out. In the end, it only took the team a bit over half a year to prepare and carry out the experiments and publish the paper. "This is a very nice recognition of our work," says Glatzel.

Source: European Synchrotron Radiation Facility
Basal and Squamous Cell Carcinomas

Basal and squamous cell carcinomas are the most common types of cancer. Both arise from epithelial tissue (see epithelium). They are rare in dark-skinned people; light-skinned, blue-eyed people who do not tan well but who have had significant exposure to the rays of the sun are at highest risk. Both types usually occur on the face or other exposed areas. Basal cell carcinoma typically is seen as a raised, sometimes ulcerous nodule. The nodule may have a pearly appearance. It grows slowly and rarely metastasizes (spreads), but it can be locally destructive and disfiguring. Squamous cell carcinoma typically is seen as a painless lump that grows into a wartlike lesion, or it may arise in patches of red, scaly sun-damaged skin called actinic keratoses. It can metastasize and can lead to death. Basal and squamous cell carcinomas are easily cured with appropriate treatment. The lesion is usually removed by scalpel excision, curettage, cryosurgery (freezing), or micrographic surgery in which successive thin slices are removed and examined for cancerous cells under a microscope until the samples are clear. If the cancer arises in an area where surgery would be difficult or disfiguring, radiation therapy may be employed. Genetic scientists have discovered a gene that, when mutated, causes basal cell carcinoma.

The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Sixteen symbols can be divided in three ways into sets containing an equal number of symbols, viz., into two 8's, four 4's, or eight 2's. In the following list a group is classed under the first systems of intransitivity to which its head belongs. For example, a group having both two and four systems of imprimitivity is listed among those having two systems. In previous lists letters of the alphabet have usually been used as elements. In this list letters with subscripts are used, the letters to distinguish between the systems and the numbers as subscripts to distinguish between elements within the system. The sixteen symbols used differ therefore with the number of systems.

For two systems of imprimitivity they are

a1 a2 a3 a4 a5 a6 a7 a8 b1 b2 b3 b4 b5 b6 b7 b8

for four systems they are

a1 a2 a3 a4 b1 b2 b3 b4 c1 c2 c3 c4 d1 d2 d3 d4

for eight systems they are

a1 a2 b1 b2 c1 c2 d1 d2 e1 e2 f1 f2 g1 g2 h1 h2

The transformations necessary for the final comparison of groups of different systems are very simple. The transitive constituents of the heads are distinguished by numbering them as they are numbered by Professor G. A. Miller in his "Memoir on the Substitution Groups whose Degree does not Exceed Eight" [American Journal of Mathematics, vol. 21 (1899)]. For example, (a1a2a3a4a5a6a7a8)32_11 is the eleventh group of order 32 and degree 8 in Professor Miller's list. Each distinct head is written and is followed by substitutions which generate groups that, multiplied into the given head, produce distinct imprimitive groups. In this list the following notation is used:

(1) Two systems,
t = a1b1.a2b2.a3b3.a4b4.a5b5.a6b6.a7b7.a8b8

(2) Four systems,
t = a1b1.a2b2.a3b3.a4b4.c1d1.c2d2.c3d3.c4d4
t1 = a1b1c1.a2b2c2.a3b3c3.a4b4c4
t3 = a1b1.a2b2.a3b3.a4b4

Of these, t and t1 generate a group simply isomorphic to the alternating group of degree four, and t, t1, t3 generate one simply isomorphic to the symmetric group of degree 4.

(3) Eight systems,
t2 = a1e1f1d1b1c1g1h1.a2e2f2d2b2c2g2h2
t3 = b1d1c1g1e1f1.b2d2c2g2e2f2
t4 = h1a1.b1d1.c1g1.e1f1.h2a2.b2d2.c2g2.e2f2
t5 = b1g1d1c1.e1f1.b2g2d2c2.e2f2
t6 = a1b1h1.a2b2h2
t7 = a1h1.a2h2

Of these, t1 and t7 generate a group simply isomorphic to the symmetric group of degree eight; t1 and t6 to the alternating group; t1, t4, t3^2, t5 to the primitive group of order 1344; t1, t2, t3 to the 336 group; t1, t2^2, t3^2 to 168_1; t1, t4, t3^2 to 168_2; and t1, t4 to the 56 group.
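As a modern aside (not part of the original memoir), imprimitivity on sixteen symbols is easy to check computationally. Below is a minimal sketch with SymPy, using the cyclic group generated by a 16-cycle as the simplest degree-16 example that is transitive but imprimitive:

    from sympy.combinatorics import Permutation, PermutationGroup

    # A 16-cycle on the symbols 0..15. The group it generates is transitive,
    # but imprimitive: the sets {i, i+8}, {i, i+4, i+8, i+12}, and
    # {i, i+2, ..., i+14} give block systems of eight 2's, four 4's,
    # and two 8's respectively, matching the three divisions above.
    c16 = Permutation(list(range(1, 16)) + [0])
    G = PermutationGroup([c16])

    print(G.is_transitive())  # True
    print(G.is_primitive())   # False: systems of imprimitivity exist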
The Oxford Dictionary gives several definitions of the word imperialism. It refers to the rule of an emperor, particularly when arbitrary or despotic. Imperialism as it is defined on the national level refers to a state or group of states that has power over others and uses it to determine the fate of other states. In American history, this concept that a nation has the ability to alter the course of history and of other nations as well is sometimes referred to as nationalism. In the nineteenth century, imperialism and nationalism were at their highest, spurred by events such as the Spanish-American War and the sinking of the Maine. As a consequence of the political climate, two very distinct camps evolved. The Nationalists held the steadfast belief that a legitimate state is based on the people rather than a dynasty, God, or religion. Much of the nationalist creed involves inclusion, which essentially destroys regional ethnicities and language variances. Part of the American democratic system assumes that civic engagement is a form of nationalism, while a system like the German nation during the nineteenth century was more ethnically chauvinistic. This second reading of ethnic nationalism led to the rise of the Aryan Nation and eventually the Nazi movement that created the First and Second World Wars. While it is not necessarily true that nationalism leads to absolutism and despotic administrations, it does at the very least hold several ideals that require boundaries to be drawn.

The early 1900s were a period of world expansion for both the American and European nations that were looking to remake the world to their own view. This is yet another way of describing colonialism: colonialism, the ideal of Manifest Destiny, and the proposal to expand democracy through the newly developing world at the beginning of the century. The direct goals for America and Europe at this time were to spread themselves beyond their borders. As a consequence of such an increase in imperialistic policies and behaviour, the Anti-Imperialist League was formed in 1899. The purpose of this league was to create public concern over the government making decisions apart from the citizens of the respected nations. At that time, the United States was occupying Cuba, Puerto Rico and the Philippines. It was an almost strictly pacifist platform, because the anti-imperial league's primary goal was to end the Spanish-American War. There was also a controversy over the position the nationalist party, and the United States in general, had on 'preventive occupation', because the main goal of the army at each of those locations was to suppress the local rebellions, replace local government leaders, and create a new democratic system.

The Spanish-American War was a brief struggle won rapidly by the United States over an inexperienced Spanish army and navy. "Thanks to the encouragement of expansionists and the sensational journalism practiced by the press, Americans enthusiastically supported the war. Many volunteered, but the long-dormant Army was not well prepared to manage the fighting.
The Navy, on the other hand, was in good trim, having been built up beginning with the Harrison administration in response to the writings of Mahan and the support of other "navalists" like Theodore Roosevelt." (www.sagehistory.net) "The Navy fought well - from the destruction of the Spanish fleet in Manila Bay to the defeat of the Spanish fleet off Cuba - though it was hampered by bureaucracy. Although plagued by inefficiency, disease and disorder, the Army, bolstered by volunteers such as the celebrated "Rough Riders," fought bravely enough to defeat a poor Spanish army near Santiago." (www.sagehistory.net) American troops also occupied Puerto Rico. "The Treaty of Paris that ended the war granted independence to Cuba; Spain turned over Puerto Rico, Guam, and the Philippine Islands to the United States, for which the U.S. paid $20 million to Spain. The 'Splendid Little War' lasted merely four months, the fighting itself merely weeks." (www.sagehistory.net)

"Thanks to Dewey's victory in Manila, American military forces occupied the Philippine Islands. Philippine revolutionaries refused to accept the occupation and they continued to fight. It was a short war, and when the Philippines were annexed, there was more controversy for America as an imperial power. Imperialists argued that the U.S. had a duty to help educate and govern the developing parts of the world, but an Anti-Imperialist League was founded that opposed America's acquisition of colonies as anti-democratic and destructive of American ideals. The result of the debate was that the Philippines were granted independence, and Puerto Rico was slowly given independent rule, a question which is still being decided.

Even with all of this, a third theory or point of view, sometimes known as American exceptionalism, seems to strike the perfect balance between imperialism or nationalism and anti-imperialism. American exceptionalism refers to the theory that the United States occupies a special niche among the nations of the world in terms of its national creed, historical development, political and religious institutions and origins. This means that even though we may not want to force our democratic ideals and beliefs onto other nations in the guise of the greater good, in order to maintain balance in the world we are unwillingly cast in the role of the world's policeman, while struggling to maintain a neutral and objective stance regardless of the issues. The meeting of the puritanical vision of the city on a hill and the individualistic tradition is the best description of the roots of imperialism, and the best theory for where the changing political climate led after the years of nationalism.

Davidson, J. W., Gienapp, W. E., et al. (2008). Nation of Nations: A Narrative History of the American Republic (6th ed., Vol. 2). Boston: McGraw Hill.

http://www.fordham.edu/halsall/mod/1899antiimp.html

http://www.sagehistory.net/worldpower/topics/imperialism.htm
In the following blogs I will be going back to basics. I will walk, step by step, through the textbook "Fundamentals of Physics Extended", 8th Edition, by David Halliday, Robert Resnick and Jearl Walker, and through the investigation of this book I will open up points as they come up with regard to how we can use the world of physics to derive a common sense perspective. Applied, such a perspective can support humanity to align ourselves with the laws of physics, as we have been existing in separation from them, only using them to our benefit within manipulating the physical through our acquired knowledge and research, instead of co-existing with the physical, learning from it and projecting what we see in the physical onto ourselves to find inner insight as to how we can perfect ourselves, as the physical within the laws of physics works as perfection, as wholeness, as unity.

Chapter 1.1 - STANDARDS OF LENGTH, MASS, AND TIME

The three basic quantities in mechanics are length [L], mass [M], and time [T], while all other quantities in mechanics can be expressed in terms of these three. For instance, as has been shown in previous blogs, velocity is defined by how much length a body moves in a period of time, thus it is expressed as length divided by time, [L]/[T]; volume is three dimensions of length, as height times width times depth, and is thus expressed as [L]^3; acceleration is the change in velocity through time and is thus expressed as velocity divided by time, which gives us [L]/[T]^2; and force, as we've seen, is mass times acceleration, F=ma, so force is expressed as [M][L]/[T]^2.

Though, as we know, all these quantities can be expressed in different units: length can refer to a meter, a centimeter, a mile etc., mass can be described as a pound, a kilo, a gram etc., and time can be a second, a month, a year etc. Due to all the different options to express the same quantity, we can clearly see that a standard unit must be defined for effective communication within the scientific community and humanity as a whole - to be able to build a bridge over a river we must be able to communicate effectively as to how wide the river is, and we must thus speak the same "language" of units. For this reason the scientific community was wise to create and define an international standardized system known as the SI system of units, which defines the units of length (meters), mass (kilograms), and time (seconds), and other units for temperature, electric current and more.

It's important to emphasize that the units chosen are completely arbitrary, from the perspective that they don't have any actual physical value nor any actual physical meaning - a meter is merely the distance between two lines on a specific platinum–iridium bar; it is a length chosen randomly that was accepted historically and "stuck". The same goes for time - the duration of the day was divided into 24 hour periods, then every hour was divided into 60 minutes and every minute into 60 seconds. This was convenient due to the era and how the measurements were done, and it was accepted, though it could have been any other division of the day and the laws of physics would not have been affected.

This next text is for you to be amused, as I was, when I read how some of these units came about; here is a glimpse of how arbitrary they are. Copied from the textbook (pg. 4) - "In A.D.
This next text is for you to be amused, as I was when I read how some of these units came about; here is a glimpse of how arbitrary they are. Copied from the textbook (pg. 4): "In A.D. 1120 the king of England decreed that the standard of length in his country would be named the yard and would be precisely equal to the distance from the tip of his nose to the end of his outstretched arm. Similarly, the original standard for the foot adopted by the French was the length of the royal foot of King Louis XIV. This standard prevailed until 1799, when the legal standard of length in France became the meter, defined as one ten-millionth the distance from the equator to the North Pole along one particular longitudinal line that passes through Paris." The practicality of having an international system of units is clear, yet it's interesting to see that to this day some countries and organizations are still not willing to let go of their historical choice of units and join the rest of the world in unison. Some units have proven to be more practical than others; for instance, the metric/decimal system is very convenient for length as it works in powers of 10, so mathematically and intuitively it is easy to use and to move between scales within it, from millimeters to kilometers and so on - you just multiply or divide by a power of 10. Despite the simplicity of the metric system, you will still find the mile system being used even though it is much more complicated, as there are 1760 yards in a mile and 3 feet in a yard. The only reason this awkward system is still being used is rooted in ego, separation and money, as those that insist on keeping it, such as America, do so for reasons of self-interest rather than considering what would be best for the scientific, engineering and other professional communities and humanity as a whole; they want to maintain their individuality by "respecting" their heritage, or for other political and economic reasons, even though it doesn't make sense and is actually dangerous: there have been many documented occasions over the years of mistakes happening due to the different unit systems, some fatal and some amusing, caused by confusion in unit conversions or a simple disregard for which unit is being used at the moment - some of these mistakes are presented in this document. Here is a chance for us, as humanity, to learn from this example and take in some supportive features as well as learn from the compromising aspects of this point. Within creating the standardized units we are given an example of applying a common-sense principle that is best for all - having one standardized system of units for the entire world, as we all communicate and share information constantly and continually, and one system thus has many benefits, preventing the misunderstandings that, as the link above clearly shows, do exist; it thus acts within the principle that prevention is the best cure. On the other hand, we are simultaneously shown the nature of the human, as the human exists within self-interest to keep an individualized self-definition - a unit system in this case - as the desire to hold on to one's individuality is greater than the practical common sense of all agreeing on one system that will benefit all and, through having one system, prevent dangerous mishaps.
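To illustrate why mixed unit systems invite exactly the conversion mistakes described above, here is a small Python sketch (my own example; the factors are the standard ones quoted above) contrasting the imperial chain with the metric chain:

    # Imperial needs memorized factors; metric is just powers of 10.
    FEET_PER_YARD = 3
    YARDS_PER_MILE = 1760

    def miles_to_feet(miles):
        return miles * YARDS_PER_MILE * FEET_PER_YARD

    def kilometers_to_millimeters(km):
        return km * 10**6  # 10^3 m per km, then 10^3 mm per m

    print(miles_to_feet(2))              # 10560
    print(kilometers_to_millimeters(2))  # 2000000

The most famous conversion mishap of this kind, the 1999 loss of NASA's Mars Climate Orbiter, came down to one team supplying thruster data in pound-force seconds where the navigation software expected newton-seconds.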
Now, just to be clear, it's not to say that the SI system is the most effective and practical system there could be, though the decimal system has proven itself to be very effective and easy to use; it's the principle of having one system for all, just as there is common sense in having one language for all, as English is the accepted scientific language through which scientists can communicate and share findings. Once all agree on a system, the system can be adjusted and improved towards perfection, and thus form the one most effective system that all work with. The principle of having a standardized system will have an economic and ecological effect, as all equipment for any industry will be geared to one system, so we will not need to produce the same equipment in different unit scales, nor all the accessory equipment to go with it. Standardizing allows us to put our efforts into perfecting the chosen standard and improving it as new research and data allow - this type of common sense is suggested in the equal money system, as having the best and highest standard for everything we need. For instance, we would have only one, best cell-phone charger suitable for all phones; this allows the highest standard, taking into consideration all knowledge in the field, and eliminates all the extra production we are now creating with many different chargers, as each phone uses a different charger and when you change your phone, all the accessories have to change with it, because nothing has yet been standardized. Another example is electricity: there isn't an international standard for electricity, so in every country you have to have converters to use your electric shaver or laptop, and thus you need to purchase more "accessories" for your equipment, and thus more needs to be produced, and thus more energy is wasted and more pollution is created - all because we have not standardized. As long as we are working with many systems, we will have all equipment in all different units. We can see, then, that standardization has an economic effect, and thus economic interests are pulling the strings rather than the interest of humanity as a whole, as many people have a profit-based interest in keeping the different standards alive, because it requires more factories, and thus more jobs and more money flow - this might sound like a good thing, but it is actually good only for those that benefit from the capitalistic system, and it is seen as good only through the narrow-sighted mind of capitalism. Thus, I suggest investigating the equal money system, as there is explicit and detailed information on how we can create a system that actually supports us within being aligned with all that is here, and thus creating a system that is aligned with the law of the physical, as one with the physical as all life that is here, which should be the goal of the science of physics, as this is the only way we can walk with the forces of nature and not against them - we can be sure that if we are at war with nature, nature will eventually win, if not the temporary battle, then certainly the war. Thus, we must align ourselves, use all the insight physics is providing us, use the brightest minds the system has produced, and apply them to create an entire system that in all aspects supports us as humanity, through aligning ourselves to support and walk with all that is here.
Having said that, another point to look at and consider is that, as I've said, the individual quantities and their units have no value in the physical; what does have value is the relationships and interactions between them. Humanity, and scientists as representatives and main influencers of our belief system, have been emphasizing the quantity as what has value instead of looking at and investigating further the relationships and interactions - they considered these three quantities, mass, length and time, to be stable and constant and went to great lengths to standardize them, but then along came Einstein's special theory of relativity, which I will expand on in later blogs. In short, what Einstein found is that all these quantities are relative, in the sense that their value changes according to the velocity at which they are moving and according to the frame of reference from which they are observed; he thus demonstrated that even mass, length and time are relative, and exist within a relationship to their velocity, which means that their property, as their value, is aligned and defined according to their relationship and interaction with the rest of existence, and they are not in fact stable and independent as they were initially considered to be. Yet the general perspective hasn't changed all that much in the more than 100 years since relativity and its implications were discovered - we as humanity, thanks to the paradigm that we exist within, thanks to the scientists that have become like gods in the modern world, whose word and perspective on reality is valued to such an extent due to their influence on our economy and life in all arenas, still view the world from a starting point of individual quantities while disregarding the interconnectedness, relationships and interactions between all these quantities that we so strive to measure, compare and perfect. As long as we hold onto the starting point of defining a quantity by its value, and thus look for the value of a quantity while disregarding its relationship and interaction with everything else in existence, we are disregarding the essence of the world in which we live; we are ignoring the actuality of reality, as everything is in fact in constant interaction and cannot be isolated and defined as an individualized quantity. This is clearly seen in the education system, as we educate children to compete and achieve good grades as the value given to their studies, but there is no emphasis on teaching them to relate to their environment, and no attempt to teach them real, intimate communication with each other; the whole education system, as a reflection of our society, is directing our children towards competition, as isolated individuals with a value defining them - not through their ability to interact, and thus not educating them to interact, but through a standardized test, a quantity, a grade, a number - ignoring who they are as life, and looking only at the standard. To learn more about yourself and how reality functions, please consider a FREE online course: Desteni I Process Lite - Learn Practical Life Skills Online. Also, please check out the following links: Desteni I Process, Equal Money System, Journey to Life Group, Eqafe Life Products - Self Help, Creation's Journey to Life, Heaven's Journey to Life, Earth's Journey to Life.
Holocentrids are nocturnal, large-eyed, typically reddish, small to moderate-sized fishes, comprising 8 genera and 83 species. During the day, they typically remain hidden in crevices, caves, or under ledges at depths from the shoreline to 100 m (330 ft). The family is divided into two main subfamilies, the Holocentrinae (squirrelfishes) and Myripristinae (soldierfishes). Soldierfish feed mainly on large zooplankton whereas squirrelfish feed on benthic invertebrates and small fishes. Some holocentrids, such as soldierfishes (mempachi) in Hawaii, are commercially important as food fishes. Hawaiian Squirrelfish – Sargocentron xantherythrum (June 2015) Hawaiian Squirrelfish (Sargocentron xantherythrum) were raised from eggs collected in coastal waters off Oahu, Hawaii in May 2015. S. xantherythrum eggs are pelagic and measure 0.7 mm in diameter, with an embryo and oil globule that have a light orange tint. The larvae began to feed on copepod nauplii at 3 dph at 3.1 mm TL. The large, single, serrate rostral spine characteristic of all squirrelfish larvae (the rostral spine of soldierfish is bifurcate) began to form near 10 dph at 3.5 mm TL. The larvae underwent flexion between 19 dph (5.7 mm TL) and 25 dph (7.3 mm TL). Postflexion larvae developed a blue sheen below the dorsal fin and became increasingly nervous, swimming actively, sometimes erratically, around the tank. Mortality was highest during the late flexion to early postflexion period. The first red pigment and bottom orientation were observed near 35 dph (10 mm TL), at which time activity level decreased. Settlement appeared to be complete near 50 dph (16 mm TL), when the rostral spine had fully receded and red pigmentation had filled in. The major specializations to pelagic life of S. xantherythrum larvae are the complex rostral spine and extensive spination of the head, and the early formation of the pelvic fins and scales. This is the first documented larval rearing of a squirrelfish species.
Clubfoot [Talipes Equinovarus] What is clubfoot? Clubfoot, also known as talipes equinovarus, is a congenital deformity of the foot that occurs in about 1 in 1,000 births in the United States. The affected foot tends to be smaller than normal, with the heel pointing downward and the forefoot turning inward. The heel cord [Achilles tendon] is tight, causing the heel to be drawn up toward the leg. This position is referred to as "equinus," and it is impossible to place the foot flat on the ground. Since the condition starts in the first trimester of pregnancy, the deformity is often quite rigid at birth. The three classic signs of clubfeet are 1.) Fixed plantar flexion (equinus) of the ankle, characterized by the drawn-up position of the heel and the inability to bring the foot to a plantigrade (flat) standing position. This is caused by a tight Achilles tendon. 2.) Adduction (varus), or turning in, of the heel or hindfoot. 3.) Adduction (turning under) of the forefoot and midfoot, giving the foot a kidney-shaped appearance. What causes clubfoot? No one really knows what causes the deformity. Most commonly, it is an isolated congenital birth defect and the cause is idiopathic (unknown). Clubfoot is believed to be a "multifactorial trait," meaning that there are many different factors involved. The majority of clubfeet result from the abnormal development of the muscles, tendons, and bones while the fetus is forming in the uterus during the first trimester of pregnancy (about weeks 8-12). While researchers are unable to pinpoint the exact cause, both genetic and environmental conditions play a role. Clubfoot is about twice as common in males, and occurs bilaterally (in both feet) in about 50% of cases. If both parents are unaffected and have an affected child, the risk of the next child having a clubfoot is 2-5%. There is also an increased risk for clubfoot associated with certain neurogenic conditions (spina bifida, cerebral palsy, tethered cord, arthrogryposis), connective tissue disorders (Larsen's syndrome, diastrophic dwarfism), and mechanical conditions (oligohydramnios, congenital constriction bands). The foot deformity seen with the above conditions is often more severe and often requires early surgical correction. How do I know if my child has clubfoot? Clubfoot is easily diagnosed during the initial physical examination of the newborn. Oftentimes, the diagnosis of clubfoot can now be made prenatally during the 16-week ultrasound. If the diagnosis is made prenatally, we encourage you to schedule an appointment in the pediatric orthopaedic clinic to discuss the diagnosis and the treatment options available. During the initial examination, your child's physician or nurse practitioner will obtain a complete prenatal, birth, and family history. In addition, a complete physical examination will be done. Babies born with clubfoot have a slightly increased risk of having developmental dysplasia of the hip (DDH). DDH is a condition of the hip joint in which the top of the thigh bone (femur) slips in and out of its socket because the socket is too shallow to keep the joint intact. Therefore, a detailed hip examination will be done to ensure that there is no instability. If left untreated, the deformity will not go away. It will continue to get worse over time, with secondary bony changes developing over years. An uncorrected clubfoot in the older child or adult is very disabling.
Because of the abnormal development of the foot, the patient will walk on the outside of his/her foot, which is not designed for weight-bearing. Although each child is different, treatment for clubfoot usually begins immediately after diagnosis. It is important to treat clubfoot as early as possible (shortly after birth). Specific treatment will be determined by your child's age, overall health, and medical history. In addition, the severity of the condition, as well as the child's tolerance and the parents' preference, will be considered in the treatment plan. The long-term goal of all treatment is to correct the clubfoot and maintain as normal a foot as possible while facilitating normal growth and development of the child. Ponseti's Method of Treatment: Dr. Ignacio Ponseti of the University of Iowa pioneered this method of treatment in the 1940s. Treatment ideally starts immediately after birth. The treatment involves serial manipulation and plaster casting of the clubfoot. The ligaments and tendons of the foot are stretched with gentle weekly manipulations. A plaster cast is applied after each weekly session to retain the degree of correction obtained and to soften the ligaments. Thereby, the displaced bones are gradually brought into the correct alignment. Four to five long-leg casts (from the toes to the hip) are applied with the knee at a right angle. When a baby is born with clubfoot, a pediatric orthopaedic surgeon with expertise in the manipulation and plaster-cast method should start correction as soon as the diagnosis is made. Ideally, the plaster casting should begin immediately after birth in congenital clubfoot. However, this method has been shown to be effective even when treatment is delayed for several months. What are other non-surgical options used in treating clubfoot? There are many different treatment options for clubfoot. Many surgeons prefer to use soft fiberglass casting material instead of the plaster that is used in Ponseti's method. The parent removes the cast prior to each weekly visit in the orthopaedic clinic. The manipulation and casting is continued until the deformity is either corrected or the degree of correction plateaus. What surgical options are used in treating clubfoot? If casting does not fully correct the deformity, surgical options include: 1.) Soft tissue releases that release the tight tendons/ligaments around the joints and result in lengthening of the tendons. 2.) Bony procedures such as osteotomies/arthrodeses that divide bone or surgically stabilize joints to enable the bones to grow solidly together. 3.) Tendon transfers to place the tendons, or ligaments, in an improved position. How long will my child need to see the orthopaedic surgeon? Children will need regular follow-up for several years after treatment (casting or surgery) to ensure that the clubfoot does not recur. The most common time for recurrence is within one to two years following treatment. However, clubfoot can also recur several years after casting or surgery. Clubfoot recurrence can be treated with manipulation/casting or additional surgery. Therefore, we usually recommend that patients continue follow-up care until the end of growth (around 18 years of age). What to expect? Children with clubfoot will usually do well with treatment, develop normally, and participate fully in athletic or recreational activities. The long-term goal of treatment is to provide your child with a working foot that looks as normal as possible.
Kristi Yamaguchi: 1992 Olympic Figure Skating Gold Medalist: treated with serial manipulations/casting Troy Aikman: Quarterback Dallas Cowboys: treated with serial manipulations/casting
I’ve been on vacation and upon returning I had a full inbox of questions about how to integrate multiple language arts elements into a single assignment. I thought I would use an example from my own curriculum to illustrate the idea of integration. One novel we teach during the Sophomore year is Harper Lee’s To Kill A Mockingbird, and we also teach SAT-frequent vocabulary words and grammatical skills. Thus, I now have three elements to combine. Many teachers prefer to teach each of these items separately–which may be fine for introductory lessons–but I prefer to combine them in the application stage. A possible in-class assignment could be as follows: Describe two types of courage in Part I of To Kill A Mockingbird using at least two cited quotations from the novel. In a response of at least two 3-5 sentence paragraphs, use at least four of our vocabulary words correctly and use each of the sentence types learned in this class (simple, compound, complex, and compound-complex). This seemingly simple assignment forces the students to do the following: - identify and describe two types of courage in the novel (analysis), - locate, incorporate, and cite two quotations into the response (evidence and citation use), - organize the two types of courage into two short paragraphs (organization/structure of ideas), - apply the use of at least four vocabulary words (vocabulary application), and - incorporate the four types of sentence (sentence fluency and variation). Of course, now comes the difficult part for the teacher. How do you score or assess the student products? Or, do you? Possibly, one may decide not to score the products for the purpose of the grade book (an assessment of learning) but may decide to use this assignment as a means of improving the students’ skills (an assessment for learning). I would most likely not enter a score in the grade book with the students’ first attempt but might use this as a rough draft assignment to be edited and improved over time or as an introduction to another assignment using the same elements. However, when I do decide to enter something like this into the grade book, I would recommend one of two methods. Either score each element separately for the grade book (the analysis, citation use, organization, vocabulary application, and sentence fluency) to reveal the students’ abilities in each of the five areas, or use a rubric separating each of these elements into a distinct column resulting in a final total score. Regardless, the students need to know how well they performed in each of the five areas. I would hope that these five areas also relate to the course’s core requirements (learning outcomes, Power Standards, etc.). These five areas would either be end of course learning targets or skills leading to the end of course learning targets. By integrating the elements in a course, the students can begin to add complexity to their products while also saving the teacher time. Plus, this mixing of skills allows students to see the interconnected nature of the course’s learnings. P.S. I tend to have the students label each element for me before they turn in their final drafts. For example, I would have the students circle the four (or more) vocabulary words, label the four sentence types (and possibly the individual elements of each non-simple sentence), and number each description of courage (a 1 and a 2 would suffice). 
This simply forces the students to identify what they have and have not done as well as help me identify where problems may lie, much like showing one’s work in math.
In 1978, the World Health Organization (WHO) adopted the Declaration of Alma-Ata. The declaration, named for the host city, Almaty, Kazakhstan (formerly known as Alma-Ata), outlined the organisation's stance towards health care made available for all people in the world. The declaration also defined eight essential components of primary health care, which helped outline a means of providing health care globally. Public Education: Public education is the first, and one of the most essential, components of primary health care. By educating the public on the prevention and control of health problems, and encouraging participation, the World Health Organization works to keep disease from spreading on a personal level. Nutrition: Nutrition is another essential component of health care. WHO works to prevent malnutrition and starvation, and thereby to prevent many diseases and afflictions. Clean Water & Sanitation: A supply of clean, safe drinking water, and basic sanitation measures regarding trash, sewage and water cleanliness, can significantly improve the health of a population, reducing and even eliminating many preventable diseases. Maternal & Child Health Care: Ensuring comprehensive and adequate health care for children and for mothers, both expecting and otherwise, is another essential element of primary health care. By caring for those who are at the greatest risk of health problems, WHO helps future generations have a chance to thrive and contribute globally. Sometimes, care for these individuals involves adequate counselling on family planning and safe sex. Immunisation: By administering global immunisation, WHO works to wipe out major infectious diseases, greatly improving overall health globally. Local Disease Control: Prevention and control of local diseases is critical to promoting primary health care in a population. Many diseases vary based on location. Taking these diseases into account and initiating measures to prevent them are key factors in efforts to reduce infection rates. Medical Treatment: Another important component of primary health care is access to appropriate medical care for the treatment of diseases and injuries. By treating disease and injury right away, caregivers can help avoid complications and the expense of later, more extensive, medical treatment. Essential Drugs: By providing essential drugs to those who need them, such as antibiotics to those with infections, caregivers can help prevent disease from escalating. This makes the community safer, as there is less chance for diseases to be passed along.
If you ever need to count the number of cells that contain text in Excel, there is a very easy way to do it. You will need two basic functions: - COUNT – this Excel function returns the number of cells in a range that contain numbers - COUNTA – this function returns the number of cells that are not empty Intuitively, we know that the number of cells that contain text (not numbers!) is equal to the number of non-blank cells minus the number of cells containing numbers. In other words: COUNTA – COUNT. Let's look at a simple example to illustrate. Suppose we have the following data in our spreadsheet: Place the cursor in the cell that you want to hold the result and type =COUNTA(D1:D12)-COUNT(D1:D12). Alternatively, if you type out =COUNTA( you can then drag your cursor over the cell range to select it and Excel will insert D1:D12 into the formula. You can do the same with the COUNT argument, too. When you press Enter, Excel resolves the equation and displays 5.
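As an aside, there is also a single-formula alternative that counts the text cells directly, without the two-step subtraction; this is an equivalent approach rather than part of the method above:

    =SUMPRODUCT(--ISTEXT(D1:D12))

ISTEXT returns an array of TRUE/FALSE values for the range, the double negative coerces them to 1s and 0s, and SUMPRODUCT adds them up, giving the same result of 5 for this example.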
Video courtesy of NOAA Okeanos Explorer Program In crystal-clear high-definition video, NOAA's Little Hercules remotely operated vehicle (ROV) flies over the remnants of a copper-sheathed sailing ship that disappeared at some point in the early to mid-19th century. The video footage was captured on April 26, 2012 from the NOAA Ship Okeanos Explorer during the Gulf of Mexico Expedition 2012. The dive was conducted at site 15577 – a recently mapped but never-before-seen shipwreck in the western Gulf of Mexico. While most of the wood has since disintegrated, the oxidized copper sheathing remained, along with a variety of artifacts. These included plates, glass bottles, guns, cannons, the ship's stove, navigational instruments, and anchors. A few days before discovering this wreck, the Okeanos Explorer filmed some incredible geologic features on the bottom of the Gulf of Mexico, including rivers and pools of brine, salt "volcanoes", and natural seepage of oil from the sea floor. These features contribute to a highly distinctive biodiversity that ultimately exists via a chemosynthesis-based food chain and the chemicals seeping from the earth. The following is the dive report from the Okeanos Explorer: During yesterday's dive, we searched for natural hydrocarbon seeps — areas where oil and natural gas slowly leak out of the seafloor. This is an entirely natural phenomenon and an important characteristic of the Gulf of Mexico ecosystem. Just as oil and gas provide energy to power our modern society, these chemicals provide energy to support dense animal communities. Although seeps account for a much smaller area of the seafloor than the completely flat mud bottom that characterizes the majority of the Gulf, they are still quite common and contain an astounding density of life within a relatively small area. Because of the patchy distribution of hydrocarbon seepage, seep communities have been described as 'oases' of primary productivity in an otherwise food-poor deep sea. However, the degree to which seep communities represent isolated 'islands' having very little interaction with one another and the rest of the Gulf of Mexico ecosystem is unknown. Thus, studying the interactions among animals within seep ecosystems, especially food web interactions, is important to the understanding of the function of seep ecosystems and how they fit into the broader Gulf of Mexico ecosystem. Vestimentiferan tubeworms and bathymodiolin mussels dominate biomass in seep communities. These animals have symbiotic bacteria living inside their bodies. Through chemosynthesis, the bacteria harness energy from the chemicals in seeping fluid to produce food for their host, much like plants harness the sun's energy to produce food via photosynthesis. These animals, in turn, provide habitat for an entire community of smaller animals, including shrimp, squat lobsters, brittle stars, anemones, and polychaete worms. Interestingly, there is no evidence that any of these animals are actually eating the mussels or tubeworms. Instead, the associated animals get their energy from free-living bacteria that harness chemical energy in the same way as the symbiotic bacteria. Recently, scientists collected whole aggregations of tubeworms and mussels, and their associated communities. This study showed that most of these smaller animals feed within a single mussel or tubeworm aggregation (as opposed to jumping tens to hundreds of meters from one to another).
This supports the ‘oasis’ or ‘island’ analogy — at least for those animals that spend most of their lives at seeps. There are still some missing links that would help complete the picture of how energy is transferred from bacterial primary production through the seep food web and beyond. One link is meiofauna – very small, sometimes microscopic animals such as nematodes and copepods (figure 1). These tiny creatures are likely to be an important link in the transfer of energy from chemosynthetic microbes to higher predators. Another is the export of seep primary production to the surrounding deep-sea ecosystem. It is not uncommon to see fish and larger benthic (bottom dwelling) crabs “visiting” (figure 2). These animals spend most of their lives away from seeps, but may feed on seep-associated animals before moving on. It is hard to measure how much energy leaves the seep ecosystem, because fish and large crabs are less common in these habitats than the resident animals and are difficult to capture. Additionally, seep nutrition may make up a very small amount of the diet of one individual fish, so that it is difficult to detect. However, many fish, each carrying away a small amount of seep material, could be quite significant in transferring energy from seeps into the greater Gulf of Mexico food web.
This Month in Physics History September 1904: Robert Wood debunks N-rays Shortly after the discovery of X-rays in 1895, there was a flurry of research activity in the area, with many scientists expecting more similar discoveries. So when another new type of radiation was reported in 1903, it generated a lot of excitement before it was proved false in September 1904. The first claimed discovery of the new type of radiation was made by René Prosper Blondlot, a physicist at the University of Nancy in France. Blondlot, a respected scientist and member of the French Academy of Sciences, had been experimenting with the polarization of X-rays when he found what he thought was a different type of radiation. In the spring of 1903, Blondlot published his first report on the new rays in the Proceedings of the French Academy (the Comptes Rendus). He called the new rays N-rays, with N standing for Nancy, his hometown. The N-ray discovery was something of a matter of national pride for the French, since X-rays had been discovered by Wilhelm Conrad Roentgen, a German. Blondlot used various kinds of apparatus to observe the rays, which were purportedly just barely detectable. In his first experiments, he detected the rays through slight variations in the brightness of a small electric spark when the rays fell on it. Later, he used screens with a phosphorescent coating, which would supposedly glow slightly brighter when hit by N-rays. He thought the new rays were also a form of light, and found that they could be polarized, reflected, and refracted. Within months of Blondlot's first announcement, many scientists–mostly French scientists, but a few others as well–would claim to have seen the rays. Hundreds of papers were soon published on the topic, including 26 papers by Blondlot himself. Soon various properties of the N-rays were "discovered." For instance, the rays were found to go through wood and metals, but were blocked by water. They were emitted by the sun, gas burners, and metals, but not wood, and could be stored in a brick. Other scientists proposed applications of the mysterious radiation. For instance, Augustin Charpentier, a professor of medical physics at the University of Nancy, reported that the rays were emitted by rabbits and frogs, and the human brain, muscles, and nerves. He predicted that N-rays, like X-rays, could be useful for medical imaging, to see the outline of internal organs. Another N-ray researcher, Jean Becquerel, son of Henri Becquerel who discovered radioactivity, claimed that N-rays could be transmitted over a wire. These scientists seem to have genuinely believed in their claimed observations, but many other scientists found they could not replicate the results. In fact, they could not see any evidence of N-rays at all. Blondlot and other N-ray believers argued that those who couldn't see the rays simply didn't have sufficiently sensitive eyes to detect the effects of N-rays, which were supposedly just at the limits of visibility. The physics community was divided on the issue. One physicist who had been unable to detect the N-rays in his own lab was Robert Wood, of Johns Hopkins University. Wood, who did research on optics and electromagnetism, was known for his diverse interests and his enjoyment of pranks. In the summer of 1904, Wood was sent to France to observe Blondlot's experiments, in hopes of clearing up the matter. Blondlot and his assistants set up several of their demonstrations for Wood.
In the most well known demonstration, Blondlot showed how N-rays could be spread out into a spectrum by an aluminum prism. Blondlot claimed to detect this spectrum by noting a slight increase in brightness at some points along a phosphorescent strip. Wood could see no evidence of the N-ray spectrum. The experiments had to be done in a darkened room, which gave Wood the opportunity to play a trick: unseen by Blondlot and his assistant, Wood removed the crucial prism from the apparatus. He then asked Blondlot to repeat the observations of the N-ray spectrum. Not knowing the prism had been removed, Blondlot continued to insist he saw the very same pattern he had claimed to see when the prism was in place. After several similar demonstrations, Wood was completely convinced that Blondlot and others were imagining the phenomenon. On September 22, 1904, Wood sent off a letter to Nature describing his visit to Blondlot's lab, and his conclusion that N-rays were non-existent. "After spending three hours or more in witnessing various experiments, I am not only unable to report a single observation which appeared to indicate the existence of the rays, but left with a very firm conviction that the few experimenters who have obtained positive results have been in some way deluded," he wrote in his report to Nature. Although Wood didn't mention Blondlot by name in the article, anyone reading it would have known whose experiments it referred to. Wood's report was published in the September 29, 1904 issue of Nature. Within months, almost no one believed in N-rays anymore. The issue was considered resolved. Blondlot, however, refused to admit he had been in error, and kept working on N-rays for years after others had given up on them. The story of N-rays, which fooled many respectable scientists, has been used ever since as a cautionary tale of how easy it is to deceive oneself into seeing something that is not really there.
Vivid auroras like those seen in Blakley's images are caused by charged particles from the sun (the solar wind) that interact with the Earth's upper atmosphere (at altitudes above 50 miles, or 80 km), causing a glow. The particles are drawn to Earth's polar regions by the planet's magnetic field. The auroras over the North Pole are known as the aurora borealis, or northern lights. The lights over the South Pole are known as the aurora australis, or southern lights. When the aurora is most active, it creates a spectacular display of bright colors called the aurora corona.
Blackmon brings to light one of the most shameful chapters in American history--the re-enslavement of black Americans from the Civil War to World War II--in a moving, sobering account that explores the insidious legacy of white racism that reverberates today. James Anderson critically reinterprets the history of southern black education from Reconstruction to the Great Depression. By placing black schooling within a political, cultural, and economic context, he offers fresh insights into black commitment to education, the peculiar significance of Tuskegee Institute, and the conflicting goals of various philanthropic groups, among other matters. Initially, ex-slaves attempted to create an educational system that would support and extend their emancipation, but their children were pushed into a system of industrial education that presupposed black political and economic subordination. Because blacks lacked economic and political power, white elites were able to control the structure and content of black elementary, secondary, normal, and college education during the first third of the twentieth century. Nonetheless, blacks persisted in their struggle to develop an educational system in accordance with their own needs and desires. Chronicling the emergence of deeply embedded notions of black people as a dangerous race of criminals by explicit contrast to working-class whites and European immigrants, this book reveals the influence such ideas have had on urban development and social policies. Richard Rothstein's The Color of Law offers "the most forceful argument ever published on how federal, state, and local governments gave rise to and reinforced neighborhood segregation" (William Julius Wilson). Exploding the myth of de facto segregation arising from private prejudice or the unintended consequences of economic forces, Rothstein describes how the American government systematically imposed residential segregation: with undisguised racial zoning; public housing that purposefully segregated previously mixed communities; subsidies for builders to create whites-only suburbs; tax exemptions for institutions that enforced segregation; and support for violent resistance to African Americans in white neighborhoods. A groundbreaking, "virtually indispensable" study that has already transformed our understanding of twentieth-century urban history (Chicago Daily Observer), The Color of Law forces us to face the obligation to remedy our unconstitutional past. Once America's arsenal of democracy, Detroit over the last fifty years has become the symbol of the American urban crisis. In this reappraisal of racial and economic inequality in modern America, Thomas Sugrue explains how Detroit and many other once prosperous industrial cities have become the sites of persistent racialized poverty. He challenges the conventional wisdom that urban decline is the product of the social programs and racial fissures of the 1960s. Probing beneath the veneer of 1950s prosperity and social consensus, Sugrue traces the rise of a new ghetto, solidified by changes in the urban economy and labor market and by racial and class segregation. He focuses on urban neighborhoods, where white working-class homeowners mobilized to prevent integration as blacks tried to move out of the crumbling and overcrowded inner city. The author weaves together the history of workplaces, unions, civil rights groups, political organizations, and real estate agencies.
Emmy Noether may not be a household name, but her compatriot Albert Einstein — someone who definitely is — once called her "the most significant creative mathematical genius thus far produced since the higher education of women began." Noether, born in a small town in Germany in 1882, would have been 133 on Monday, and Google is celebrating her life with a doodle. She is credited with revolutionizing the fields of mathematics and physics with her theory of noncommutative algebras, where answers are determined by the order in which numbers are multiplied. Born on March 23, 1882, Emmy Noether was a German mathematician known for her groundbreaking contributions to abstract algebra and theoretical physics. Described by Pavel Alexandrov, Albert Einstein, Jean Dieudonné, Hermann Weyl, Norbert Wiener, and many others as the most important female mathematician in history, she revolutionized the theory of rings, fields, and algebras. Noether's theorem explains the fundamental link between conservation laws and symmetry. She was born into a Jewish family in the Bavarian town of Erlangen; her father was the mathematician Max Noether. After passing the required exams in French and English she had planned to teach, but in the end she studied mathematics at the University of Erlangen, where her father lectured. After finishing her thesis in 1907 under the supervision of Paul Gordan, she worked at the Mathematical Institute in Erlangen for seven years without salary (at that time, women were largely excluded from academic positions). In 1915 she was invited by David Hilbert and Felix Klein to join the mathematics department at the University of Göttingen, a world-famous center of mathematical research. The university administration objected, however, and for four more years she lectured under Hilbert's name. In 1919 she was finally granted the right to lecture, whereby she was able to take the title of Privatdozent (in the German university system, a title whose holder could give lessons independently, without holding a professorship).
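To get a feel for what "noncommutative" means here, the following is a tiny illustrative sketch in Python (a generic example of the idea, not a reconstruction of Noether's own work) using 2×2 matrices, a textbook instance of a noncommutative algebra:

    import numpy as np

    # In a noncommutative algebra, the order of multiplication matters:
    # A times B need not equal B times A.
    A = np.array([[1, 2],
                  [0, 1]])
    B = np.array([[1, 0],
                  [3, 1]])

    print(A @ B)  # [[7 2], [3 1]]
    print(B @ A)  # [[1 2], [3 7]]

The two products differ, so multiplication here depends on order, unlike ordinary multiplication of numbers.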
If you want to get the Universe we see, a multiverse comes along for the ride. When we look out at the Universe today, it simultaneously tells us two stories about itself. One of those stories is written on the face of what the Universe looks like today, and includes the stars and galaxies we have, how they're clustered and how they move, and what ingredients they're made of. This is a relatively straightforward story, and one that we've learned simply by observing the Universe we see. But the other story is how the Universe came to be the way it is today, and that's a story that requires a little more work to uncover. Sure, we can look at objects at great distances, and that tells us what the Universe was like in the distant past: when the light that's arriving today was first emitted. But we need to combine that with our theories of the Universe — the laws of physics within the framework of the Big Bang — to interpret what occurred in the past. When we do that, we see extraordinary evidence that our hot Big Bang was preceded and set up by a prior phase: cosmic inflation. But in order for inflation to give us a Universe consistent with what we observe, there's an unsettling appendage that comes along for the ride: a multiverse. Here's why physicists overwhelmingly claim that a multiverse must exist. Back in the 1920s, the evidence became overwhelming that not only were the copious spirals and ellipticals in the sky actually entire galaxies unto themselves, but that the farther away such a galaxy was determined to be, the greater the amount its light was shifted to systematically longer wavelengths. While a variety of interpretations were initially suggested, they all fell away with more abundant evidence until only one remained: the Universe itself was undergoing cosmological expansion, like a loaf of leavening raisin bread, where bound objects like galaxies (e.g., raisins) were embedded in an expanding Universe (e.g., the dough). If the Universe was expanding today, and the radiation within it was being shifted towards longer wavelengths and lower energies, then in the past, the Universe must have been smaller, denser, more uniform, and hotter. As long as any amount of matter and radiation are a part of this expanding Universe, the idea of the Big Bang yields three explicit and generic predictions: - a large-scale cosmic web whose galaxies grow, evolve, and cluster more richly over time, - a low-energy background of blackbody radiation, left over from when neutral atoms first formed in the hot, early Universe, - and specific ratios of the lightest elements — hydrogen, helium, lithium, and their various isotopes — that exist even in regions that have never formed stars. All three of these predictions have been observationally borne out, and that's why the Big Bang reigns supreme as our leading theory of the origin of our Universe, as well as why all its other competitors have fallen away. However, the Big Bang only describes what our Universe was like in its very early stages; it doesn't explain why it had those properties. In physics, if you know the initial conditions of your system and what the rules that it obeys are, you can predict extremely accurately — to the limits of your computational power and the uncertainty inherent in your system — how it will evolve arbitrarily far into the future. But what initial conditions did the Big Bang need to have at its beginning to give us the Universe we have?
It’s a bit of a surprise, but what we find is that: - there had to be a maximum temperature that’s significantly (about a factor of ~1000, at least) lower than the Planck scale, which is where the laws of physics break down, - the Universe had to have been born with density fluctuations of approximately the same magnitude of all scales, - the expansion rate and the total matter-and-energy density must have balanced almost perfectly: to at least ~30 significant digits, - it must have been born with the same initial conditions — same temperature, density, and spectrum of fluctuations — at all locations, even causally disconnected ones, - and its entropy must have been much, much lower than it is today, by a factor of trillions upon trillions. Whenever we come up against a question of initial conditions — basically, why did our system start off this way? — we only have two options. We can appeal to the unknowable, saying that it is this way because it’s the only way it could’ve been and we can’t know anything further, or we can try to find a mechanism for setting up and creating the conditions that we know we needed to have. That second pathway is what physicists call “appealing to dynamics,” where we attempt to devise a mechanism that does three important things. - It has to reproduce every success that the model it’s trying to supersede, the hot Big Bang in this instance, produces. Those earlier cornerstones must all come out of any mechanism we propose. - It has to explain what the Big Bang cannot: the initial conditions the Universe started off with. These problems that remain unexplained within the Big Bang alone must be explained by whatever novel idea comes along. - And it has to make new predictions that differ from the original theory’s predictions, and those predictions must lead to a consequence that is in some way observable, testable, and/or measurable. The only idea we’ve had that met these three criteria was the theory of cosmic inflation, which has achieved unprecedented successes on all three fronts. What inflation basically says is that the Universe, before it was hot, dense, and filled with matter-and-radiation everywhere, was in a state where it was dominated by a very large amount of energy that was inherent to space itself: some sort of field or vacuum energy. Only, unlike today’s dark energy, which has a very small energy density (the equivalent of about one proton per cubic meter of space), the energy density during inflation was tremendous: some 10²⁵ times greater than dark energy is today! The way the Universe expands during inflation is different from what we’re familiar with. In an expanding Universe with matter and radiation, the volume increases while the number of particles stays the same, and hence the density drops. Since the energy density is related to the expansion rate, the expansion slows over time. But if the energy is intrinsic to space itself, then the energy density remains constant, and so does the expansion rate. The result is what we know as exponential expansion, where after a very small period of time, the Universe doubles in size, and after that time passes again, it doubles again, and so on. In very short order — a tiny fraction of a second — a region that was initially smaller than the smallest subatomic particle can get stretched to be larger than the entire visible Universe today. During inflation, the Universe gets stretched to enormous sizes. 
This accomplishes a tremendous number of things in the process, among them: - stretching the observable Universe, irrespective of what its initial curvature was, to be indistinguishable from flat, - taking whatever initial conditions existed in the region that began inflating, and stretching them across the entire visible Universe, - creating minuscule quantum fluctuations and stretching them across the Universe, so that they're almost the same on all distance scales, but slightly smaller-magnitude on smaller scales (when inflation is about to end), - converting all that "inflationary" field energy into matter-and-radiation, but only up to a maximum temperature that's well below the Planck scale (but comparable to the inflationary energy scale), - creating a spectrum of density and temperature fluctuations that exist on scales larger than the cosmic horizon, and that are adiabatic (of constant entropy) and not isothermal (of constant temperature) everywhere. This reproduces the successes of the non-inflationary hot Big Bang, provides a mechanism for explaining the Big Bang's initial conditions, and makes a slew of novel predictions that differ from a non-inflationary beginning. Beginning in the 1990s and through the present day, the inflationary scenario's predictions agree with observations, distinct from the non-inflationary hot Big Bang. The thing is, there's a minimum amount of inflation that must occur in order to reproduce the Universe we see, and that means there are certain conditions that inflation has to satisfy in order to be successful. We can model inflation as a hill, where as long as you stay on top of the hill, you inflate, but as soon as you roll down into the valley below, inflation comes to an end and transfers its energy into matter and radiation. If you do this, you'll find that there are certain "hill-shapes," or what physicists call "potentials," that work, and others that don't. The key to making it work is that the top of the hill needs to be flat enough in shape. In simple terms, if you think of the inflationary field as a ball atop that hill, it needs to roll slowly for the majority of inflation's duration, only picking up speed and rolling quickly when it enters the valley, bringing inflation to an end. We've quantified how slowly inflation needs to roll, which tells us something about the shape of this potential. As long as the top is sufficiently flat, inflation can work as a viable solution to the beginning of our Universe. But now, here's where things get interesting. Inflation, like all the fields we know of, has to be a quantum field by its very nature. That means that many of its properties aren't exactly determined, but rather have a probability distribution to them. The more time you allow to pass, the greater the amount that distribution spreads out. Instead of rolling a point-like ball down a hill, we're actually rolling a quantum probability wavefunction down a hill. Simultaneously, the Universe is inflating, which means it's expanding exponentially in all three dimensions. If we were to take a 1-by-1-by-1 cube and call that "our Universe," then we could watch that cube expand during inflation. If it takes some tiny amount of time for the size of that cube to double, then it becomes a 2-by-2-by-2 cube, which requires 8 of the original cubes to fill. Allow that same amount of time to elapse, and it becomes a 4-by-4-by-4 cube, needing 64 original cubes to fill. Let that time elapse again, and it's an 8-by-8-by-8 cube, with a volume of 512.
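To make the cube-counting concrete, here is a one-line-of-logic Python sketch (just an illustration of the arithmetic in the example above):

    # Each doubling time doubles the side of the original 1x1x1 cube,
    # so the number of original cubes needed to fill it grows as (2^n)^3 = 8^n.
    for n in (1, 2, 3, 100):
        print(f"{n:>3} doublings -> {8**n:.3e} original cubes")

Three doublings give the 8, 64, and 512 of the example; one hundred give about 2 × 10⁹⁰.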
After only about ~100 “doubling times,” we’ll have a Universe with approximately 10⁹⁰ original cubes in it. So far, so good. Now, let’s say we have a region where that inflationary, quantum ball rolls down into the valley. Inflation ends there, that field energy gets converted to matter-and-radiation, and something that we know as a hot Big Bang occurs. This region might be irregularly shaped, but it’s required that enough inflation occurred to reproduce the observational successes we see in our Universe. The question becomes, then, what happens outside of that region? Here’s the problem: if you mandate that you get enough inflation that our Universe can exist with the properties we see, then outside of the region where inflation ends, inflation will continue. If you ask, “what is the relative size of those regions,” you find that if you want the regions where inflation ends to be big enough to be consistent with observations, then the regions where it doesn’t end are exponentially larger, and the disparity gets worse as time goes on. Even if there are an infinite number of regions where inflation ends, there will be a larger infinity of regions where it persists. Moreover, the various regions where it ends — where hot Big Bangs occur — will all be causally disconnected, separated by more regions of inflating space. Put simply, if each hot Big Bang occurs in a “bubble” Universe, then the bubbles simply don’t collide. What we wind up with is a larger and larger number of disconnected bubbles as time goes on, all separated by an eternally inflating space. That’s what the multiverse is, and why scientists accept its existence as the default position. We have overwhelming evidence for the hot Big Bang, and also that the Big Bang began with a set of conditions that don’t come with a de facto explanation. If we add in an explanation for it — cosmic inflation — then that inflating spacetime that set up and gave rise to the Big Bang makes its own set of novel predictions. Many of those predictions are borne out by observation, but other predictions also arise as consequences of inflation. One of them is the existence of a myriad of Universes, of disconnected regions each with their own hot Big Bang, that comprise what we know as a multiverse when you take them all together. This doesn’t mean that different Universes have different rules or laws or fundamental constants, or that all the possible quantum outcomes you can imagine occur in some other pocket of the multiverse. It doesn’t even mean that the multiverse is real, as this is a prediction we cannot verify, validate, or falsify. But if the theory of inflation is a good one, and the data says it is, a multiverse is all but inevitable. You may not like it, and you really may not like how some physicists abuse the idea, but until a better, viable alternative to inflation comes around, the multiverse is very much here to stay. Now, at least, you understand why. Starts With A Bang is written by Ethan Siegel, Ph.D., author of Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.
Temperature Distribution of Oceans - The study of the temperature of the oceans is important for determining the - movement of large volumes of water (vertical and horizontal ocean currents), - type and distribution of marine organisms at various depths of oceans, - climate of coastal lands, etc. Source of Heat in Oceans - The sun is the principal source of energy (insolation). - The ocean is also heated by the inner heat of the ocean itself (the earth's interior is hot; at the sea surface, the crust is only about 5 to 30 km thick). But this heat is negligible compared to that received from the sun. How do deep-water marine organisms survive in spite of the absence of sunlight? - The photic zone extends only a few hundred meters down; its depth depends on many factors, such as turbidity and the presence of algae. - There are not enough primary producers below a few hundred meters, all the way to the ocean bottom. - At the sea bottom, there are bacteria that make use of the heat supplied by the earth's interior to prepare food. So, they are the primary producers. - Other organisms feed on these primary producers and subsequent secondary producers. - So, the heat from the earth supports a wide range of deep-water marine organisms. But the productivity is very low compared to the ocean surface. Why is the diurnal range of ocean temperatures so small? Why do oceans take more time to heat or cool? - The process of heating and cooling of oceanic water is slower than that of land due to vertical and horizontal mixing and the high specific heat of water. - (More time is required to heat a kg of water than to heat the same unit of a solid at the same temperature and with an equal energy supply.) The ocean water is heated by three processes: - Absorption of the sun's radiation. - Convectional currents: since the temperature of the earth increases with increasing depth, the ocean water at great depths is heated faster than the upper water layers, so convectional oceanic circulations develop, causing circulation of heat in the water. - Heat produced by friction caused by the surface wind and the tidal currents, which increase stress on the water body. The ocean water is cooled by: - Back radiation (heat budget) from the sea surface, as the solar energy once received is reradiated as long-wave radiation (terrestrial or infrared radiation) from the seawater. - Exchange of heat between the sea and the atmosphere if there is a temperature difference. - Evaporation: heat is lost in the form of latent heat of evaporation (the atmosphere gains this heat in the form of latent heat of condensation). Factors Affecting Temperature Distribution of Oceans - Insolation: the average daily duration of insolation and its intensity. - Heat loss: the loss of energy by reflection, scattering, evaporation and radiation. - Albedo: the albedo of the sea (depending on the angle of the sun's rays). - The physical characteristics of the sea surface: the boiling point of sea water is raised in the case of higher salinity and vice versa [salinity increased == boiling point increased == evaporation decreased]. - The presence of submarine ridges and sills [marginal seas]: temperature is affected due to lesser mixing of waters on the opposite sides of the ridges or sills. - The shape of the ocean: latitudinally extensive seas in low-latitude regions have warmer surface water than longitudinally extensive seas [the Mediterranean Sea records a higher temperature than the longitudinally extensive Gulf of California].
- The enclosed seas (marginal seas – gulfs, bays, etc.) in the low latitudes record relatively higher temperatures than the open seas, whereas the enclosed seas in the high latitudes have lower temperatures than the open seas.
- Local weather conditions such as cyclones.
- Unequal distribution of land and water: the oceans in the northern hemisphere receive more heat due to their contact with a larger extent of land than the oceans in the southern hemisphere.
- Prevalent winds generate horizontal and sometimes vertical ocean currents: winds blowing from the land towards the oceans (off-shore winds, moving away from the shore) drive warm surface water away from the coast, resulting in the upwelling of cold water from below (this happens near the Peruvian coast in normal years).
- Contrary to this, onshore winds (winds flowing from the ocean toward the land) pile up warm water near the coast, and this raises the temperature (this happens near the Peruvian coast during an El Nino event; in normal years, north-eastern Australia and the western Indonesian islands see this kind of warm ocean water due to the Walker Cell or Walker Circulation).
- Ocean currents: warm ocean currents raise the temperature in cold areas while cold currents decrease the temperature in warm ocean areas. The Gulf Stream (warm current) raises the temperature near the eastern coast of North America and the west coast of Europe, while the Labrador Current (cold current) lowers the temperature near the north-east coast of North America (near Newfoundland).
All these factors influence ocean temperatures locally.

Vertical Temperature Distribution of Oceans
- The photic or euphotic zone extends from the surface to ~200 m and receives adequate solar insolation.
- The aphotic zone extends from 200 m to the ocean bottom; this zone does not receive adequate sunlight.
- The temperature profile shows a boundary region between the surface waters of the ocean and the deeper layers.
- This boundary usually begins around 100–400 m below the sea surface and extends several hundred meters downward.
- This boundary region, where temperature decreases rapidly, is called the thermocline. About 90 per cent of the total volume of ocean water is found below the thermocline in the deep ocean. In this zone, temperatures approach 0°C.
- The temperature structure of the oceans over middle and low latitudes can be described as a three-layer system from surface to bottom.
- The first layer is the top layer of warm oceanic water; it is about 500 m thick, with temperatures ranging between 20°C and 25°C. Within the tropics this layer is present throughout the year, but in mid-latitudes it develops only during summer.
- The second layer, called the thermocline layer, lies below the first and is characterized by a rapid decrease in temperature with increasing depth. The thermocline is 500–1,000 m thick.
- The third layer is very cold and extends to the deep ocean floor. Here the temperature is almost uniform.
- Within the Arctic and Antarctic circles, surface water temperatures are close to 0°C, so the change of temperature with depth is very slight (ice is a very bad conductor of heat). Here, only one layer of cold water exists, extending from the surface to the deep ocean floor.
The rate of decrease of temperature with depth is greater at the equator than at the poles.
- The surface temperature and its downward decrease are influenced by the upwelling of bottom water (near the Peruvian coast in normal years).
- In cold Arctic and Antarctic regions, cold water sinks and moves towards lower latitudes.
- In equatorial regions the surface water sometimes exhibits lower temperature and salinity due to high rainfall, whereas the layers below it have higher temperatures.
- The enclosed seas in both the lower and higher latitudes record higher temperatures at the bottom.
- The enclosed seas of low latitudes, like the Sargasso Sea, the Red Sea and the Mediterranean Sea, have high bottom temperatures due to high insolation throughout the year and lesser mixing of warm and cold waters.
- In the high-latitude enclosed seas, the bottom layers of water are warmer because water of slightly higher salinity and temperature moves in from the open ocean as a sub-surface current.
- The presence of submarine barriers may lead to different temperature conditions on the two sides of the barrier. For example, at the Strait of Bab-el-Mandeb, the submarine barrier (sill) has a height of about 366 m; the subsurface water in the strait is nearly 20°C warmer than water at the same depth in the Indian Ocean.

Horizontal Temperature Distribution of Oceans
- The average surface-water temperature of the oceans is about 27°C, and it gradually decreases from the equator towards the poles.
- The rate of decrease with increasing latitude is generally about 0.5°C per degree of latitude.
- The horizontal temperature distribution is shown by isotherms, i.e., lines joining places of equal temperature.
- Isotherms are closely spaced where the temperature difference is high, and vice versa.
- For example, in February the isotherms are closely spaced south of Newfoundland and near the west coast of Europe and the North Sea, then widen out to make a bulge towards the north near the coast of Norway. The cause of this lies in the cold Labrador Current flowing southward along the North American coast, which lowers temperatures there more sharply than elsewhere at the same latitude, while the warm Gulf Stream raises temperatures along the western coast of Europe.

Range of Ocean Temperature
- The oceans and seas heat and cool more slowly than land surfaces. Therefore, even though solar insolation is at its maximum at noon, the ocean surface temperature is highest at about 2 p.m.
- The average diurnal (daily) range of temperature is barely 1°C in oceans and seas.
- The highest surface-water temperature occurs at about 2 p.m. and the lowest at about 5 a.m.
- The diurnal range of temperature is highest when the sky is free of clouds and the atmosphere is calm.
- The annual range of temperature is influenced by the annual variation of insolation, the nature of ocean currents and the prevailing winds.
- The maximum and minimum temperatures in the oceans are slightly delayed compared with those of land areas (the maximum being in August and the minimum in February) [think about why intense tropical cyclones occur mostly between August and October – the case is slightly different in the Indian Ocean due to its shape].
- The northern Pacific and northern Atlantic oceans have a greater range of temperature than their southern parts, due to differences in the force of prevailing winds from the land and the more extensive ocean currents in the southern parts of the oceans.
- Besides annual and diurnal ranges, there are also periodic fluctuations of sea temperature. For example, sea temperatures rise and fall roughly in step with the 11-year sunspot cycle.
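To make the horizontal-distribution rule of thumb above concrete (an average of about 27°C at the equator, falling by roughly 0.5°C per degree of latitude), here is a minimal sketch; the floor at -2°C, the approximate freezing point of seawater, is an added assumption, and real surface temperatures deviate because of the currents, winds and enclosed seas discussed above.

```python
def surface_temperature_estimate(latitude_deg: float) -> float:
    """Rule-of-thumb sketch: ~27 C surface water at the equator, cooling
    by ~0.5 C per degree of latitude. The -2 C floor (approximate freezing
    point of seawater) is an added assumption for illustration only."""
    return max(27.0 - 0.5 * abs(latitude_deg), -2.0)

for lat in (0, 15, 30, 45, 60):
    print(f"latitude {lat:>2} deg: ~{surface_temperature_estimate(lat):5.1f} C")
```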
As any real science nerd will tell you (or anyone who watched Jurassic Park during their formative years), birds are really dinosaurs. So it stands to reason that by studying how a modern bird--say, a chicken--walks, we should be able to quickly figure out how dinosaurs roamed the earth. Not so fast, all you Jack Horner wannabes. You've forgotten a key fact: many dinosaurs had long tails that chickens don't have. These tails would have drastically altered the center of mass, and thus the locomotion, of dinosaurs. But don't worry. These scientists have solved this quandary by raising chickens with artificial tails and studying how they walk. Check out the video above for all the chickeny goodness.

The study: "Walking Like Dinosaurs: Chickens with Artificial Tails Provide Clues about Non-Avian Theropod Locomotion."

"Birds still share many traits with their dinosaur ancestors, making them the best living group to reconstruct certain aspects of non-avian theropod biology. Bipedal, digitigrade locomotion and parasagittal hindlimb movement are some of those inherited traits. Living birds, however, maintain an unusually crouched hindlimb posture and locomotion powered by knee flexion, in contrast to the inferred primitive condition of non-avian theropods: more upright posture and limb movement powered by femur retraction. Such functional differences, which are associated with a gradual, anterior shift of the centre of mass in theropods along the bird line, make the use of extant birds to study non-avian theropod locomotion problematic. Here we show that, by experimentally manipulating the location of the centre of mass in living birds, it is possible to recreate limb posture and kinematics inferred for extinct bipedal dinosaurs. Chickens raised wearing artificial tails, and consequently with more posteriorly located centre of mass, showed a more vertical orientation of the femur during standing and increased femoral displacement during locomotion. Our results support the hypothesis that gradual changes in the location of the centre of mass resulted in more crouched hindlimb postures and a shift from hip-driven to knee-driven limb movements through theropod evolution. This study suggests that, through careful experimental manipulations during the growth phase of ontogeny, extant birds can potentially be used to gain important insights into previously unexplored aspects of bipedal non-avian theropod locomotion."
Geoelectrical and seismic prospecting are among the best-known and most widely used geophysical surveys for detecting buried structures, locating aquifers, studying pollutants and finding landfills. The geoelectrical technique consists in measuring the apparent resistivity at different points in the soil using an array of electrodes. Computer processing of these measurements allows the results to be viewed as explanatory tomographic images. To measure the resistivity, an electric current is injected into the soil through one or two current electrodes, and the potential difference is recorded between two voltage-measurement electrodes. The ratio between the voltage and the current, multiplied by a correction coefficient that depends on the electrode geometry, gives the resistivity of the soil between the two voltage electrodes. There are various geometric arrangements for these electrodes (dipole, tripole, and the Wenner and Schlumberger quadrupoles). It is also possible to acquire resistivity profiles (horizontal electric soundings): the array keeps a fixed geometry and is shifted as a whole along a predetermined path. Operating this way, a constant depth of soil is investigated, highlighting any lateral variations in the subsurface. Alternatively, you can perform a vertical electric sounding: the centre of the array is kept fixed and the distance between the current electrodes is progressively increased, increasing the investigation depth. It is therefore possible to reconstruct a terrain profile using empirical methods, which allow tomographic sections to be obtained, as shown in the image below:
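To make the "voltage over current times a geometric coefficient" relation concrete, here is a minimal sketch for the Wenner array mentioned above, for which the correction coefficient is k = 2πa at electrode spacing a; the survey numbers in the example are invented for illustration.

```python
import math

def wenner_apparent_resistivity(voltage_v: float, current_a: float,
                                spacing_m: float) -> float:
    """Apparent resistivity (ohm*m) for a Wenner array: with four equally
    spaced electrodes the geometric correction factor is k = 2*pi*a,
    so rho_a = k * V / I."""
    k = 2 * math.pi * spacing_m
    return k * voltage_v / current_a

# Illustrative (made-up) reading: a = 10 m spacing, 0.5 A injected, 120 mV measured
print(f"{wenner_apparent_resistivity(0.120, 0.5, 10.0):.1f} ohm*m")  # ~15.1 ohm*m
```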
Otitis externa, better known as swimmer’s ear, is a common summertime ear infection. This infection is found in the outer ear, the portion of the ear canal that runs from the eardrum to the outside of the head. Swimmer’s ear is not the same as a middle ear infection (otitis media), which is also common in children. Water left in the ear canal after swimming in pools or lakes, or after bathing, is one of the most common causes of swimmer’s ear. Contaminated water carrying bacteria enters the ear, and since bacteria need a moist place to grow, the ear is an ideal environment for swimmer’s ear to thrive.

Your ear has natural defenses to prevent infection: earwax accumulates dead skin cells and other debris and carries them to the opening of the ear to keep it clean; a thin, slightly acidic watery substance that discourages bacterial growth lines the ear canal; and the small opening to the outside of the ear helps prevent foreign objects from entering. The ear canal has a thin layer of protective skin that can be injured by cotton swabs, hairpins or fingernails inserted in the ear. This can lead to bacterial invasion and subsequent infection. Ear devices such as earbuds or hearing aids, which can cause tiny breaks in the skin, can also leave the ear susceptible to bacterial invasion.

Symptoms are classified as mild, moderate, and advanced.
Mild symptoms include redness and slight itching in the ear canal and drainage of clear, odorless fluid.
Moderate symptoms include pain in the ear canal, more extensive redness in the ear, a feeling of fullness inside the ear, decreased or muffled hearing and excessive fluid drainage.
Advanced symptoms include redness or swelling of the outer ear, fever, swelling of the lymph nodes in the neck, severe pain radiating to the face, neck, or side of the head and complete blockage of the ear canal.

Swimmer’s ear isn’t usually serious if treated promptly. Some serious complications can arise if it is left untreated, such as:
- Temporary hearing loss – muffled hearing until the infection clears.
- Deep tissue infection (cellulitis) (rare).
- Rare but life-threatening osteomyelitis of the bone and surrounding cartilage. This infection can spread to the brain and surrounding nerves, and is more likely in older and diabetic patients.
- Long-term infection (chronic otitis externa). An outer ear infection is usually considered chronic if signs and symptoms persist for more than three months. Chronic infections are more common if there are conditions that make treatment difficult, such as a rare strain of bacteria, an allergic skin reaction, an allergic reaction to antibiotic eardrops, a skin condition such as dermatitis or psoriasis, or a combination of a bacterial and a fungal infection.

Prevention & Treatment
To avoid swimmer’s ear, keep ears as dry as possible and:
- Use a bathing cap, ear plugs, or custom-fitted swim molds when swimming.
- Use a towel to dry ears well.
- Tilt head back and forth so that each ear faces down to allow water to escape the ear canal.
- Pull the earlobe in different directions when the ear faces down to help water drain out.
- If there is still water in the ear, consider using a hair dryer to move air within the ear canal.
- Put the hair dryer on the lowest heat and speed/fan setting and hold it several inches from the ear.
- Swim wisely. Don’t swim in lakes or rivers on days when warnings of high bacteria counts are posted.
- Check with your healthcare provider about using ear-drying drops after swimming.
- DON’T use these drops if you have ear tubes, punctured ear drums, swimmer’s ear, or ear drainage.
- DON’T put objects in the ear canal (including cotton-tip swabs, pencils, paperclips, or keys).
- DON’T try to remove ear wax. Ear wax helps protect the ear canal from infection.
- If you think the ear canal could be blocked by ear wax, check with your healthcare provider.
- They may recommend an in-home earwax removal kit or have you come in to have the earwax removed.
- You may be prescribed antibiotic ear drops to kill the invasive bacteria or anti-fungal eardrops to combat the infection, and steroids to help reduce swelling.
The most common inflammatory bowel diseases are Crohn’s disease and ulcerative colitis. Lack of certain nutrients can contribute to the development of these conditions. On the other hand, the diseases and the therapies used to treat them may also impair the body’s ability to absorb or utilize certain nutrients, starting a vicious cycle that can make the disease worse. This was demonstrated in a new Greek study published in Nutrients. Chronic inflammatory bowel diseases occur primarily in Western countries and especially at northern latitudes, which suggests that sun exposure and typical Western diets play a major role in the development of these diseases.

Our gut flora harbors trillions of bacteria that are essential for our general health. Our gut also constitutes a significant part of our complicated immune defense, which fights hordes of microorganisms and toxins every single day. Acute inflammation is a normal and vital response, carried out in elaborate teamwork involving proinflammatory cytokines, white blood cells, and antibodies. It is essential, however, that the immune defense not overreact with a chronic inflammatory response. This may damage healthy tissue and cause oxidative stress, an imbalance between free radicals and protective antioxidants. Chronic bowel inflammation may be triggered by bacteria that are normally present in the gut or by a derailed immune defense. This can cause redness, swelling, and sores in the mucosa. There may also be symptoms such as abdominal pain, diarrhea, weight loss, and fatigue. Some people get aching joints, skin diseases, and other symptoms if the inflammation spreads to other parts of the body. In severe cases, the disease leads to repeated operations, surgical removal of parts of the intestine, and a stoma.

IBD (inflammatory bowel disease) is the umbrella term for these chronic bowel diseases. Crohn’s disease typically affects the last part of the small intestine (ileum) and/or the large intestine (colon). Subsequent scarring may occur in combination with intestinal narrowing, or fistulas may form where the inflammation creates channels between the intestine and adjacent organs such as the bladder. Ulcerative colitis (bleeding colitis) always begins in the rectum, from where the inflammation can spread up into the colon – but never into the small intestine. Typical symptoms are painful bowel movements and bloody diarrhea. Anemia is a common accompanying problem.

Chronic inflammatory bowel disease and lack of certain nutrients
Chronic inflammatory bowel diseases are most common in Western countries and at northern latitudes, especially among women. The diseases are characterized by a constant imbalance between pro-inflammatory and anti-inflammatory cytokines, which derails the immune defense. Chronic inflammation releases cascades of free radicals that cause oxidative stress and peroxidation of lipids like cholesterol, which can be very harmful. Chronic inflammatory bowel diseases destroy quality of life, and the associated nutrient deficiencies are linked to prolonged hospitalization, complications during surgery, and increased mortality. Lack of appetite, unhealthy diets, poor nutrient uptake, and blood loss all contribute to nutrient deficiencies, lack of energy, weight loss, poor muscle strength, and other symptoms.
Patients with chronic inflammatory bowel diseases have an increased risk of lacking vitamin D, folic acid (vitamin B9), vitamin B12, and iron, according to a new study of 89 Greek patients with inflammatory bowel disease. The study is published in Nutrients and supports earlier research in which scientists have focused on nutrients and their role in the development of inflammatory bowel disease.

The link between sunlight, vitamin D, and chronic inflammation
The sun is our primary source of vitamin D. However, at northern latitudes we can only synthesize vitamin D during the summer period, when the sun sits sufficiently high in the sky. Many people become vitamin D-deficient during the winter, and if they fail to get enough sun exposure in the spring and summer, their deficiency becomes chronic. People with chronic inflammatory bowel disease typically live at northern latitudes in the United States and Europe where there is less sun, and lack of vitamin D may be a possible cause of such diseases. Vitamin D regulates around ten percent of our genes. This matters for various on-off switches that are also relevant for the white blood cells of our immune system. Vitamin D is therefore extremely important for our ability to fight germs and pathogens quickly and effectively. Moreover, vitamin D helps prevent the production of pro-inflammatory cytokines. Lack of vitamin D is linked to an increased risk of infections and chronic inflammatory bowel diseases like Crohn’s disease and ulcerative colitis, but also rheumatoid arthritis and other autoimmune diseases. On the other hand, it appears that supplementation with large doses of vitamin D may help prevent or mitigate these conditions by strengthening the immune defense and inhibiting unwanted inflammation.

Folic acid (vitamin B9) and vitamin B12
Folic acid and vitamin B12 are both important for blood formation. If you lack one or both nutrients, it may cause anemia, fatigue, and several other symptoms because the cells do not get enough oxygen. Folic acid and vitamin B12 are also involved in the conversion of homocysteine, a byproduct of protein metabolism. Lack of these B vitamins may therefore result in elevated homocysteine levels in the blood, which is associated with an increased risk of cardiovascular disease. Vitamin B12’s role in the nervous system receives less attention, yet a deficiency may cause fatigue, dementia-like symptoms, and neurological disorders. Many people are not aware that B vitamins are important for the immune defense, but studies of mice show that lack of folic acid can actually reduce levels of certain white blood cells in the gut that regulate inflammation. The body absorbs folic acid in the middle part of the small intestine (jejunum), but local inflammation can impair the uptake. Lack of folic acid can also be a result of poor dietary habits, alcohol and stimulant abuse, pregnancy, smoking, ageing, and birth control pills. Recent studies suggest that therapies in which patients are given methotrexate or sulfasalazine can block the uptake and utilization of folic acid. The uptake of vitamin B12 in the lower part of the small intestine (ileum) requires the presence of a carrier protein called intrinsic factor. Vitamin B12 deficiencies are quite common among patients suffering from Crohn’s disease, because that part of the small intestine is often damaged. Lack of B12 may also result from eating a strictly plant-based diet, as the vitamin is primarily found in food sources of animal origin.
Treatment with metformin against diabetes may also cause a vitamin B12 deficiency.

Iron
Ulcerative colitis can cause an iron deficiency due to blood loss. However, it is important not to take iron supplements unless a physician has detected a deficiency with a blood sample.

Diet and chronic inflammation
In connection with the prevention and treatment of chronic inflammatory bowel disease, it is also important to get plenty of omega-3. These essential fatty acids have anti-inflammatory properties due to their influence on certain hormone-like compounds called prostaglandins. Oily fish and fish oil supplements contain EPA (eicosapentaenoic acid) and DHA (docosahexaenoic acid), which we humans can easily utilize. A study from Taiwan has demonstrated that low blood levels of omega-3 fatty acids are associated with pain and other symptoms caused by chronic inflammatory bowel diseases. It is important not to consume too much omega-6 from plant oils, margarine, and ready meals, as omega-6 can promote inflammation. Chronic inflammatory bowel diseases are characterized by oxidative stress, and one should therefore make sure to get plenty of antioxidants such as vitamins A, C, and E plus minerals like selenium and zinc. There are numerous cookbooks on the market with good advice and useful recipes for anti-inflammatory diets.

References:
Aristea Gioxari et al. "Serum Vitamins D, B9 and B12 in Greek Patients with Inflammatory Bowel Diseases." Nutrients, 2020.
Koji Hosomi and Jun Kunisawa. "The Specific Roles of Vitamins in the Regulation of Immunosurveillance and Maintenance of Immunologic Homeostasis in the Gut." Immune Network, 2017.
Cheryl Tay. "Abdominal pain in IBS: A lack of omega-3 could be the culprit, says Taiwan study." NUTRAingredients-Asia.com, 2018.
Historical Figure Presentations
This semester in my upper-level early national and Jacksonian US class (1790-1848), I tried out an alternative to a short paper: oral presentations on historical figures, each focused on one primary source by that figure. Students chose from a list of figures I provided that would mesh well with the reading for that week, and I gave a sample presentation early in the semester to model what I was looking for. To encourage students to take notes, I gave them notetaking worksheets and explained that they could bring these to the midterm and final. They would serve as cheat sheets for short answer questions asking them what selected pairs of figures would discuss. You can skip down to the prompt, or read about pluses and minuses first.

Some big advantages of this kind of project:
-it’s student-centered learning–not just a buzzword, but something the students specifically said they liked
-gives students a stake in the work–some of them got really excited about the people they researched
-encourages independent research and analysis of primary sources
-builds public speaking skills
-less grading–I filled out a rubric as they spoke and just made a few additional notes before giving a grade
-created a very successful set of exam questions which drew out thoughtful and creative replies

Some drawbacks:
-getting students to use books and scholarly articles was difficult; perhaps setting a required number would help
-students often found great images of historical documents online, but if the transcription wasn’t posted with the images, they didn’t do good research to find it (esp. if it was only available in a book)
-they generally copied the format of my sample talk
-the biggest miss was on the supplemental images, objects or documents–they would show images, often from later eras, as illustrations but failed to analyze them

The prompt:
For this assignment, you will choose one historical figure from the list of options for your chosen date and find one primary source document by or about that person. You will then craft an oral presentation that connects the person and source to that week’s theme. Your presentation must include:
- Brief background on the life and significance of the historical figure
- One primary source document, a copy of which you can post digitally or hand out paper copies
- Two supplementary images, objects, or documents
- Analysis of your source and supplementary items in connection to the theme and reading for that week
- Visuals of some sort, via handouts or digital presentation
- Bibliography of web and scholarly sources
The presentation can take any form you would like: a PowerPoint with a talk; an historical reenactment; a short film or media presentation. You may work individually or in pairs. Individual presentations should last 7-10 minutes, while those crafted by a pair should be 15 minutes. A list of websites and databases which will be useful for locating primary sources is available on Canvas. Cite all sources. Your biography should draw from reputable scholarly sources; do not rely on websites. You will also take notes on each presentation on a worksheet, which you will be able to bring with you to the midterm and final.
I scored the presentations based on the following:
- Visual Aids
- Primary Sources and Supplements
- Knowledge of Content

Figures as options, by week:
4: Thomas Jefferson, Aaron Burr
5: Abigail Adams, Mercy Otis Warren, Judith Sargent Murray, Phillis Wheatley
6: James Monroe, John Quincy Adams, Tecumseh
8: Nat Turner, Harriet Jacobs, William Lloyd Garrison
10: Joseph Smith, Charles Finney, John Jacob Astor, Robert Fulton
11: Andrew Jackson, Henry Clay, Margaret Eaton, John Ross (Cherokee chief)
13: Ralph Waldo Emerson, Nathaniel Hawthorne, Elizabeth Peabody, Emily Dickinson
The biologist David Deamer proposes that life evolved from a collection of interacting molecules, probably in a pool in the shadow of a volcano. By Emily Singer March 17, 2016 For the past 40 years, David Deamer has been obsessed with membranes. Specifically, he is fascinated by cell membranes, the fatty envelopes that encase our cells. They may seem unremarkable, but Deamer, a biochemist at the University of California, Santa Cruz, is convinced that membranes like these sparked the emergence of life. As he envisions it, they corralled the chemicals of the early Earth, serving as an incubator for the reactions that created the first biological molecules. One of the great initial challenges in the emergence of life was for simple, common molecules to develop greater complexity. This process resulted, most notably, in the appearance of RNA, long theorized to have been the first biological molecule. RNA is a polymer — a chemical chain made up of repeating subunits — that has proved extremely difficult to make under conditions similar to those on the early Earth. Deamer’s team has shown not only that a membrane would serve as a cocoon for this chemical metamorphosis, but that it might also actively push the process along. Membranes are made up of lipids, fatty molecules that don’t dissolve in water and can spontaneously form tiny packages. In the 1980s, Deamer showed that the ingredients for making these packages would have been readily available on the early Earth; he isolated membrane-forming compounds from the Murchison meteorite, which exploded over Australia in 1969. Later, he found that lipids can help form RNA polymers and then enclose them in a protective coating, creating a primitive cell. Over the past few years, Deamer has expanded his membrane-first approach into a comprehensive vision for how life emerged. According to his model, proto-cells on the early Earth were made up of different components. Some of these components could help the proto-cell, perhaps by stabilizing its protective membranes or giving it access to an energy supply. At some point, one or more RNAs developed the ability to replicate, and life as we know it began to stir. Deamer thinks that volcanic landmasses similar to those in Iceland today would have made a hospitable birthplace for his proto-cells. Freshwater pools scattered across steamy hydrothermal fields would be subject to regular rounds of heating and cooling. That cycle could have concentrated the necessary ingredients — including both lipids and the building blocks for RNA — and provided the energy needed to stitch those building blocks into biological polymers. Deamer is now trying to re-create these conditions in the lab. His goal is to synthesize RNA and DNA polymers. Quanta Magazine spoke with Deamer at a conference on the origins of life in Galveston, Texas, earlier this year. An edited and condensed version of that conversation follows. QUANTA MAGAZINE: What have been the biggest accomplishments of researchers seeking to understand life’s origins? What questions remain to be solved? DAVID DEAMER: We have really made progress since the 1950s. We have figured out that the first life originated at least 3.5 billion years ago, and my guess is that primitive life probably emerged as early as 4 billion years ago. We also know that certain meteorites contain the basic components of life. But we still don’t know how the first polymers were put together. Scientists disagree over how to define life. 
NASA has come up with a working definition: an evolving system that can make more of itself. Is that sufficient? Life resists a simple abstract definition. When I try to define life, I put together a set of a dozen properties that don’t fit anything not alive. A few of them are simple: reproduction, evolution, and metabolism. Many scientists study individual steps in the emergence of life, such as how to make RNA. But you argue that life is a system, and it began as a system. Why? DNA is the center of all life, but it can’t be considered alive even though it has all the information required to make a living thing. DNA cannot reproduce by itself. Put DNA in a test tube with water, and it just slowly breaks into different pieces. So right away, you see the limitation of thinking about single molecules as being alive. To get a bit of what we call growth, you have to add the subunits of DNA, an enzyme to replicate the DNA, and energy to power the reaction. Now we have molecules that can reproduce themselves if they have certain ingredients. Are they alive yet? The answer is still no, because sooner or later the subunits are used up and reproduction comes to a screeching halt. So how do we get to a system that’s really alive? That’s what we and others are trying to do. The only way we can think of is to put DNA into a membranous compartment. Why are compartments so important? A car doesn’t function unless you’ve enclosed it; you need to keep the pieces in place. For the origin of life, you can’t have evolution without isolated systems—compartments that are competing for energy and nutrients. It’s like giving chemists chemicals but no test tubes. You can’t do chemistry without a compartment. On the early Earth, each membrane was an experiment in life. What do you think Earth looked like when life emerged? There was a global ocean, probably salty, with volcanic landmasses resembling Hawaii or Iceland or even Olympus Mons on Mars. Precipitation on the islands produced freshwater pools that were heated to boiling by geothermal energy, then cooled to ambient temperature by runoff. Contemporary examples include the hydrothermal fields I have visited in Kamchatka, in Russia, and Bumpass Hell on Mount Lassen, in California, where we do field work. Why would these pools have been a likely birthplace for life? Organic compounds accumulated in the pools, washed there by precipitation that rained down on the volcanic landmasses. The pools went through wetting and drying cycles, forming a concentrated film of organic compounds on the rocks like the ring in a bathtub. Within that film, interesting things can happen. Lipids can self-assemble into membrane-like structures, and the subunits of RNA or other polymers join together to create long chains. You’ve found that lipids can help form RNA. How does this work? We have developed a method for joining together the individual subunits of RNA to make a long chain. We start with the molecules AMP, adenosine monophosphate, and UMP, uridine monophosphate, which are two of the building blocks of RNA. In water, the subunits simply dissolve and can’t form longer chains. We discovered that if you trap the AMP subunits between layers of lipids, the subunits line up. When you dry them, they form a polymer. The wet-dry cycle also creates lipid droplets that encapsulate the polymers. Now we’re trying to recreate that process in the lab under the sort of conditions you’d find in a hydrothermal field. 
We use half-hour wet-dry cycles to simulate what happens at the edge of pools. We have shown we can make polymers ranging from 10 to over 100 units. And you believe this is what happened on Earth? We are testing the possibility that what we see in the lab can also unfold in a site that resembles early Earth, such as Bumpass Hell on Mount Lassen. My colleague Bruce Damer and I were up there last September, testing whether the hot gases coming out of a fumarole could drive the reaction that makes RNA polymers. The results are very preliminary and need to be repeated, but we did see evidence of polymers. You likened the droplets to test tubes, with each being an experiment in life. What would qualify as a successful experiment?

[Figure caption: Deamer proposes that biological molecules evolved in hydrothermal pools. Layers of lipids would build up on the edges of the pool, trapping chemicals and encouraging the growth of RNA. Lipid-bound droplets would then peel off from these layers, creating RNA-filled proto-cells.]

The idea is that each [droplet] will enclose a mixture of random polymers. Rare proto-cells may house collections of polymers with specific functional properties. For example, some polymers might help stabilize the cell membrane, extending its lifespan. Others might make pores in the membrane, allowing nutrients to enter the cell. Still others might catalyze reactions, converting those nutrients into something the cell needs. These RNA-based enzymes are called ribozymes. We want to see if we can detect functional polymers among the trillions of random-sequence polymers we generate. What would be the most exciting possible discovery in this system? To get the thing to replicate would be a big deal. To do that, we need a ribozyme that makes our polymerization reaction go faster. But we have a long way to go before we can find that kind of ribozyme. Once scientists are capable of making life in the lab, will we understand how life originated on Earth? We’ll probably be able to make lab life, but I’m not sure we can claim that’s how life began. The life we’re trying to synthesize is going to be a very technical life, based in a lab with clean reagents and so forth. I’m not sure we can call that the origin of life until it becomes a self-growing system, until we put that system in an outside environment and watch it grow. Although we will never know with certainty how life did begin, it seems eminently possible that we will understand how life can begin on any habitable planet, such as the early Earth and perhaps Mars. This article was reprinted on TheAtlantic.com.
Earth scientists spend a lot of time thinking about how to find other planets. But how often do any of us stop to think about how the inhabitants of other planets might find us? Doing just that could, in turn, improve the methods we use to spot planets revolving around distant stars. Identifying planets is challenging because they aren’t usually bright enough to see, so scientists have to rely on novel methods, looking around possible planets for a variety of clues as to their existence, position and size. A cooperative effort between NASA and the University of Maryland has resulted in a fascinating computer simulation showing just what alien civilizations might see when training their telescopes toward our solar system. The simulation looks at the cloud of dust surrounding the solar system (the Kuiper Belt) and uses a telltale gap in that cloud to pick out the orbit of Neptune. The model even lets astronomers look back in time to see what the solar system might have looked like hundreds of millions of years ago. Of course, since no one has ever actually seen the solar system from this extreme distance, there is a chance the model could be significantly off-base. Regardless, the data used to build the simulation could help Earthlings spot similar conditions in other star systems and identify Neptune-sized planets.
The atmosphere is a gaseous envelope surrounding and protecting our planet from the intense radiation of the Sun and serves as a key interface between the terrestrial and ocean cycles. The biosphere encompasses all life on Earth and extends from root systems to mountaintops and all depths of the ocean. It is critical for maintaining species diversity, regulating climate, and providing numerous ecosystem functions. The cryosphere encompasses the frozen parts of Earth, including glaciers and ice sheets, sea ice, and any other frozen body of water. The cryosphere plays a critical role in regulating climate and sea levels. The human dimensions discipline includes ways humans interact with the environment and how these interactions impact Earth’s systems. It also explores the vulnerability of human communities to natural disasters and hazards. The land surface discipline includes research into areas such as shrinking forests, warming land, and eroding soils. NASA data provide key information on land surface parameters and the ecological state of our planet. The ocean covers more than two-thirds of Earth’s surface and contains 97% of the planet’s water. This vast, critical reservoir supports a diversity of life and helps regulate Earth’s climate. Processes occurring deep within Earth are constantly shaping landforms. Although originating from below the surface, these processes can be analyzed from ground, air, or space-based measurements. The Sun influences a variety of physical and chemical processes in Earth’s atmosphere. NASA continually monitors solar radiation and its effect on the planet. The terrestrial hydrosphere includes water on the land surface and underground in the form of lakes, rivers, and groundwater, along with total water storage.
What is Curing of Concrete? A Study of the Various Curing Methods

Cement, sand, aggregate, and water mixed in the right proportions form a composite material known as concrete. The chemical reaction that takes place between cement and water binds the aggregate. Fresh concrete behaves plastically and can therefore be molded into any desired shape and compacted to form a dense mass. It is important to place concrete in its final position before it starts losing its plasticity. The final setting time of concrete is the time at which the concrete completely loses its plasticity and becomes hard.

Curing of Concrete
Curing plays an extremely important role in concrete strength development and durability. When water is added to the concrete mix (cement, sand and aggregate), an exothermic reaction (hydration) takes place, which helps the concrete harden. Hardening is not a quick process; it continues over a long period, which means a continuing demand for water for the hydration process. So, the process of keeping the concrete moist until the hydration reaction is complete is called curing of concrete. It can also be described as the process of protecting the concrete from loss of moisture due to atmospheric temperature and the hydration reaction; it controls the rate and extent to which moisture is lost from concrete during cement hydration.

Why is curing of concrete important?
After water is added to the mix, the hydration process starts, and the exothermic reaction releases heat, making the concrete dry out quickly. For the hydration process to complete, concrete must be kept moist so that it attains its maximum strength.

What is the procedure for curing of concrete?
Concrete can be cured by keeping water flowing over, or ponded on, its surface. Water cooler than about 5°C is not suitable for curing: because concrete releases heat during the hydration reaction, very cold water can shock the concrete and lead to cracking and failure. Alternate drying and wetting of the concrete surface causes volume changes, which again result in cracking.

What is the time taken for curing of concrete?
It takes 28 days of curing for concrete to attain its maximum design strength. In the first 3-7 days, concrete attains about 50% of its design strength. By 14 days, the compressive strength reaches about 75%, and by 28 days it reaches about 90% of the design strength. Strength continues to increase with time thereafter.

Minimum curing time required for cement concrete
The early strength gain of concrete strongly influences its ultimate strength. Curing must be done properly, keeping in mind environmental conditions, the type of structural member, and the atmospheric temperature. It is extremely important to maintain the right temperature in the concrete; it should not fall below 5°C. Ideally, concrete should be kept moist for 28 days. Due to time constraints, curing in practice is often limited to 14-20 days; it is best to keep the concrete moist for at least 14 days. According to IS 456-2000, concrete must not be cured for less than 7 days for ordinary Portland cement, and for at least 10 days for concrete made with mineral admixtures or blended cement. If the weather is hot and the temperature is high, curing must not be done for less than 10 days for OPC and 14 days for concrete with blended cement or mineral admixtures.
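As a rough illustration of the strength milestones just quoted (about 50% of design strength by day 7, 75% by day 14 and 90% by day 28), here is a minimal sketch; the linear interpolation between milestones is an assumption made purely for illustration, not a curve from any design code.

```python
def strength_fraction(age_days: float) -> float:
    """Rough strength-gain sketch using the milestones quoted above:
    ~50% of design strength by day 7, ~75% by day 14, ~90% by day 28.
    Linear interpolation between milestones is an illustrative assumption."""
    milestones = [(0, 0.0), (7, 0.50), (14, 0.75), (28, 0.90)]
    if age_days >= 28:
        return 0.90
    for (d0, f0), (d1, f1) in zip(milestones, milestones[1:]):
        if age_days <= d1:
            return f0 + (f1 - f0) * (age_days - d0) / (d1 - d0)

for day in (7, 14, 21, 28):
    print(f"day {day:>2}: ~{strength_fraction(day):.0%} of design strength")
```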
What are the factors on which curing time depends?
- Specified strength of concrete
- Grade of concrete
- Atmospheric temperature
During summer, more water is required for the hydration process, as up to 50% of the water can evaporate on sunny days. The size and shape of concrete members also affect the curing time.

Methods for concrete curing:
1. Ponding: This method is adopted for floor slabs. The concrete surface is divided into small ponds, which are kept filled with water continuously for 14 days.
2. Wet coverings: This method is used for columns, footings and the bottom surfaces of slabs, where ponding is not possible. Moisture-retaining coverings such as gunny bags or hessian are used; these are then sprayed with water to keep the concrete moist.
3. Membrane curing of concrete: In places where the atmospheric temperature is high, ponding is not suitable, as the water evaporates in the extreme heat. Membrane curing avoids this loss of water: an impermeable layer formed on the surface of the concrete resists evaporation. The curing compound is applied by brushing or spraying it onto the concrete surface.
Curing compounds that can be used for membrane curing:
- Synthetic resin curing compound: forms an impermeable membrane on the concrete surface that resists the evaporation of water. The synthetic resin can easily be removed by spraying hot water on the surface, so it is suitable where such later treatments will be applied to the concrete.
- Acrylic curing compound: a polymer-based compound obtained from polymers of acrylic acid. Acrylic compounds are favorable because they need not be removed before plastering; acrylic actually improves the adhesion of the plaster.
- Wax curing compound: similar in properties to synthetic resin. Wax must not be used on surfaces to be painted or tiled, as it hampers adhesion between the surface and the plaster or tiles.
- Chlorinated rubber curing compound: forms a thick membrane on the concrete surface when laid, sealing the concrete very well and leaving no pores. Bear in mind, however, that chlorinated rubber membranes have a short life and do not last long.
4. Steam curing of concrete: This process is adopted at precast concrete plants where concrete members are mass-produced. Steam carries both heat and moisture; applied to the concrete surface, it keeps the concrete moist and raises its temperature, quickening the hardening of the concrete.
5. Infrared radiation for curing of concrete: This method is used in cold climatic regions. Applying infrared radiation raises the initial temperature of the concrete, increasing its rate of strength gain. This method can be even more effective than steam curing, because raising the initial temperature this way does not reduce the ultimate strength of the concrete. For hollow concrete members, heaters operating at about 90°C are placed inside the members.
6. Electric current for concrete curing: In this process, curing is accelerated by passing an alternating current through the concrete.
Two plates, one placed on top and the other at the bottom of the concrete, act as electrodes through which the alternating current is passed. A potential difference of 30V to 60V is maintained between the electrodes. With this method, the curing process, which usually takes 28 days, can be completed within about 3 days.
The Theaetetus is the first of seven dialogues chronologically leading up to the death of Socrates. It is an extraordinarily unique dialogue in the Platonic corpus. At the outset, the dialogue is framed as a conversation between Euclides and Terpsion of Megara around three decades after the initial conversation between Socrates and Theaetetus had taken place (when Theaetetus was fifteen years old). Recall the similar framing of the narrative used in the Symposium. The dialogue is unique in that Euclides claims to have confirmed its contents with Socrates during the troubled time of his trial, making it the closest thing in existence to a dialogue vetted by Socrates himself (the Republic is also debatable). At any rate, both Euclides and Terpsion had heard of Theaetetus, a man renowned for his bravery and patriotism, as evidenced by his decision, upon being wounded at Corinth, to travel back to Athens to die. The two tired and weary Megarians listen to the dialogue as if it were a relic, hand-copied exactly as it happened. Theaetetus was also well known for his geometric demonstrations.

Why do we have a dialogue focusing on Theaetetus? Why does it not focus on Theodorus, for example? Nothing is accidental in a Platonic dialogue, and the decision to focus the title, as well as much of the content of the dialogue, on the character of Theaetetus is not excused from this principle. Theaetetus is a young man of fifteen when speaking with Socrates, and like many other young men he is captivated by the works of Protagoras and the geometry of Theodorus of Cyrene. In a word, he is impressionable. He is a well-born man, but lamentably, his guardians have wasted much of his substance. At the start of the dialogue he is willing and eager and naive, but by the close we find vague traces of rebelliousness - a danger that is exposed in the Socratic dialectic.

The conversation between Theaetetus and Socrates is focused on the question of knowledge (episteme) - what is it? Theaetetus attempts to defend the Protagorean position that knowledge is nothing other than perception (he provides essentially three definitions that are proven false, and a fourth definition, through the mouth of Socrates, that is also proven insufficient). Each definition is found inadequate on the grounds that knowledge is being, while other things like opinions or perceptions are relative, or in a state of becoming. To step back for a moment, recall Socrates’s reluctant exchange in the Protagoras, in which Socrates defends the notion that virtue is knowledge and that it cannot be taught, while Protagoras claims that virtue is not knowledge but that it can be taught (he also claims to be its teacher). If virtue cannot be taught and is instead something innate or genetic, what does that mean for the sustainability of the city? Enter the need for the natural hierarchy of people found in the Republic during the formation of Kallipolis, in which there are three races of people: gold, silver, and bronze (borrowed from earlier Hesiodic writings). The gold are the high-born, naturally superior, philosophic types. This hierarchy, meant to match natural law as closely as possible, is the result of a belief that virtue cannot be taught. Other forms of government, like democracy, rely instead on a belief that virtue can be taught; as Jefferson once noted, the future of the republic relies on a virtuous citizenry. Socrates’s claims undermine this principle.
However, returning to the beginning with our initial question regarding Theaetetus, perhaps the prologue is again illuminating. Theaetetus is renowned outside of Athens for his bravery and courage, less so for his geometry. Others are perplexed about why he would choose to return to Athens while injured, instead of remaining in Corinth to properly heal and recover. It could have been a foolhardy thing to do, but the distinction between patriotism and foolhardiness is hazy. Patriotism is valued in the hearts of the broader citizenry; therefore, his death in returning to Athens was to give Athens the greater glory. Is it possible that this story of an older Theaetetus is evidence that virtue is teachable? Could it be that his conversation with Socrates some thirty years prior had impressed upon him a sense of virtue? And if we maintain Socrates’s claim that virtue is knowledge, then we also find some success with Theaetetus in his work on incommensurable magnitudes in geometry.

In exploring this question with Theaetetus, Socrates is also famously examining himself, since he claims to be devoid of knowledge - except in knowing what he does not know. In this way, Socrates actually has quite an extensive knowledge. However, knowledge cannot be geometry, arithmetic, and the arts, as these are all a “swarm” of examples, whereas Socrates is looking for the “hive” (Meno). Knowledge cannot be perception, because then the madman, the dreamer, the healthy, and the unhealthy would all perceive things differently and each would be correct (relativism), rendering knowledge meaningless. Knowledge cannot be an impression, as the images of the “wax block” and the “birdcage” demonstrate. Lastly, knowledge cannot be true opinion, nor true opinion with logos, for then knowledge could at best be a guess, never certain; and as stated previously, Socrates is in search of being, not becoming.

Ultimately, the dialogue concludes not with a sufficient definition of knowledge, but rather with an eye toward the good the conversation has brought upon Theaetetus, for it will make him more manly and courageous in the long run. The distinction between form and content is sharpened, and the effectual truth of their discussion has been demonstrated by Theaetetus’s courage in battle at Corinth. The philosophic life is exemplified at its best in the Theaetetus: it makes us more aware of our ignorance, but less fearful of it; and more aware of our finitude, but less burdened by it. In this way, the progress made in the dialogue is toward a better way of being for Theaetetus.

For this reading I used Seth Benardete’s translation of Plato’s Theaetetus.
Keep track of constitutional principles with a graphic organizer. Pupils define, describe the origins of, and note down the location of the following terms: checks and balances, federalism, individual rights, limited government, popular sovereignty, republicanism, and separation of powers. - Students can use their textbook to look up the principles and their origins as a homework assignment - Have partners work together to complete the graphic organizer and then discuss the results as a class - The chart requires individuals to either recall information or look it up and can act as a reference material for later use and study - The boxes provided for writing are somewhat small, so you may need to suggest that learners use bullet points or abbreviations
Some of the rich diamond deposits in the Northwest Territories may have been formed as a result of ancient seawater streaming into the deep roots of the continent, transported by plate tectonics, suggests new research from an international team of scientists in Canada, the U.S. and the U.K. The discovery further highlights the role played by plate tectonics in “recycling” surface materials into deep parts of the earth, building on the groundbreaking discovery by a University of Alberta team last year of vast quantities of water trapped more than 500 kilometres underground. “With the ringwoodite discovery, we showed there is a lot of water trapped in really deep parts of the Earth, which probably all came from recycling ocean water,” explains Graham Pearson, professor in the U of A’s Department of Earth and Atmospheric Sciences and Canada Excellence Research Chair in Arctic Resources. “This new study really highlights that process—it clearly demonstrates that ocean water in this case has been subducted via an old oceanic slab into a slightly shallower but still very deep part of the Earth. From there it has pumped that brine into the bottom of the root beneath the Northwest Territories, and it’s made the diamonds.” Ugly diamonds are a researcher’s best friend The Northwest Territories is home to rich deposits of high-quality gem diamonds as well as so-called “low-quality” diamonds, which are covered in a coat of cloudy material. “They’re kind of ugly things,” laughs Pearson. “But all the most interesting diamonds are.” All diamonds are formed from fluids, but only these less attractive coated stones still contain traces of their scientifically valuable source fluids. “[The fluids in the coats] are sky-high in sodium and potassium and chlorine, and it’s very difficult to get that stuff from the Earth’s normal mantle,” says Pearson. “It’s a big mystery—where does that come from? Well, we can show that maybe the most sensible place for it to come from is seawater, which is basically a sodium chloride solution.” Pearson notes that this captive seawater likely became trapped in a massive slab of the Earth’s oceanic crust that was subducted beneath North America some hundreds of millions of years ago. The interaction of these seawater brines with the overlying mantle rocks produced a chemically diverse range of fluids from which diamonds crystallized, and could then be carried back to the Earth’s surface via an erupting host volcanic rock known as a kimberlite. These fluid-rich diamonds provide scientists with the most pristine examples of deep Earth fluids—from around 200 km beneath Earth’s surface. “The beauty of the diamond is that because it’s such a robust capsule, it protects the material that it trapped at that depth from any subsequent change,” says Pearson. “It literally carries pristine bits of material from right where it came from, essentially unchanged.” New facets of understanding Although high-quality gem diamonds are normally estimated to have been formed three billion to 3.5 billion years ago, these poor-quality, fluid-rich diamonds appear to be just a few hundred million years old—significantly younger in the Earth’s geological timeline. One theory to explain this age difference is that the two types of diamonds are actually formed by similar processes, and then over time the fluid-rich stones transform into the gem diamonds. Pearson and his team plan to do further studies on the fluids found in these diamonds to test this model. 
"What we appear to be finding more and more is that the standard model that used to be around—diamonds are only formed in very ancient times, 3.5 billion years ago, by a very specific process—is not true," says Pearson. "There are more processes that form diamonds, at a whole range of different times, than we thought possible." Understanding more about how diamonds form can shape exploration models of how to find them, offering clues to help locate further deposits. Canada is the world's third-largest diamond producer by value, and the majority of the product comes from the Northwest Territories, where mining is a significant contributor to the territory's economy. The findings were published in Nature: "Highly saline fluids from a subducting slab as the source for fluid-rich diamonds," Nature 524, 339–342 (20 August 2015), DOI: 10.1038/nature14857.
10 in 10: Complete the 10 questions in 10 minutes.
4 + 9 + 1 =
4 + 2 + 6 =
72 + 20 =
20 + 6 =
23 – 7 =
64 + 27 =
__ + 40 = 100
28 + __ = 75
82 – 40 =
__ – 67 = 19

Times Tables Rockstars: Log in here and complete the session set.

Learning objective: I can identify lines of symmetry in 2D shapes.

I have put some words in bold and I would like you to think about what they mean.
Lines of symmetry _____________________________________________________
2D shapes ____________________________________________________________

Identify means to find and recognise something. Lines of symmetry are lines that split a shape exactly in half, with both sides of the shape being exactly the same. If we try to split a shape in half and the two halves are not the same, the shape is not symmetrical and does not have a line of symmetry. 2D shapes are completely flat; they have sides and vertices. An example of a 2D shape is a triangle.

Watch this video from the 'Oak National Academy' to learn more about lines of symmetry. Once you have watched the video, complete the workbook pages attached below; the answers are also attached for you to check your work.
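For anyone who wants to check symmetry answers programmatically, here is a minimal Python sketch (my addition, not part of the worksheet) that tests whether a 2D shape, drawn as a grid of 0s and 1s, has a vertical line of symmetry. The grid shapes are made-up examples.

```python
# Minimal sketch: test a 2D shape (grid of 0s and 1s) for a vertical
# line of symmetry by comparing each row to its mirror image.

def has_vertical_symmetry(grid):
    """Return True if every row reads the same forwards and backwards."""
    return all(row == row[::-1] for row in grid)

# A triangle pointing up: symmetrical about its vertical centre line.
triangle = [
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
]

# An L-shape: no vertical line of symmetry.
l_shape = [
    [1, 0, 0],
    [1, 0, 0],
    [1, 1, 1],
]

print(has_vertical_symmetry(triangle))  # True
print(has_vertical_symmetry(l_shape))   # False
```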
Who was JJ Thomson?

JJ Thomson was an English physicist who discovered the electron in 1897. Thomson was born in December 1856 in Manchester, England and was educated at the University of Manchester and then the University of Cambridge, graduating with a degree in mathematics. Thomson made the switch to physics a few years later and began studying the properties of cathode rays. In addition to this work, Thomson also performed the first-ever mass spectrometry experiments, discovered the first isotope and made important contributions both to the understanding of positively charged particles and to electrical conductivity in gases. Thomson did most of this work while leading the famed Cavendish Laboratory at the University of Cambridge. Although he was awarded the Nobel Prize in physics and not chemistry, Thomson's contributions to the field of chemistry are numerous. For instance, the discovery of the electron was vital to the development of chemistry today, and it was the first subatomic particle to be discovered.

What is a cathode ray tube and why was it important?

Prior to the discovery of the electron, several scientists had posited that atoms might be made up of smaller pieces. Yet until Thomson, no one had determined what these might be. Cathode rays played a critical role in unlocking this mystery. Thomson determined that cathode rays were made up of charged particles much lighter than atoms, particles that we now call electrons. Cathode rays form when a voltage is applied in a vacuum and electrons emitted from one electrode travel to the other. Thomson also determined the mass-to-charge ratio of the electron using a cathode ray tube, another significant discovery.

How did Thomson make these discoveries?

Thomson was able to deflect the cathode ray towards a positively charged plate and so deduce that the particles in the beam were negatively charged. Then Thomson measured how much magnetic fields of various strengths bent the particles. Using this information, Thomson determined the mass-to-charge ratio of an electron. These were the two critical pieces of information that led to the discovery of the electron. Thomson was now able to determine that the particles in question were much smaller than atoms, but still highly charged. Thomson finally proved atoms were made up of smaller components, something scientists had puzzled over for a long time. Thomson called the particles "corpuscles", not electrons; the name electron was suggested by George Francis Fitzgerald.

Why was the discovery of the electron important?

The discovery of the electron was the first step in a long journey towards a better understanding of the atom and chemical bonding. Although Thomson didn't know it, the electron would turn out to be one of the most important particles in chemistry. We now know the electron forms the basis of all chemical bonds. In turn, chemical bonds are essential to the reactions taking place around us every day. Thomson's work provided the foundation for the work done by many other important scientists such as Einstein, Schrodinger, and Feynman.

Interesting Facts about JJ Thomson

Not only did Thomson receive the Nobel Prize in physics in 1906, but his son Sir George Paget Thomson won the prize in 1937. A year earlier, in 1936, Thomson wrote an autobiography called "Recollections and Reflections". He died in 1940 and is buried near Isaac Newton and Charles Darwin. JJ stands for "Joseph John". Strangely, another author named JJ Thomson wrote a book with the same title in 1975.
Thomson had many famous students, including Ernest Rutherford.
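As a rough illustration of the deflection method described above (my addition, not from the article), here is a minimal Python sketch of the standard crossed-fields analysis: balancing the electric and magnetic forces gives the beam speed v = E/B, and the radius of curvature r under the magnetic field alone then gives the charge-to-mass ratio q/m = E/(B²·r). The field values and radius below are made-up illustrative numbers.

```python
# Minimal sketch of the crossed-fields estimate of the electron's
# charge-to-mass ratio, as in Thomson-style deflection experiments.
# Balanced fields:  qE = qvB      ->  v = E / B
# Magnetic bending: r = m*v/(q*B) ->  q/m = E / (B**2 * r)

E = 2.0e4   # electric field strength, V/m   (illustrative value)
B = 1.0e-3  # magnetic flux density, T       (illustrative value)
r = 0.11    # radius of curvature, m         (illustrative value)

v = E / B                  # beam speed when the two deflections cancel
q_over_m = E / (B**2 * r)  # charge-to-mass ratio, C/kg

print(f"beam speed v ~ {v:.2e} m/s")
print(f"q/m ~ {q_over_m:.2e} C/kg (accepted value ~ 1.76e11 C/kg)")
```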
How climate change has affected coral reefs – and how it could destroy whole taxonomic classes in the next 30 years. A collection of FAQs on corals and bleaching, and all the reasons we have to save them.

What is a coral?

A coral is an animal, just like us. A single coral can contain millions of tiny animals called polyps. The polyps have a symbiotic (mutually beneficial) relationship with photosynthetic algae called zooxanthellae: the polyps supply shelter and protection for the zooxanthellae, and the zooxanthellae use photosynthesis to produce food for the coral polyps.

What is coral bleaching?

Coral bleaching is a process that occurs when water temperatures rise above normal. As a result, the polyp begins to expel everything from its system, including the zooxanthellae (much like an ill human). Without the algae, the outer "skin" of the coral becomes translucent, and the inner skeleton becomes visible (hence "coral bleaching"). At this point, the coral is still alive, but without the zooxanthellae it will begin to starve. Bleached corals no longer grow or allow the algae to return, and they are susceptible to disease. The bleaching process can happen in a short period of time (months). A coral is dead when thick algae appears on its surface.

Why should you care?

- Between 500,000,000 and 1,000,000,000 people rely on coral reefs for their main food source or for a primary income.
- Reefs protect coastal cities and communities from tropical storms, waves, and erosion by creating a natural barrier.
- They host organisms that have the potential to cause medical breakthroughs. Some drugs derived from reef organisms can fight cardiovascular disease, inflammation, ulcers, and even cancer. Read more here: Medicine Chests of the Sea – Coral.Org
- Some countries, especially small islands, rely on tourism. Reefs bring tourists from around the globe and allow countries to collect millions of dollars to support their economies – and take care of their reefs.
- Reefs are the tropical rainforests of the ocean. They boast a vast amount of biodiversity. An estimated 1/4 of all sea life spends at least some of its life on reefs. If that doesn't seem like a lot to you, consider that there are around 40,000 known species of reef fish and 1,000 species of corals – and an estimated 1-8 billion species we have not yet discovered.

How bad is it?

Pretty bad. Or at least, it's projected to be. In the next 30 years, almost all of the world's corals could be dead. Global bleaching events have become more and more frequent. Since 2014, annual bleaching has been observed worldwide, destroying corals in the northern Great Barrier Reef, the Hawaiian Islands, and the Caribbean Islands, among others. After the 2016 bleaching event, 67% of the northern region of the Great Barrier Reef had bleached – according to Chasing Coral (a 2017 documentary), that was the equivalent of losing most of the trees between Maine and Washington, D.C. Corals can come back from a bleaching event if given enough time, but under current conditions they are unable to keep up and repair themselves. Overfishing, pollution, and ocean acidification have also been destructive to reefs.

What would the world look like without coral reefs?

Do you remember the first time you snorkeled or saw a reef on television? What was it like? Do you remember the colors, the fish and the various corals along the bottom? If you have only visited a reef in recent years, you might not remember it this way.
Right now, we are at a crossroads, and reefs could become ruins. Without reefs, poor countries will lose economic security, either from loss of catch or from decreased tourism. Communities in coastal regions will lose food security. Biodiversity will be destroyed. Algae will cover the ocean floor and fewer fish will visit the reef. The ocean will be duller. We will see it disappear in our lifetime.

What can be done? What can YOU do?

If we continue with current trends, corals will suffer and die worldwide. It doesn't have to end like this. The difference is people choosing to take action. Here are some things you can do:
- Dedicate time to learn about coral reefs. NOAA is a great place to find information on our oceans and the creatures that live in them.
- Choose sustainable seafood – this is a great guide.
- If you go snorkeling or diving, never touch or step on corals. They are alive. Use reef-safe sunscreen that doesn't contain oxybenzone or octinoxate, which cause bleaching (like this Alba Botanica Mineral Sunscreen – $8.99 at Target).
- Participate in a beach cleanup – or do one yourself. Bring a bucket the next time you go to the beach. Remember to use less plastic as well.
- Do more to reduce your carbon footprint. For example, take public transportation, eat less meat (a big one), and use less energy. Be efficient with what you have.
- Urge your government to take action by participating in protests, sending letters, or signing petitions. You can do this through organizations like Greenpeace or Oceana.
- Contribute to research. For example, The Ocean Agency is a great organization that uses communication to push conservation; it also helped create the 50 Reefs initiative.
- Be aware and be active. Spread the word. It's not too late.

All information was taken from memory, education, or from external sources listed in the article or below. Sources below also include valuable sites and information.
Our writing curriculum intends to:
- Develop a rich vocabulary and bold language choices
- Enable the communication of facts, ideas and emotions to others
- Enable accurate punctuation, grammar and a sound knowledge of spelling
- Ensure children can write accurately for a range of purposes
- Ensure that presentation and organisation of texts are appropriate and effective.
Condensation through water vapour diffusion

Water vapour diffusion is a slow and natural process that occurs in any construction. Water vapour moves from the indoor environment, with its higher water vapour pressure, to the drier outside environment. Only a limited amount of water vapour is transported. Condensation occurs only when the water vapour comes into contact with a cold, vapour-proof layer. When working from vapour-proof on the inside to vapour-permeable on the outside, the amount of vapour is limited and it can dry out easily.

Condensation through convection

In practice, condensation issues are usually due to a poorly executed construction, mainly in terms of airtightness. In case of air leaks, or when the insulation material is not properly attached to the structure, a convection current may form. This is the natural process in which warm, moist air travels until it hits the cold outer surface of the structure and bounces back. When this air cools, condensation forms against the outer surface. The amount of condensation is many times greater than with water vapour diffusion and, as a result, it damages the structure instead of simply drying out. Airtightness is therefore essential for avoiding air leaks (and the convection they cause).

Rules of thumb to keep condensation from forming in the construction

Make sure the construction is airtight, to exclude convection currents that may give rise to condensation. Always use a vapour barrier on the inside of the construction. The outside of the construction is preferably as vapour-permeable as possible, to allow any condensation water to evaporate. Condensation caused by water vapour diffusion doesn't have to be a major issue as long as the total amount is limited (< 150 g/m²) and the construction dries out naturally every year during the warmer months. When a vapour barrier on the inside is not an option, it is possible to install the barrier within the structure under certain conditions. This frequently occurs when an existing roof construction with wool on the outside is insulated additionally in accordance with the Sarking principle. Sarking insulation must always be installed with a continuous vapour barrier. The rule of thumb implemented by the WTCB (NL)/CSTC (FR) provides a safe assessment: the R-value of the outer layer must be at least 1.5 x the R-value of the inner layer. This keeps the temperature on the inside of the vapour barrier high enough that no condensation can form.

Importance of ventilation

The natural transport of vapour from the inside to the outside can never replace a ventilation system. A well-dimensioned ventilation system contributes to a healthy indoor climate and limits the amount of vapour in the indoor air, for instance by evacuating the moist air after a shower.
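To see why the 1.5x rule keeps the vapour barrier warm, here is a minimal Python sketch (my illustration, not from the source) of the standard steady-state estimate: the temperature at a layer boundary falls in proportion to the thermal resistance between that boundary and the indoor air. The temperatures and R-values are made-up example numbers.

```python
# Minimal sketch: steady-state temperature at the vapour barrier inside
# a two-layer construction. The temperature drop across each layer is
# proportional to its share of the total thermal resistance.

def barrier_temperature(t_in, t_out, r_inner, r_outer):
    """Temperature at the boundary between the inner and outer layers."""
    r_total = r_inner + r_outer
    return t_in - (t_in - t_out) * (r_inner / r_total)

t_in, t_out = 20.0, -5.0  # indoor / outdoor temperature, degC (example)
r_inner = 1.0             # R-value of the layer inside the barrier (example)

# With the WTCB/CSTC rule satisfied (R_outer = 1.5 x R_inner) the barrier
# stays at 10 degC; with the ratio reversed it drops to 5 degC, roughly
# the dew point of indoor air at 20 degC and 50% relative humidity.
print(barrier_temperature(t_in, t_out, r_inner, 1.5 * r_inner))  # 10.0
print(barrier_temperature(t_in, t_out, 1.5 * r_inner, r_inner))  # 5.0
```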
Referrals may be generated by a parent, a student, a classroom teacher, a special educator, or a physician who is familiar with the child. According to IDEA rules and regulations, students aged 5-21 must be evaluated by a licensed Speech/Language Pathologist in order to determine eligibility for special education based upon a communication disorder or autism spectrum disorder. The IEP team (inclusive of parents) meets to discuss concerns, review records, and decide upon the most appropriate option:
- No further action is needed at this time
- Implement additional classroom modifications and strategies for a designated time, then reconvene the team to evaluate progress/effects and again determine the course of action
- A communication evaluation conducted by a licensed SLP. The evaluation will include standardized tests, observations, communication samples, and interviews with staff, parents, and/or the child.

Listed below are some common terms related to communication:
- Reception involves accurately hearing sounds and words as well as understanding their meanings. (receptive)
- Expression generally considers one's ability to formulate and share coherent strings of information. (expressive)
- Fluency is the general ease with which we communicate. Variations may include sound and/or word repetitions, sound prolongations, unusual breathing/speaking patterns or related changes in movement of facial musculature. Depending on the degree and frequency of occurrence, these characteristics may be referred to as stuttering.
- Speech Sound Production, also referred to as Articulation or Phonology, addresses an individual's ability to produce the sounds of language. Overall intelligibility of a speaker is determined by how accurately they produce the language's sounds in various word positions (beginning, middle, end) and by how well they maintain the integrity of sound production during conversational speech.
- Comprehension involves listening to the language of others and constructing meaning from these communications. When we "comprehend", we listen and understand what someone else's words are intended to convey. Comprehension may refer to processing of either verbal input or printed/written input.
- Voice or vocal quality should be clear and age-appropriate, without distracting acoustic features such as hoarseness (stridency), breathiness, or obvious struggle.
- Pragmatics provides us with conversational rules. We change the quantity and quality of our talk based upon the perceived needs of our talking partners. We notice breakdowns and attempt repairs.
- Semantics refers to the meaning of words, referential definitions, as they occur in context.
- Morphology focuses on the smallest meaningful word units, such as 's' (plural) or 'un-' (not). Syntax addresses between-word grammar structures, such as rules of subject-verb agreement.

According to IDEA law, at least one standardized assessment tool must be administered to assess eligibility for a communication disorder. Scores will serve as one indicator of how your child's communication skills compare to those of same-aged peers. Scores that are significantly below those of same-age peers (one and a half or more standard deviations; a quick sketch of this cutoff appears below) may indicate a need to consider a communication disorder. Tests and subtests measure the child's ability to understand, relate to, and use language and speech clearly and appropriately.
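For readers unfamiliar with standard scores, here is a minimal Python sketch (my illustration, not part of the source) of what a "one and a half standard deviations below the mean" cutoff means. It assumes a test normed with mean 100 and standard deviation 15, a common convention for standardized language tests, but the specific test manual should be checked.

```python
# Minimal sketch: where a 1.5-standard-deviation cutoff falls on a test
# normed with mean 100 and standard deviation 15 (a common convention).

MEAN, SD = 100.0, 15.0
CUTOFF_SDS = 1.5

cutoff_score = MEAN - CUTOFF_SDS * SD
print(f"cutoff standard score: {cutoff_score}")  # 77.5

def may_indicate_disorder(score):
    """True if the score is 1.5 or more SDs below the normative mean."""
    return score <= cutoff_score

for score in (70, 77, 85, 100):
    z = (score - MEAN) / SD  # z-score: distance from the mean in SDs
    print(score, round(z, 2), may_indicate_disorder(score))
```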
The team has agreed that the following will be administered as appropriate:
- Peabody Picture Vocabulary Test-IV (receptive)
- Expressive Vocabulary Test-2 (expressive)
- Test of Problem Solving (synthesis/analysis)
- Clinical Evaluation of Language Fundamentals-IV (receptive and expressive subtests)
- Test of Language Development-4 Primary/Intermediate (receptive and expressive)
- WORD Test Elementary-Revised (receptive and expressive)
- WORD Test Adolescent-2 (receptive and expressive)
- Preschool Language Scale-4 (receptive and expressive)
- Bracken Test of Basic Concepts (receptive and expressive)
- Test of Auditory Comprehension of Language (receptive)
- Test of Semantic Skills, Primary and Intermediate (receptive/expressive)
- Functional Communication Profile-Revised (functional language skills)
- Goldman-Fristoe Test of Articulation-2 (sound production)
- Arizona Articulation Proficiency Scale-3rd revision (sound production)
- Assessment of Phonological Processes-R (sound production)
- Photo Articulation Test (sound production)
- Stuttering Severity Instrument-3 (fluency)

In addition to standardized assessment, the following procedures should be part of a communication evaluation:
- Hearing screening, which includes a pure-tone hearing screen to assess bilateral sound reception at a basic level and a tympanic screening to assess middle ear functioning
- Oral motor examination, which will reveal any structural or functional abnormalities that may interfere with speech and/or feeding/swallowing competencies
- Parental interview (regarding developmental and communication skills)

A child's functional or spontaneous competencies must also be considered. Observation and analysis of the child's communication competencies in his/her educational settings may include sampling and analysis of:
- Spontaneous, conversational language
- Spontaneous speech
- Literacy competencies (sound awareness, spelling, reading, reading comprehension, writing content and organization)
- Keyboarding competence
- Ability to attend and respond to directions as well as to "filter out" extraneous input
- Planning, organization, reasoning, and problem solving related to curriculum (referred to as "Executive Functioning")
- Memory and recall of general information, stories, poems, songs
- Pragmatic communication (ability to coordinate conversational relationships)
- Ability to convey information (verbal or written) in a well-organized, cohesive manner

Upon completion of the evaluation, the team will convene again to review results. Areas the team can consider for a communication disorder include: fluency, voice, phonology or articulation, and language, which includes syntax, morphology, pragmatics, and semantics. For a language disorder, the team has to determine whether the disability is the result of another disability (such as intellectual disability, autism, etc.) before eligibility can be determined. The team will further need to determine that the disability has an adverse impact on education. If the team determines that the student qualifies for special education, an IEP will be developed.
Rounding is the process and result of approximating a number (eliminating certain figures or differences to work with a whole unit). Thanks to rounding, calculations become simpler. Rounding to the nearest whole consists of dropping the decimals and keeping only the whole part. This means that if we want to round the number 2.3, we remove the 0.3 and keep the 2. On the other hand, if the objective is to round 4.9, the rounding mechanism leads to dropping the 0.9 and adding 0.1, so that we can work with the number 5. With these examples we can see that rounding can be done downwards, reaching a smaller number, or upwards, obtaining a bigger number. While in the first case rounding is carried out by eliminating decimals, in the second it is necessary to add an amount to reach the next whole number.

Rounding is not only used to obtain whole numbers: it can also be used to eliminate some decimals. The number 8.1463 can be rounded to 8.146 or, cutting another decimal, to 8.15. A concept related to rounding is truncation, which belongs to numerical analysis (a mathematical subfield) and refers to the technique used to reduce the number of decimal digits, that is, those to the right of the separator (a comma or a period, depending on the country). As shown in the previous paragraph, through truncation a number such as 8.1463 can become 8.146 if you want to truncate it to three decimal digits.

Rounding is common in the field of commerce, either to facilitate transactions or to make up for the lack of coins that would allow an exact payment. Suppose a person buys different products in a store and the bill to pay is 48.97 pesos. To facilitate payment, it can be rounded to 49 pesos. In this way, giving change (the rest, also known as the return) is made easier. It should be noted that in some countries there are laws requiring rounding to be in favor of the buyer. Returning to the last example, if the seller wants to round because he does not have coins to deliver the change, he will have to round down, to 48.95 or 48.90.

Although many people familiar with mathematics use their intuition when rounding a number, there are five well-defined rules that must be respected in order to proceed according to convention. Let's see an example for each of them, in which the objective is always to round a number to its hundredths, that is, to leave it with only two decimal digits:
- Rule 1: if the digit to the right of the last one you want to keep is less than 5, the last digit is not modified. For example: 8.453 becomes 8.45.
- Rule 2: in the opposite case, when the digit following the limit is greater than 5, the last digit is increased by one unit. For example: 8.459 becomes 8.46.
- Rule 3: if a 5 follows the last digit you want to keep, and after the 5 there is at least one digit other than 0, the last digit is increased by one. For example: 6.345070 becomes 6.35.
- Rule 4: if the last desired digit is an even number and to its right there is a 5 as the final digit, or a 5 followed only by zeros, then nothing more is done than mere truncation. For example, both 4.32500 and 4.325 become 4.32.
- Rule 5: opposite to the previous rule, if the last required digit is an odd number, then it is increased by one. For example, both 4.31500 and 4.315 become 4.32.
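Rules 4 and 5 together are the "round half to even" convention, often called banker's rounding. As a side note not in the original article, Python's decimal module implements exactly this convention, so the five rules can be checked mechanically; a minimal sketch:

```python
# Minimal sketch: the article's five rounding rules match ROUND_HALF_EVEN
# ("banker's rounding"). Decimal strings avoid binary floating-point
# surprises (e.g. the float 4.325 is not stored exactly).
from decimal import Decimal, ROUND_HALF_EVEN

def round_to_hundredths(number_text):
    return Decimal(number_text).quantize(Decimal("0.01"),
                                         rounding=ROUND_HALF_EVEN)

cases = ["8.453", "8.459", "6.345070", "4.32500", "4.325", "4.31500", "4.315"]
for text in cases:
    print(text, "->", round_to_hundredths(text))

# 8.453 -> 8.45 (rule 1)      8.459 -> 8.46 (rule 2)
# 6.345070 -> 6.35 (rule 3)   4.32500 and 4.325 -> 4.32 (rule 4)
# 4.31500 and 4.315 -> 4.32 (rule 5)
```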
What causes the strange stripes on the seafloor?

Magnetometers towed across the oceans revealed a striking pattern of magnetic stripes, symmetrical about a mid-ocean ridge axis. [The original page showed an image of this pattern: a central stripe with other colored stripes arranged symmetrically on either side of it.] What does this have to do with continental drift?

Warships also carried magnetometers. Like the echo sounders, the magnetometers were used to search for submarines, but they also revealed a lot about the magnetic properties of the seafloor. Looking at the magnetism of the seafloor, scientists discovered something astonishing: many times in Earth's history, the magnetic poles have switched positions. North becomes south, and south becomes north! When the north and south poles are aligned as they are now, geologists say the field has normal polarity. When they are in the opposite position, they say that it has reversed polarity.

Scientists were even more surprised to discover a pattern of magnetism on the seafloor. There are stripes with different magnetism: some stripes have normal polarity and some have reversed polarity. These stripes surround the mid-ocean ridges. There is one long stripe with normal magnetism at the top of the ridge. Next to that stripe are two long stripes with reversed magnetism, one on either side of the normal stripe. Next come two normal stripes, then two reversed stripes, and so on across the ocean floor. The magnetic stripes end abruptly at the edges of continents, sometimes at a deep sea trench (Figure below). Scientists found that magnetic polarity in the seafloor was normal at mid-ocean ridges but alternated in symmetrical patterns away from the ridge center. This normal and reversed pattern continues across the seafloor. Magnetometers are still towed behind research ships, continuing to map the magnetism of the seafloor.

Different magnetic stripes correspond to different ages. By using geologic dating techniques, scientists could figure out what these ages are. They found that the youngest rocks on the seafloor were at the mid-ocean ridges, and that the rocks get older with distance from the ridge crest. Scientists were surprised to find that the oldest seafloor is less than 180 million years old. This may seem old, but the oldest continental crust is around 4 billion years old.

Scientists discovered another way to tell the approximate age of seafloor rocks. The rocks at the mid-ocean ridge crest are nearly sediment-free, and the crust there is very thin. With distance from the ridge crest, the sediments and crust get thicker. This also supports the idea that the youngest rocks are on the ridge axis and that the rocks get older with distance from the ridge (Figure below). The crust is new at the ridge, so it is thin and has no sediment; away from the ridge crest it has cooled and accumulated more sediment. Seafloor is youngest near the mid-ocean ridges and gets progressively older with distance from the ridge. [In the figure, orange areas show the youngest seafloor; the oldest seafloor is near the edges of continents or deep sea trenches.]

This leads to an important idea: some process is creating seafloor at the ridge crest, and somehow the older seafloor is being destroyed. Finally, we get to the mechanism for continental drift.
- Data from magnetometers dragged behind ships looking for enemy submarines in WWII revealed striking magnetic patterns on the seafloor.
- The magnetic poles reverse from time to time: the north pole becomes the south pole, and the south pole becomes the north pole.
- Rocks of normal and reversed polarity are found in stripes arranged symmetrically about the mid-ocean ridge axis.
- The seafloor is youngest at the ridge crest and oldest far away from the ridge crest. The oldest seafloor rocks are about 180 million years old, much younger than the oldest continental rocks.

Review:
- Describe how the magnetic stripe at the top of the mid-ocean ridge forms.
- Describe the pattern the magnetic stripes make on the ocean floor.
- How does magnetic polarity reveal the age of a piece of seafloor?
- What is the pattern of seafloor age in the ocean basins?
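As a back-of-the-envelope illustration of these numbers (my addition, not from the lesson), a half-spreading rate can be estimated from the age of a patch of seafloor and its distance from the ridge. Here is a minimal Python sketch; the 2,800 km distance is a made-up example figure, roughly the scale of an ocean basin flank.

```python
# Minimal sketch: estimate a seafloor half-spreading rate from the age
# of the oldest seafloor and its distance from the ridge crest.

oldest_age_yr = 180e6  # ~180 million years (from the lesson)
distance_km = 2800.0   # ridge-to-oldest-seafloor distance (example value)

distance_cm = distance_km * 1e5          # 1 km = 100,000 cm
rate_cm_per_yr = distance_cm / oldest_age_yr

print(f"half-spreading rate ~ {rate_cm_per_yr:.1f} cm/yr")  # ~1.6 cm/yr
```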
"An Act to provide for research into the problems of flight within and outside the Earth's atmosphere, and for other purposes." With this simple preamble, the Congress and the President of the United States created the National Aeronautics and Space Administration (NASA) on October 1, 1958.

NASA's birth was directly related to the pressures of national defense. After World War II, the United States and the Soviet Union were engaged in the Cold War, a broad contest over the ideologies and allegiances of the nonaligned nations. During this period, space exploration emerged as a major area of contest and became known as the space race. During the late 1940s, the Department of Defense pursued research in rocketry and the upper atmospheric sciences as a means of assuring American leadership in technology. A major step forward came when President Dwight D. Eisenhower approved a plan to orbit a scientific satellite as part of the International Geophysical Year (IGY), a cooperative effort to gather scientific data about the Earth during the period July 1, 1957, to December 31, 1958. The Soviet Union quickly followed suit, announcing plans to orbit its own satellite.

The Naval Research Laboratory's Project Vanguard was chosen on September 9, 1955, to support the IGY effort, largely because it did not interfere with high-priority ballistic missile development programs. It used the non-military Viking rocket as its basis, while an Army proposal to use the Redstone ballistic missile as the launch vehicle waited in the wings. Project Vanguard enjoyed exceptional publicity throughout the second half of 1955 and all of 1956, but the technological demands upon the program were too great and the funding levels too small to ensure success. A full-scale crisis resulted on October 4, 1957, when the Soviets launched Sputnik 1, the world's first artificial satellite, as its IGY entry. This had a "Pearl Harbor" effect on American public opinion: it created an illusion of a technological gap and provided the impetus for increased spending on aerospace endeavors, technical and scientific educational programs, and the chartering of new federal agencies to manage air and space research and development.

More immediately, the United States launched its first Earth satellite on January 31, 1958, when Explorer 1 documented the existence of radiation zones encircling the Earth. Shaped by the Earth's magnetic field, what came to be called the Van Allen Radiation Belt partially dictates the electrical charges in the atmosphere and the solar radiation that reaches Earth. The U.S. also began a series of scientific missions to the Moon and planets in the late 1950s and early 1960s.

A direct result of the Sputnik crisis, NASA began operations on October 1, 1958, absorbing into itself the earlier National Advisory Committee for Aeronautics intact: its 8,000 employees, an annual budget of $100 million, three major research laboratories (Langley Aeronautical Laboratory, Ames Aeronautical Laboratory, and Lewis Flight Propulsion Laboratory) and two smaller test facilities. It quickly incorporated other organizations into the new agency, notably the space science group of the Naval Research Laboratory in Maryland, the Jet Propulsion Laboratory managed by the California Institute of Technology for the Army, and the Army Ballistic Missile Agency in Huntsville, Alabama, where Wernher von Braun's team of engineers was engaged in the development of large rockets.
Eventually NASA created other Centers, and today it has ten located around the country. NASA began to conduct space missions within months of its creation, and during its first twenty years it conducted several major programs:
- Human spaceflight initiatives: Mercury's single-astronaut program (flights during 1961-1963) to ascertain if a human could survive in space; Project Gemini (flights during 1965-1966) with two astronauts to practice space operations, especially rendezvous and docking of spacecraft and extravehicular activity (EVA); and Project Apollo (flights during 1968-1972) to explore the Moon.
- Robotic missions to the Moon (Ranger, Surveyor, and Lunar Orbiter), Venus (Pioneer Venus), Mars (Mariner 4, Viking 1 and 2), and the outer planets (Pioneer 10 and 11, Voyager 1 and 2).
- Aeronautics research to enhance air transport safety, reliability, efficiency, and speed (X-15 hypersonic flight, lifting body flight research, avionics and electronics studies, propulsion technologies, structures research, aerodynamics investigations).
- Remote-sensing Earth satellites for information gathering (Landsat satellites for environmental monitoring).
- Applications satellites for communications (Echo 1, Telstar) and weather monitoring (TIROS).
- An orbital workshop for astronauts, Skylab.
- A reusable spacecraft for traveling to and from Earth orbit, the Space Shuttle.

Early Spaceflights: Mercury and Gemini

NASA's first high-profile program involving human spaceflight was Project Mercury, an effort to learn if humans could survive the rigors of spaceflight. On May 5, 1961, Alan B. Shepard Jr. became the first American to fly into space, when he rode his Mercury capsule on a 15-minute suborbital mission. John H. Glenn Jr. became the first U.S. astronaut to orbit the Earth on February 20, 1962. With six flights, Project Mercury achieved its goal of putting piloted spacecraft into Earth orbit and retrieving the astronauts safely.

Project Gemini built on Mercury's achievements and extended NASA's human spaceflight program to spacecraft built for two astronauts. Gemini's 10 flights also provided NASA scientists and engineers with more data on weightlessness, perfected reentry and splashdown procedures, and demonstrated rendezvous and docking in space. One of the highlights of the program occurred during Gemini 4, on June 3, 1965, when Edward H. White Jr. became the first U.S. astronaut to conduct a spacewalk.

Going to the Moon – Project Apollo

The singular achievement of NASA during its early years was the human exploration of the Moon, Project Apollo. Apollo became a NASA priority on May 25, 1961, when President John F. Kennedy announced, "I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to Earth." A direct response to Soviet successes in space, Kennedy used Apollo as a high-profile effort for the U.S. to demonstrate to the world its scientific and technological superiority over its Cold War adversary. In response to the Kennedy decision, NASA was consumed with carrying out Project Apollo and spent the next 11 years doing so. The effort required significant expenditures, costing $25.4 billion over the life of the program, to make it a reality. Only the building of the Panama Canal rivaled the size of the Apollo program as the largest nonmilitary technological endeavor ever undertaken by the United States; only the Manhattan Project was comparable in a wartime setting.
Although there were major challenges and some failures – notably a January 27, 1967 fire in an Apollo capsule on the ground that took the lives of astronauts Roger B. Chaffee, Virgil "Gus" Grissom, and Edward H. White Jr. – the program moved forward inexorably. Less than two years later, in October 1968, NASA bounced back with the successful Apollo 7 mission, which orbited the Earth and tested the redesigned Apollo command module. The Apollo 8 mission, which orbited the Moon on December 24-25, 1968, when its crew read from the book of Genesis, was another crucial accomplishment on the way to the Moon.

"That's one small step for [a] man, one giant leap for mankind." Neil A. Armstrong uttered these famous words on July 20, 1969, when the Apollo 11 mission fulfilled Kennedy's challenge by successfully landing Armstrong and Edwin E. "Buzz" Aldrin Jr. on the Moon. Armstrong dramatically piloted the lunar module to the lunar surface with less than 30 seconds' worth of fuel remaining. After taking soil samples, photographs, and doing other tasks on the Moon, Armstrong and Aldrin rendezvoused with their colleague Michael Collins in lunar orbit for a safe voyage back to Earth.

Five more successful lunar landing missions followed. The Apollo 13 mission of April 1970 attracted the public's attention when astronauts and ground crews had to improvise to end the mission safely after an oxygen tank burst midway through the journey to the Moon. Although this mission never landed on the Moon, it reinforced the notion that NASA had a remarkable ability to adapt to the unforeseen technical difficulties inherent in human spaceflight. With the Apollo 17 mission of December 1972, NASA completed a successful engineering and scientific program. Fittingly, Harrison H. "Jack" Schmitt, a geologist who participated in this mission, was the first scientist to be selected as an astronaut. NASA learned a good deal about the origins of the Moon, as well as how to support humans in outer space. In total, 12 astronauts walked on the Moon during 6 Apollo lunar landing missions.

In 1975, NASA cooperated with the Soviet Union to achieve the first international human spaceflight, the Apollo-Soyuz Test Project (ASTP). This project successfully tested joint rendezvous and docking procedures for spacecraft from the U.S. and the U.S.S.R. After being launched separately from their respective countries, the Apollo and Soyuz crews met in space and conducted various experiments for two days.

After a gap of six years, NASA returned to human spaceflight in 1981, with the advent of the Space Shuttle. The Shuttle's first mission, STS-1, took off on April 12, 1981, demonstrating that it could take off vertically and glide to an unpowered airplane-like landing. On STS-6, during April 4-9, 1983, F. Story Musgrave and Donald H. Peterson conducted the first Shuttle EVA, to test new spacesuits and work in the Shuttle's cargo bay. Sally K. Ride became the first American woman to fly in space when STS-7 lifted off on June 18, 1983, another early milestone of the Shuttle program.

On January 28, 1986, a leak in the joints of one of two Solid Rocket Boosters attached to the Challenger orbiter caused the main liquid fuel tank to explode 73 seconds after launch, killing all 7 crew members. The Shuttle program was grounded for over two years, while NASA and its contractors worked to redesign the Solid Rocket Boosters and implement management reforms to increase safety.
On September 29, 1988, the Shuttle successfully returned to flight, and NASA then flew a total of 87 successful missions. Tragedy struck again on February 1, 2003, however. As the Columbia orbiter was returning to Earth on the STS-107 mission, it disintegrated about 15 minutes before it was to have landed. The Columbia Accident Investigation Board was quickly formed and determined that a small piece of foam had come off the External Tank during launch on January 16 and had struck the Reinforced Carbon-Carbon (RCC) panels on the leading edge of the left wing. When the orbiter was returning to Earth, the breach in the RCC panels allowed hot gas to penetrate the orbiter, leading to a catastrophic failure and the loss of the seven crewmembers. NASA is poised to return to flight again in summer 2005 with the STS-114 mission. There are three Shuttle orbiters in NASA's fleet: Atlantis, Discovery, and Endeavour.

Toward a Permanent Human Presence in Space

The core mission of any future space exploration will be humanity's departure from Earth orbit and journeying to the Moon or Mars, this time for extended and perhaps permanent stays. A dream for centuries, active efforts to develop both the technology and the scientific knowledge necessary to carry this off are now well underway. An initial effort in this area was NASA's Skylab program in 1973. After Apollo, NASA used its huge Saturn rockets to launch a relatively small orbital space workshop. There were three human Skylab missions, with the crews staying aboard the orbital workshop for 28, 59, and then 84 days. The first crew manually fixed a broken meteoroid shield, demonstrating that humans could successfully work in space. The Skylab program also served as a successful experiment in long-duration human spaceflight.

In 1984, Congress authorized NASA to build a major new space station as a base for further exploration of space. By 1986, the design depicted a complex, large, and multipurpose facility. In 1991, after much debate over the station's purpose and budget, NASA released plans for a restructured facility called Space Station Freedom. Another redesign took place after the Clinton administration took office in 1993, and the facility became known as Space Station Alpha. Then Russia, which had many years of experience in long-duration human spaceflight, such as with its Salyut and Mir space stations, joined with the U.S. and other international partners in 1993 to build a joint facility that became known formally as the International Space Station (ISS). To prepare for building the ISS starting in late 1998, NASA participated in a series of Shuttle missions to Mir, and seven American astronauts lived aboard Mir for extended stays. Permanent habitation of the ISS began with the launch of the Expedition One crew on October 31, 2000, and its docking with the station on November 2, 2000.

On January 14, 2004, President George W. Bush visited NASA Headquarters and announced a new Vision for Space Exploration. This Vision entails sending humans back to the Moon and on to Mars, eventually retiring the Shuttle and developing a new, multipurpose Crew Exploration Vehicle. Robotic scientific exploration and technology development are also folded into this encompassing Vision.

The Science of Space

In addition to major human spaceflight programs, there have been significant scientific probes that have explored the Moon, the planets, and other areas of our solar system. In particular, the 1970s heralded the advent of a new generation of scientific spacecraft.
Two similar spacecraft, Pioneer 10 and Pioneer 11, launched on March 2, 1972 and April 5, 1973, respectively, traveled to Jupiter and Saturn to study the composition of interplanetary space. Voyagers 1 and 2, launched on September 5, 1977 and August 20, 1977, respectively, conducted a "Grand Tour" of our solar system.

In 1990, the Hubble Space Telescope was launched into orbit around the Earth. Unfortunately, NASA scientists soon discovered that a microscopic spherical aberration in the polishing of the Hubble's mirror significantly limited the instrument's observing power. During a previously scheduled servicing mission in December 1993, a team of astronauts performed a dramatic series of spacewalks to install a corrective optics package and other hardware. The hardware functioned like a contact lens, and the elegant solution worked perfectly to restore Hubble's capabilities. The servicing mission again demonstrated the unique ability of humans to work in space, enabled Hubble to make a number of important astronomical discoveries, and greatly restored public confidence in NASA.

Several months before this first HST servicing mission, however, NASA suffered another major disappointment when the Mars Observer spacecraft disappeared on August 21, 1993, just three days before it was to go into orbit around the red planet. In response, NASA began developing a series of "faster, better, cheaper" spacecraft to go to Mars. Mars Global Surveyor was the first of these spacecraft; it was launched on November 7, 1996, and has been in Martian orbit mapping the planet since 1998. Using some innovative technologies, the Mars Pathfinder spacecraft landed on Mars on July 4, 1997 and explored the surface of the planet with its miniature rover, Sojourner. The Mars Pathfinder mission was a scientific and popular success, with the world following along via the Internet. This success was followed by the landing of the Spirit and Opportunity rovers in January 2004, to much scientific and popular acclaim.

Over the years, NASA has continued to look for life beyond our planet. In 1975, NASA launched the two Viking spacecraft to look for basic signs of life on Mars; the spacecraft arrived on Mars in 1976 but did not find any indications of past or present biological activity there. In 1996, data from the Galileo spacecraft, which was examining Jupiter and its moons, revealed that the moon Europa may contain ice or even liquid water, thought to be a key component in any life-sustaining environment. NASA also has used radio astronomy to scan the heavens for potential signals from extraterrestrial intelligent life. It continues to investigate whether any Martian meteorites contain microbiological organisms, and in the late 1990s it organized an "Origins" program to search for life using powerful new telescopes and biological techniques. More recently, scientists have found more and more evidence that water used to be present on Mars.

The First "A" in NASA: Aeronautics Research

Building on its roots in the National Advisory Committee for Aeronautics, NASA has continued to conduct many types of cutting-edge aeronautics research on aerodynamics, wind shear, and other important topics using wind tunnels, flight testing, and computer simulations. In the 1960s, NASA's highly successful X-15 program involved a rocket-powered airplane that flew above the atmosphere and then glided back to Earth unpowered.
The X-15 pilots helped researchers gain much useful information about supersonic aeronautics, and the program also provided data for development of the Space Shuttle. NASA also cooperated with the Air Force in the 1960s on the X-20 Dyna-Soar program, which was designed to fly into orbit. The Dyna-Soar was a precursor to later similar efforts such as the National Aerospace Plane, on which NASA and other government agencies and private companies did advanced hypersonics research in such areas as structures, materials, propulsion, and aerodynamics.

NASA has also done significant research on flight maneuverability of high-speed aircraft that is often applicable to lower-speed airplanes. NASA scientist Richard Whitcomb invented the "supercritical wing", specially shaped to delay and lessen the impact of shock waves on transonic military aircraft; it had a significant impact on civil aircraft design as well. Beginning in 1972, the watershed F-8 digital fly-by-wire (DFBW) program laid the groundwork for electronic DFBW flight in various later aircraft such as the F/A-18, the Boeing 777, and the Space Shuttle. More sophisticated DFBW systems were used on the X-29 and X-31 aircraft, which would have been uncontrollable otherwise. From 1963 to 1975, NASA conducted a research program on "lifting bodies", aircraft without wings. This valuable research paved the way for the Shuttle to glide to a safe unpowered landing, as well as for the later X-33 project and a prototype for a future crew return vehicle from the International Space Station. In 2004, the X-43A airplane used innovative scramjet technology to fly at ten times the speed of sound, setting a world record for air-breathing aircraft.

NASA did pioneering work in space applications such as communications satellites in the 1960s. The Echo, Telstar, Relay, and Syncom satellites were built by NASA or by the private sector based on significant NASA advances. In the 1970s, NASA's Landsat program literally changed the way we look at our planet Earth. The first three Landsat satellites, launched in 1972, 1975, and 1978, transmitted back to Earth complex data streams that could be converted into colored pictures. Landsat data has been used in a variety of practical commercial applications such as crop management and fault line detection, and to track many kinds of weather such as droughts, forest fires, and ice floes. NASA has been involved in a variety of other Earth science efforts, such as the Earth Observation System of spacecraft and data processing, that have yielded important scientific results in such areas as tropical deforestation, global warming, and climate change.

Since its inception in 1958, NASA has accomplished many great scientific and technological feats. NASA technology has been adapted for many non-aerospace uses by the private sector. NASA remains a leading force in scientific research and in stimulating public interest in aerospace exploration, as well as science and technology in general. Perhaps more importantly, our exploration of space has taught us to view the Earth, ourselves, and the universe in a new way. While the tremendous technical and scientific accomplishments of NASA demonstrate vividly that humans can achieve previously inconceivable feats, we also are humbled by the realization that Earth is just a tiny "blue marble" in the cosmos.

For further reading: Roger E. Bilstein, Testing Aircraft, Exploring Space: An Illustrated History of NACA and NASA (Baltimore: Johns Hopkins New Series in NASA History, 2003).
For a list of the titles in the NASA History Series, many of which are online, please see http://history.nasa.gov/series95.html on the Web.

Image captions:
1. Edwin E. "Buzz" Aldrin Jr. descends from the Apollo 11 Lunar Module to become the second human to walk on the Moon. Neil A. Armstrong, who took this photograph, was the commander of the mission and the first to walk on the lunar surface.
2. This rare view of two Space Shuttle orbiters simultaneously on launch pads at the Kennedy Space Center was taken on September 5, 1990. The orbiter Columbia is shown in the foreground on pad 39A, where it was being prepared for a launch (STS-35) the next morning; this launch ended up being delayed until December 1990. In the background, the orbiter Discovery sits on pad 39B in preparation for an October liftoff on STS-41.
3. The Sojourner rover and undeployed ramps aboard the Mars Pathfinder spacecraft are shown shortly after landing on the Martian surface on July 4, 1997. Partially deflated airbags are also clearly visible.
4. The rocket-powered X-15 aircraft set a number of altitude and speed records. Its flights during the 1960s also provided engineers and scientists with much useful data for the Space Shuttle program.
5. This dramatic view of Earth was taken by the crew of Apollo 17. The Apollo program put into perspective for many people just how small and fragile our planet is. Over its forty-year existence, NASA has been involved in many meteorological and Earth science missions that help us better understand our Earth.

Feature image credit: NASA/Bill Ingalls. All other images: NASA. History of NASA courtesy of nasa.gov.
Teacher's Guide for Balloons and Gases (Elementary Science Study series; Joe H. Griffith; Webster Division, McGraw-Hill, New York) is designed for use in connection with the Elementary Science Study unit Balloons and Gases. The guide was developed to give children an opportunity to prepare and collect several common gases and to discover and work with some of their properties. It is divided into five major sections: (1) introduction, (2) materials, (3) activities, (4) balloons aloft, and (5) an appendix. The introduction provides information concerning use of the guide, grade level, and scheduling.

Related balloon activities and resources:
- Sink or swim with water balloons. Fill water balloons with a variety of different liquids like oil, salt water, and corn syrup, then float them in a bucket of water to learn about density and buoyancy. Learn more: Homeschool 4 Me.
- Perform the two balloons experiment. You have two balloons, one filled with more air than the other.
- Try a fun STEM activity: Balloon Morphing: How Gases Contract and Expand.
- The Lost Balloon Poetry Pack supports critical thinking and vocabulary; its activities are easy to implement and can supplement any reading program.
- A states-of-matter lesson: by the completion of this lesson, your students will be able to differentiate between a chemical change and a physical change, as well as identify the properties of solids, liquids and gases. For this project, you will need a set of small funnels, baking soda, white vinegar, empty 1-liter soda bottles, balloons, and kitchen measuring scoops. (A rough gas-volume estimate for this project is sketched below.)
- Where Do Balloons Go? An Uplifting Mystery, by Jamie Lee Curtis, illustrated by Laura Cornell (grades PreK-K, fiction): what happens when a boy accidentally lets go of his balloon? Anything can happen to a balloon floating in the sky!
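For the baking soda and vinegar balloon project, it can help to estimate how much gas to expect. Here is a minimal Python sketch (my addition, not from the guide) using the reaction NaHCO3 + acetic acid -> CO2 and the ideal gas law; the one-tablespoon quantity is an assumed example amount, and vinegar is assumed to be in excess.

```python
# Minimal sketch: estimate the CO2 volume from baking soda + vinegar.
# NaHCO3 + CH3COOH -> CH3COONa + H2O + CO2  (1 mol gas per mol NaHCO3)

MOLAR_MASS_NAHCO3 = 84.01  # g/mol
R = 0.08206                # ideal gas constant, L*atm/(mol*K)

grams_baking_soda = 14.0   # about one tablespoon (assumed example amount)
T = 293.15                 # room temperature, K (20 degC)
P = 1.0                    # pressure, atm

moles_co2 = grams_baking_soda / MOLAR_MASS_NAHCO3  # assumes vinegar in excess
volume_l = moles_co2 * R * T / P                   # ideal gas law: V = nRT/P

print(f"~{moles_co2:.2f} mol CO2 -> ~{volume_l:.1f} L at room conditions")
# ~0.17 mol -> ~4.0 L: plenty to inflate a party balloon
```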
Using This Guide: Before you teach Balloons and Gases, read through this guide and try out some of the activities and experiments for yourself. Some teachers feel that their preparation to teach these science materials is inadequate and so use only topics with which they are familiar; firsthand experience with the activities is the best guide to rich learning experiences. One further note: some teachers have found it helpful for students to work through Balloons and Gases before they undertake the ESS unit Gases and Airs, which looks at similar questions in a more sophisticated way.

Step 1: Introduce the concept of states of matter by showing the StudyJams video on Matter from Solids, Liquids, and Gases: A StudyJams! Activity.
Step 2: Have students stand up by their desks. Tell them they represent water molecules transitioning through different states of matter. Explain that when you call out a state of matter, you want them to move like the molecules in that state.

Atmosphere is the envelope of gases surrounding the Earth or another planet. The atmosphere does a great deal for the environment; for example, it regulates how much heat the Earth gains and loses.

Gas laws for kids: Once upon a time in the land of Atmosphere, there lived a man named Matter. Matter was very important for the smooth running of...

Hot Air Balloons, by Claudia Vanderborght – "Gas and Go": Everyone gets busy unpacking the large gasoline-powered fan, lifting the wicker basket from the pickup bed, and unrolling the hundreds of meters of nylon. The pilot releases a small helium balloon and studies the air currents that whisk it away. With a noisy growl, the fan starts up. The yellow...

In this science worksheet, students determine how the forces acting on one or more helium balloons can be balanced and identified. Students describe the experiment and draw a free-body diagram.

This balloons lesson plan is suitable for Kindergarten through 12th grade. Students explore the different types of balloons; in this materials lesson, students can complete several experiments, including building their own hot air balloons, making balloon animals, and experimenting with static electricity.

From a Twenty-One Balloons chapter 8 study guide: the ride would be shortened because the gases from the mountain volcano would cause the balloon to come down.

A reader's review of a balloon picture book: "This is a great book. My 16-month-old daughter really enjoys the beautiful pictures and the rhyming word patterns. She is able to clap with the pages that say 'Snap, Snap, Clap, Clap, Balloons, Balloons.'"
Balloon Boy wrote: My son has run trials comparing exhaled air, He, and CO2 diffusion through latex balloons, inflating each to the same size and measuring the changes at 3-hour intervals for a few days. He expected that the He would be the fastest to escape because it has the lowest atomic number/mass, but the CO2 was fastest by far. One day a ghost was floating in the air above Cincinnati. Then he stopped and thought: how are the gases in the air helping me float? A lion from heav… My girls love reading, and creating an activity to match our book encourages them to read more. Where Do Balloons Go? book activity: welcome to our first Poppins Book Nook Book Club adventure. We are so excited to be a part of such an amazing virtual book club, with an amazing group of co-hosts. Solids, Liquids, and Gases: all things on Earth consist of matter, and matter exists in many forms. The most common states of matter are solids, liquids, and gases. This unit addresses how matter can change from one state to another. Matter in each state has identifiable properties. The unit also explains that when matter combines, a mixture results. Gases used in balloons are helium, hot air (normal air for children's balloons) or hydrogen. The safest is helium, but it is also the most expensive. Most hot air balloons are made from nylon material that is lightweight but allows some heated air to escape. The ascent in a hot air balloon needs to be carefully calculated as it goes into the air; the pilot is the person who ensures that the balloon doesn't go too high. Students will love creating this Hot Air Balloon Flip Up Craft after reading the book "Oh, The Places You'll Go" by Dr. Seuss. Simply print the pages on different colored cardstock and let the students mix and match to create a one-of-a-kind hot air balloon. What's included: templates to create a 4-pag… A Balloon Science Experiment to Understand the Effect of Gases, by Rashmie Jaaju, on July 6: recently we also bought a book that explains scientific facts with the help of practical experiments and projects. Yesterday afternoon, we read some basic facts about gases, studied volcanoes, and the effect of the gases given out by …
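The expectation in the forum post above matches Graham's law of effusion, under which escape rate scales as 1/√(molar mass); the observed result (CO2 escaping faster) is usually attributed to CO2 dissolving readily in latex, which Graham's law does not model. A quick check of the predicted ratio, using standard molar masses:

```cpp
// Graham's law: rate is proportional to 1/sqrt(molar mass), so helium
// (4 g/mol) is predicted to escape about 3.3x faster than CO2 (44 g/mol).
// Molar masses are standard reference values, not taken from the post.
#include <cmath>
#include <iostream>

int main() {
    double mHe = 4.0, mCO2 = 44.0;            // g/mol
    double ratio = std::sqrt(mCO2 / mHe);     // rate(He) / rate(CO2)
    std::cout << "Predicted He/CO2 escape-rate ratio: " << ratio << "\n"; // ~3.32
}
```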
The monomer units of macromolecules are polar in nature, with distinct heads and tails. A monomer is a relatively small molecule that can bond to other monomer molecules; it is the basic unit of a polymer and can be thought of as the fundamental building block of a polymer compound. A substance that is composed of monomers is called a polymer, which makes it rather simple to define one: a polymer is a large molecule composed of small subunits called monomers. A macromolecule is a very large molecule, such as a protein, commonly composed of the polymerization of smaller subunits called monomers; macromolecules are typically composed of thousands of atoms or more, and the term refers to the structures involved with cells and organisms. Macromolecules are comprised of single units, which scientists call monomers, joined by covalent bonds to form larger polymers; these repeating units represent the monomers from which the polymer is made. In diagrams of macromolecules, often only the functional groups are shown in detail. The four major classes of biological macromolecules are carbohydrates, lipids, proteins, and nucleic acids, and a macromolecule usually has more than one function. Fortunately, they are all built using the same construction principle. The 'bond' that forms in a condensation reaction is not a single bond but a linkage involving several bonds: the monomers combine with the elimination of water, forming a chemical bond between the two molecules, and a molecule of water is produced for each bond. What is the monomer of carbohydrates? The monosaccharide; glucose is an example of a monosaccharide, and it can combine with others to form larger forms like polysaccharides, cellulose, starch, etc. (Glyceraldehyde is the simplest three-carbon monosaccharide.) Macromolecules are giant organic molecules that fall into four categories: carbohydrates, lipids, proteins, and nucleic acids. Three of the four classes of macromolecules, carbohydrates, proteins, and nucleic acids, form chainlike molecules called polymers. Task: examine each macromolecule and match the monomer on the left to the macromolecule on the right. Introduction: the term macromolecule by definition implies "large molecule"; in the context of biochemistry, the term may be applied to the four large molecules that make up organisms: nucleic acids, proteins, carbohydrates, and lipids. By varying the sequence of monomers, an incredibly large variety of macromolecules can be produced. In this article you will learn how the four classes of macromolecules are synthesized in the cell and review the types of reactions that bring monomers together. Monomers are very small molecules, mostly organic, that have the ability to join together to make polymers, which are large molecules. Yes, macromolecules can be used to make larger assemblies like microtubules, filaments, etc., much the same way that words can form sentences.
Three-carbon monosaccharides form the simplest group of sugars; glyceraldehyde is its classic member. Learning objective SYI-1.B: describe the properties of the monomers and the type of bonds that connect the monomers in biological macromolecules. Amino acids make up proteins, so even though there are only 22 natural amino acids, there are countless types of protein that can be formed from them. The macromolecules of life are lipids, carbohydrates, proteins, and nucleic acids. Most of them are polymers built from monomers bonded to each other with covalent bonds; a defining feature of a monomer is polyfunctionality, the capacity to form covalent bonds to at least two other monomer molecules, which is what lets chains grow. A polymer is a long molecule consisting of many identical or similar building blocks, and the repeating unit of the polymer is the monomer from which it is built. Glucose is a basic carbohydrate molecule, and sugars like it also play a role in maintaining cellular osmotic conditions. Macromolecules are built largely from a few elements, such as carbon, hydrogen, and oxygen, and they are assembled and taken apart by condensation (dehydration) and hydrolysis reactions; proteins, for example, are polypeptides formed by condensation of amino acids. Note, though, that not every macromolecule has a monomer: lipids, for instance, are macromolecules but are not built as true polymers. Macromolecules are naturally occurring compounds of large molecular weight involved in processes such as cell structure and metabolism. At first they can look like complex, huge associations of molecular subunits that appear impossibly difficult to understand, but dividing them into their small units, and showing only the functional groups in detail, reveals that biological polymers and their monomer subunits are all chained together by the same construction principle.
Combinatory logic language. This language, described in John Tromp's paper "Kolmogorov Complexity in Combinatory Logic", is based on the K and S combinators encoded into bits. There are 2 representations of data: stack and tree. In the tree representation the data is a tree with nodes A and leaves K and S, where A is the application of the left branch to the right branch. For example, 'AAKSAKS' means A[ A[K,S], A[K,S] ]. In the stack representation the data is a sequence of K, S, or R, where R is a reference to another sequence. For example, "SKS(SKK)" means 2 stacks: "SKSR" and "SKK", where R refers to the second stack. Evaluation is done by the following 2 rules: 'AAKXY' -> 'X' (tree), "KXY" -> "X" (stack); 'AAASXYZ' -> 'AAXZAYZ' (tree), "SXYZ" -> "XZ(YZ)" (stack). The evaluation can be done for any subtree in the tree representation, or for any stack from the left side in the stack representation. The characters of the language are bits. Every program consists of 2 parts: code and input. The code is self-delimited and presented as a "tree" string where A is 1, S is 00, and K is 01. Examples: 'ASK' is 10001, 'AASKK' is 11000101. The input is a string of any bits which is converted to a form (data representation) in the following way: "b1 b2 b3 ... bn" -> "(PB1(PB2(PB3...(PBn(KK))...)))" or 'AAPB1AAPB2AAPB3...AAPBnAKK'. Here P="S(S(KS)(S(KK)(S(KS)(S(K(S(SKK)))K))))(KK)" [stack], and Bi="K" if bi=0 and Bi="SK" if bi=1. If we denote the code as C and the input as S, then the whole form ready to evaluate is 'ACS'. After evaluation, if the output is in the form of the list "(PB1(PB2(PB3...(PBn(KK))...)))", then we treat "b1b2b3...bn" as the bit output of the program; otherwise the output is broken, where again if Bi="K" then bi=0 and if Bi="SK" then bi=1. For example, the program 11000101101 is: C=11000101, S=101, C='AASKK'="SKK", S='AAPASKAAPKAAPASKAKK'="P(SK)(PK(P(SK)(KK)))". C++ interpreters to the language:
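As an illustration of the two evaluation rules on the tree representation, here is a minimal C++ sketch. It parses the prefix syntax described above (for example "AAASKKS" for S K K S) and reduces until no redex remains. It is only a sketch of the rules under the stated encoding, not one of the interpreters referred to above, and it does not handle the bit-level code/input framing.

```cpp
#include <iostream>
#include <memory>
#include <string>

struct Node;
using P = std::shared_ptr<Node>;
struct Node { char tag; P l, r; };              // tag is 'A', 'K' or 'S'

P leaf(char c)  { return std::make_shared<Node>(Node{c, nullptr, nullptr}); }
P app(P l, P r) { return std::make_shared<Node>(Node{'A', l, r}); }

// Parse the prefix form: 'A' <tree> <tree> | 'K' | 'S'
P parse(const std::string& s, size_t& i) {
    char c = s[i++];
    if (c != 'A') return leaf(c);
    P l = parse(s, i);
    P r = parse(s, i);
    return app(l, r);
}

// Try one reduction anywhere in the tree; returns nullptr if none applies.
P step(const P& t) {
    if (t->tag != 'A') return nullptr;
    // 'AAKXY' -> 'X'            (K X Y -> X)
    if (t->l->tag == 'A' && t->l->l->tag == 'K')
        return t->l->r;
    // 'AAASXYZ' -> 'AAXZAYZ'    (S X Y Z -> X Z (Y Z))
    if (t->l->tag == 'A' && t->l->l->tag == 'A' && t->l->l->l->tag == 'S') {
        P x = t->l->l->r, y = t->l->r, z = t->r;
        return app(app(x, z), app(y, z));
    }
    if (P l = step(t->l)) return app(l, t->r);  // otherwise recurse
    if (P r = step(t->r)) return app(t->l, r);
    return nullptr;
}

std::string show(const P& t) {
    return t->tag != 'A' ? std::string(1, t->tag) : "A" + show(t->l) + show(t->r);
}

int main() {
    size_t i = 0;
    P t = parse("AAASKKS", i);      // S K K S
    while (P n = step(t)) t = n;    // -> K S (K S) -> S
    std::cout << show(t) << "\n";   // prints "S"
}
```

Note that reduction of an arbitrary term need not terminate; the loop above is fine for the small examples on this page.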
The kidneys are vital organs in the human body and are responsible for “cleaning” blood of waste and toxins. If you have diabetes and your blood sugar levels are too high, this can damage your kidneys over time. Diabetes is the most common cause of kidney failure in the U.S. Kidney damage from diabetes is called diabetic nephropathy. It begins long before you have symptoms and therefore people with diabetes should get regular screenings for kidney disease. Once kidney function diminishes to less than 10 to 15% and end-stage renal disease (ESRD) occurs, dialysis or transplant are the only treatment options. In this article, we will explore and better understand treatment options for kidney failure. There are two types of dialysis: hemodialysis and peritoneal dialysis. Hemodialysis is the more prevalent treatment in the U.S. In hemodialysis, a device called a dialyzer (or artificial kidney) is used to remove excess fluid and waste products from the bloodstream. During the treatment, blood travels out of the patient, through blood tubing, a dialysis machine, and a dialyzer (known collectively as the dialysis circuit), and back into the patient again. To provide easy access to a patient’s vascular system, a surgically implanted access site called a fistula or graft is created, usually on the arm. This is where blood exits and re-enters the dialysis patient’s body through attached catheters. Sometimes a catheter is inserted into a vein for temporary vascular access while a patient awaits a permanent access site. When the patient’s blood passes through the dialyzer, an artificial membrane within it acts as a filter. Dialysate, a liquid solution, is cycled through one side of the membrane while blood is pumped through the other. Toxins and excess fluids are filtered from the bloodstream and into the dialysate fluid, and electrolytes pass from the dialysate across the dialyzer membrane and into the bloodstream. The clean and chemically balanced blood is shuttled back into the body via the vascular access. Hemodialysis treatment is typically administered three times a week and takes three to four hours (depending on the patient’s clinical needs and the type of dialyzer used). It is usually performed in a dialysis unit—a dedicated dialysis clinic that is located in a freestanding outpatient center or within a hospital setting. Home hemodialysis is also a treatment option, but because it requires a significant investment in both training and equipment, the majority of U.S. ESRD patients undergo dialysis in a center. Peritoneal dialysis (PD) uses the lining of the abdomen (peritoneum) to filter toxins from the bloodstream. A surgically implanted catheter is used to fill the abdominal cavity with dialysate. The peritoneum is similar to a hemodialysis membrane, drawing toxins out of the blood and into the dialysate-filled abdominal cavity. The waste-filled dialysate is then removed from the abdominal cavity after a prescribed period of dwell time. This is known as an exchange. There are several different types of PD: - Continuous Ambulatory Peritoneal Dialysis (CAPD). The patient inserts a catheter into the abdomen and infuses a fresh supply of dialysate. The process is performed several times during the day (usually a four to six-hour exchange each time) and once while sleeping through the night. - Continuous Cyclic Peritoneal Dialysis (CCPD). A machine, or cycler, performs the dialysate exchange in CCPD. 
It will perform several cycles during the night, and then one or two daytime exchanges that last the entire day. - Intermittent Peritoneal Dialysis (IPD). IPD, sometimes called NIPD (nocturnal intermittent peritoneal dialysis) also uses a cycler to perform six or more exchanges at night. However, unlike CCPD, there is no daytime exchange. Grafting a healthy kidney from a cadaver or living donor is a complex process. The donor kidney and the transplant candidate must be tissue matched for antigen compatibility. The kidney has six antigens (which stimulate the production of antibodies), and compatibility is based on how many of the antigens match up from donor to candidate—the more the better. After a transplant, a life-long regimen of immunosuppressive medication is required, leaving the patient at increased risk for infections due to the weakened immune system. Although the majority of transplants come from cadaver organ donors, living kidney donations are becoming much more commonplace. Living donor kidneys have a significantly higher survival rate than cadaver kidneys (at five years, living donor kidney transplants have a 78.4% graft survival rate compared to 64.7% for cadaver kidney transplants). Transplanted kidneys do not last forever; the average life of a cadaver graft is less than 15 years and rejection of the kidney by the patient can occur at any time. However, the lifespan of transplanted kidneys has increased dramatically over the past decade, and there are transplant recipients who have had functioning grafts for over 35 years. Reviewed by Francine Kaufman, MD. 4/08.
Survival rate is a part of survival analysis, indicating the percentage of people in a study or treatment group who are alive for a given period of time after diagnosis. Survival rates are important for prognosis, but because this rate is based on the population as a whole, an individual prognosis may be different depending on newer treatments since the last statistical analysis as well as the overall general health of the patient. There are various types of survival rates (discussed below). They often serve as endpoints of clinical trials, and should not be confused with mortality rates, a population metric. Patients with a certain disease (for example, colorectal cancer) can die directly from that disease or from an unrelated cause (for example, a car accident). When the precise cause of death is not specified, this is called the overall survival rate or observed survival rate. Doctors often use mean overall survival rates to estimate the patient's prognosis. This is often expressed over standard time periods, like one, five, and ten years. For example, prostate cancer has a much higher one-year overall survival rate than pancreatic cancer, and thus has a better prognosis.
Net survival rate
When someone is interested in how survival is affected by the disease, there is also the net survival rate, which filters out the effect of mortality from causes other than the disease. The two main ways to calculate net survival are relative survival and cause-specific survival or disease-specific survival. Relative survival has the advantage that it does not depend on the accuracy of the reported cause of death; cause-specific survival has the advantage that it does not depend on the ability to find a similar population of people without the disease. Relative survival is calculated by dividing the overall survival after diagnosis of a disease by the survival as observed in a similar population that was not diagnosed with that disease. A similar population is composed of individuals with at least age and gender similar to those diagnosed with the disease.
Cause-specific survival and disease-specific survival
Disease-specific survival rate refers to "the percentage of people in a study or treatment group who have not died from a specific disease in a defined period of time. The time period usually begins at the time of diagnosis or at the start of treatment and ends at the time of death. Patients who died from causes other than the disease being studied are not counted in this measurement." Median survival, or "median overall survival", is also commonly used to express survival rates. This is the amount of time after which 50% of the patients have died and 50% have survived. In ongoing settings such as clinical trials, the median has the advantage that it can be calculated once 50% of subjects have reached the clinical endpoint of the trial, whereas calculation of an arithmetical mean can only be done after all subjects have reached the endpoint. Five-year survival rate measures survival at 5 years after diagnosis.
Disease-free survival, progression-free survival, and metastasis-free survival
In cancer research, various types of survival rate can be relevant, depending on the cancer type and stage.
These include the disease-free survival (DFS) (the period after curative treatment [disease eliminated] when no disease can be detected), the progression-free survival (PFS) (the period after treatment when disease [which could not be eliminated] remains stable, that is, does not progress), and the metastasis-free survival (MFS) or distant metastasis–free survival (DMFS) (the period until metastasis is detected). Progression can be categorized as local progression, regional progression, locoregional progression, and metastatic progression. - Response Evaluation Criteria in Solid Tumors (RECIST) - Surveillance, Epidemiology, and End Results database (SEER)
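As a toy illustration of the median-survival point above, here is a minimal sketch; the month values are invented, and a real analysis would handle censored patients (for example with a Kaplan-Meier estimator) rather than a plain sorted list:

```cpp
// The median survival time is known as soon as half the subjects have
// reached the endpoint, whereas a mean needs every subject's time.
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<double> monthsToEvent = {6, 11, 14, 20, 25, 31, 40}; // invented
    std::sort(monthsToEvent.begin(), monthsToEvent.end());
    double median = monthsToEvent[monthsToEvent.size() / 2];
    std::cout << "Median overall survival: " << median << " months\n"; // 20
}
```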
Martin Luther was a German monk who forever changed Christianity when he nailed his '95 Theses' to a church door in 1517, sparking the Protestant Reformation. What was Martin Luther's Reformation? His writings were responsible for fracturing the Catholic Church and sparking the Protestant Reformation. His central teachings, that the Bible is the central source of religious authority and that salvation is reached through faith and not deeds, shaped the core of Protestantism. Why did Martin Luther start the Reformation? In 1517, the German monk Martin Luther began the largest insurrection in the history of Christianity. Leading up to the breaking point was the Catholic Church's practice of indulgences, temporal pardons for sin that could be obtained by those who felt that they had committed wrongdoing. What was the Reformation and why did it happen? Causes of the Reformation: at the start of the 16th century, many events led to the Protestant Reformation. Clergy abuse caused people to begin criticizing the Catholic Church. Furthermore, the clergy did not respond to the population's needs, often because they did not speak the local language, or did not live in their own diocese. What exactly was the Reformation? Attempts to reform (change and improve) the Catholic Church and the development of Protestant Churches in Western Europe are known as the Reformation. The Reformation began in 1517 when a German monk called Martin Luther protested about the Catholic Church. His followers became known as Protestants. What did the 95 Theses say? In his theses, Luther condemned the excesses and corruption of the Roman Catholic Church, especially the papal practice of asking payment, called "indulgences", for the forgiveness of sins. What were Martin Luther's 3 main beliefs? His teachings rested on three main ideas: people could win salvation only by faith in God's gift of forgiveness (the Church taught that faith and "good works" were needed for salvation); the Church's teachings should be based on the Bible alone; and all people with faith were equal, able to read and interpret the Bible for themselves. Luther was astonished at how rapidly his ideas spread and attracted followers. What were the four causes of the Reformation? The major causes of the Protestant Reformation include political, economic, social, and religious factors. Who started the Reformation, and where and when did it begin? The Reformation is said to have begun when Martin Luther posted his Ninety-five Theses on the door of the Castle Church in Wittenberg, Germany, on October 31, 1517. What was the first Protestant faith? Lutheranism was the first Protestant faith; it taught salvation through faith alone, not good works. How did the Reformation end? Historians usually date the start of the Protestant Reformation to the 1517 publication of Martin Luther's "95 Theses." Its ending can be placed anywhere from the 1555 Peace of Augsburg, which allowed for the coexistence of Catholicism and Lutheranism in Germany, to the 1648 Treaty of Westphalia, which ended the Thirty Years' War. Why did Protestantism spread so quickly? Martin Luther was dissatisfied with the authority that clergy held over laypeople in the Catholic Church. Luther's Protestant idea that clergy shouldn't hold more religious authority than laypeople became very popular in Germany and spread quickly throughout Europe. What were the 3 key elements of the Catholic Reformation? What were the three key elements of the Catholic Reformation, and why were they so important to the Catholic Church in the 17th century? The founding of the Jesuits, reform of the papacy, and the Council of Trent.
They were important because they unified the church, helped spread the gospel, and validated the church. What were the consequences of the Reformation? The literature on the consequences of the Reformation shows a variety of short- and long-run effects, including Protestant-Catholic differences in human capital, economic development, competition in media markets, political economy, and anti-Semitism, among others. What were the main purposes of the Counter-Reformation? The goals were for the Catholic Church to make reforms, which included clarifying its teachings, correcting abuses and trying to win people back to Catholicism. What were the two goals of the Counter-Reformation? The main goals of the Counter-Reformation were to get church members to remain loyal by increasing their faith, to eliminate some of the abuses the Protestants criticised, and to reaffirm principles that the Protestants were against, such as the pope's authority and the veneration of saints.
African elephants (which belong to 2 species: Loxodonta africana and Loxodonta cyclotis) travel for miles to find the 100 to 200 kg of vegetation they need to feed themselves on a daily basis. They follow the dominant female in single file and communicate by moving their trunks and ears, through smell and touch, or through a range of low-frequency sounds that are inaudible to humans. These animals are hunted for their ivory and are under threat of extinction. Their numbers went from 2.5 million in 1945 to 600,000 in 1989, when the ivory trade was banned. Today, there are about 300,000 elephants left, and they are concentrated in reserves that are often too small to ensure their subsistence without damaging ecosystems or harvests in countries already affected by malnutrition. In spite of this damage, eleven Central and Western African countries have asked - in vain - for the complete ban on ivory trading to be maintained, in order to stop poaching and address elephant conservation problems. The means allocated to ivory control are insufficient, and the partial resumption of international trade has left poachers free to operate. In 2011, according to the NGO Traffic, over 2,500 elephants were poached, a level comparable only to 1989.
Spelling is taught primarily through the use and application of phonics. Children are given a weekly list of words to take home and learn. These words are then tested in class the following week to assess how well children have learned them. In Year 1 and Year 2 they are taught in ability sets across their year group. The number of words a child is given varies according to the set they are placed in. Children will have between 6 and 12 words each week to learn. Spelling in Key Stage 1 is taught through a variety of strategies, including the Look, Say, Cover, Write and Check method, mnemonics and rhymes. Children are encouraged to look closely at the word in order to spot any recurring phonic patterns and try to remember these patterns. They then say the word, cover it and write it in order to aid their ability to remember the word. They are then encouraged to apply the word by writing it in a sentence in order to further consolidate their learning.
Who Won the Debate over the Equal Rights Amendment in the 1920s? The debate over the Equal Rights Amendment in the years immediately after its origin in 1921 highlighted divisions among politically active women. This project presents the major documents related to that debate. They show that two forms of feminism emerged in the United States in the 1920s, one hostile to the blending of feminism with social justice goals, one captured by those goals. The bitter legacy of this split, only partly overcome by the second wave of feminism in the 1960s, continues to inform divisions among feminists today.
Although thunderstorms affect relatively small geographical areas, they are all dangerous and capable of producing tornadoes, strong winds, hail, wildfires and flash flooding. A typical thunderstorm lasts an average of 30 minutes and is 15 miles in diameter. About 10 percent of the estimated 100,000 thunderstorms that occur in the U.S. annually are classified as severe. The National Weather Service considers a thunderstorm severe if it produces hail at least ¾-inch in diameter, winds of 58 mph or stronger, or a tornado. A serious threat with any thunderstorm is the risk of lightning. Lightning's risk to individuals and property is increased because of its unpredictability, which emphasizes the importance of preparedness. It often strikes outside of heavy rain and may occur as far as 10 miles away from any rainfall. In the United States, lightning kills an average of 80 people and injures about 300 each year. Most lightning deaths and injuries occur when people are caught outdoors in the summer months during the afternoon and evening. - Remember the 30/30 Lightning Safety Rule: go indoors if, after seeing lightning, you cannot count to 30 before hearing thunder. Stay indoors for 30 minutes after hearing the last clap of thunder. (See the arithmetic sketch after this list.) - Familiarize yourself with the terms that are used to identify a thunderstorm hazard: - A thunderstorm watch means there is a possibility of a thunderstorm in your area. - A thunderstorm warning means a thunderstorm is occurring or will likely occur soon. If you are advised to take shelter, do so immediately. - Prepare your home in order to minimize or prevent damage or personal injury. This includes: - Removing dead or rotting trees and branches that could fall and cause injury or damage during a severe thunderstorm. - Securing outdoor objects that could blow away or cause damage. - Shuttering windows and securing outside doors. If shutters are not available, closing window blinds, shades or curtains.
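Why 30 seconds? The rule is a thunder-based range estimate. A minimal sketch of the arithmetic, assuming the usual speed of sound in air (roughly 343 m/s):

```cpp
// Flash-to-bang: every ~3 seconds between the lightning flash and the
// thunder is about 1 km of distance (about 5 seconds per mile). A
// 30-second count puts the strike ~10 km away, close enough that the
// next strike could reach you.
#include <iostream>

int main() {
    double secondsToThunder = 30.0;                       // counted after the flash
    double distanceKm = secondsToThunder * 343.0 / 1000.0;
    std::cout << "Strike distance: ~" << distanceKm << " km\n"; // ~10.3 km
}
```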
The Atlantic hurricane season is officially from June 1 to November 30. Over 97% of tropical activity occurs in these six months, but hurricanes have occurred in every month of the year. According to the National Oceanic and Atmospheric Administration (NOAA), the most common month for hurricanes is September. Bottom line: We should be prepared year-round. Tropical Climate – Know Your Weather Tropical Depression: An organized system of clouds and thunderstorms with a defined surface circulation and maximum sustained winds of 38 mph (33 kt) or less. Tropical Storm: An organized system of strong thunderstorms with a defined surface circulation and maximum sustained winds of 39 - 73 mph (34 - 63 kt). Hurricane: An intense tropical weather system of strong thunderstorms with a well-defined surface circulation and maximum sustained winds of 74 mph (64 kt) or higher. In other parts of the world, the word hurricane is synonymous with typhoon and cyclone. Monitor weather reports frequently and heed the advice of local officials during hurricane season. Tropical systems can speed up, change direction and intensify without warning. You can get information via email or text message by subscribing to Miami-Dade Alerts or via official social media outlets instead of traditional broadcast methods. The following terms are used by weather forecasters to describe the strength and probability/proximity of a storm hitting a specific destination: Hurricane Warning: A hurricane is expected to strike your area within 36 hours. Hurricane Watch: A hurricane may strike your area within 48 hours.
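Since the three categories above differ only in maximum sustained wind speed, classifying a reading is a simple threshold check. A minimal sketch (the sample wind speed is invented):

```cpp
// Thresholds match the definitions above (sustained winds in mph):
// <= 38 tropical depression, 39-73 tropical storm, >= 74 hurricane.
#include <iostream>
#include <string>

std::string classify(double sustainedMph) {
    if (sustainedMph >= 74.0) return "hurricane";
    if (sustainedMph >= 39.0) return "tropical storm";
    return "tropical depression";
}

int main() {
    std::cout << classify(65.0) << "\n";   // "tropical storm"
}
```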
A neutron is a subatomic particle that is part of the atom (along with the proton and the electron). Neutrons and protons form the atomic nucleus. Neutrons have no net electric charge, unlike the proton, which has a positive electric charge. In nuclear energy, the concept of "uranium enrichment" refers to increasing the proportion of the isotope uranium-235, which differs from the far more common uranium-238 in the number of neutrons in its nucleus. Since protons and neutrons behave similarly within the nucleus, and each has a mass of about one unit of atomic mass, both are called nucleons. Their properties and interactions are described by nuclear physics.
Neutrons and nuclear fission
Nuclear fission reactors are reactors that are powered by the energy generated through nuclear fission reactions. To generate a nuclear fission reaction, the nucleus of a nuclear fuel atom (normally uranium or plutonium; specifically the isotopes uranium-235 and plutonium-239) is bombarded with a neutron. The impact of the neutron on the atomic nucleus is sufficient for it to break apart into two smaller nuclei and two or three free neutrons. These free neutrons, in turn, may collide with other atomic nuclei, thus generating a succession of nuclear chain reactions. The speed with which the neutrons move and the number of free neutrons in the core of the nuclear reactor determine the reactor power of the nuclear power plant. In order to control the number of fission reactions per unit of time, nuclear power plants have mechanisms to control the number of free neutrons. Some of these control mechanisms are the neutron moderator, the reflector, the control rods, etc.
Characteristics of neutrons
The neutron is formed by three quarks: one up quark and two down quarks. The neutron does not exist stably outside the atomic nucleus: the mean lifetime of a free neutron is only about 885 seconds (roughly 15 minutes). The mass of a neutron cannot be determined directly by mass spectrometry due to its lack of electric charge. However, it can be deduced, since the masses of a proton and of a deuteron can be measured with a mass spectrometer. From this we know that the mass of a neutron is 1.67492729 × 10⁻²⁷ kg, slightly larger than that of the proton. The total electric charge of the neutron is 0. This zero value has been tested experimentally: the experimental limit on the neutron charge is so close to zero that, given the experimental uncertainties, it is considered zero in comparison with the proton charge. Therefore, the neutron is considered to have zero net charge. The neutron is a spin-1/2 particle, that is, it is a fermion. For many years after the discovery of the neutron, its exact spin was ambiguous: although it was assumed to be a Dirac particle of spin 1/2, the possibility that the neutron was a spin-3/2 particle persisted. As a fermion, the neutron is subject to the Pauli exclusion principle; according to this principle, no two neutrons can occupy the same quantum state. The antineutron is the antiparticle of the neutron. The antineutron was discovered by Bruce Cork in 1956, a year after the antiproton was discovered. The first indication of the existence of the neutron occurred in 1930, when Walther Bothe and Herbert Becker found that when alpha radiation fell on elements such as lithium and boron, a new form of radiation was emitted.
Initially, this radiation was thought to be a type of gamma radiation, but it was more penetrating than any known gamma radiation. The work done by Irène Joliot-Curie and Frédéric Joliot in 1932, although it did not refute the gamma-radiation hypothesis, did not support it well either. In 1932, James Chadwick showed that these results could not be explained by gamma rays and proposed an alternative explanation: uncharged particles of about the same mass as a proton. Chadwick was able to verify this conjecture experimentally and thus demonstrate the existence of the neutron. Last review: March 19, 2019
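The mass deduction mentioned above can be checked numerically: the deuteron mass minus the proton mass, plus the deuteron binding energy (released as a gamma ray when a proton captures a neutron), gives the neutron mass. A minimal sketch, using standard reference values that are not taken from this page:

```cpp
// All values in MeV/c^2 (binding energy in MeV); standard reference figures.
#include <iostream>

int main() {
    double protonMass   = 938.272;   // measured by mass spectrometry
    double deuteronMass = 1875.613;  // measured by mass spectrometry
    double bindingMeV   = 2.224;     // deuteron binding energy
    double neutronMass  = deuteronMass - protonMass + bindingMeV;
    std::cout << "m_n = " << neutronMass << " MeV/c^2\n"; // ~939.565,
    // consistent with the 1.67492729e-27 kg quoted above
}
```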
The isoperimetric theorem states that: "Among all shapes with an equal area, the circle will be characterized by the smallest perimeter", which is equivalent to "Among all shapes with equal perimeter, the circle will be characterized by the largest area." The theorem's name derives from three Greek words: 'isos' meaning 'same', 'peri' meaning 'around' and 'metron' meaning 'measure'. A perimeter (= 'peri' + 'metron') is the arc length along the boundary of a closed two-dimensional region (= a planar shape). So, the theorem deals with shapes that have equal perimeters. Author(s): Jan Bogaert PUMAS ID: 01_22_03_1 Date Received: 2003-01-22 Date Revised: 2003-03-24 Date Accepted: 2003-03-26 As yet, no Activities/Lesson Plans have been accepted for this example. Comment by Ismail Kocayusufoglu on May 24, 2004: "There is an easy way of proving the main part of this example. Here it is. Let the rectangle have sides a and b, and let Area(Rectangle) = Area(Circle) (denote A(R) = A(C) for convenience). We show that Perimeter(Circle) < Perimeter(Rectangle), i.e. P(C) < P(R) = 2(a+b). Since the circle has area ab, its radius is r = √(ab/π), so P(C) = 2π√(ab/π); we therefore need π√(ab/π) < a+b. Squaring both sides, it is enough to show that πab < (a+b)². Lemma: Given a, b with a > b, then πab < (a+b)². Proof of Lemma: Since (a-b) > 0: (a-b)² = a² + b² - 2ab > 0, so a² + b² > 2ab. Because π - 2 < 2, it follows that a² + b² > (π-2)ab = πab - 2ab. Adding 2ab to both sides gives a² + b² + 2ab > πab, that is, (a+b)² > πab. This proves the theorem."
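A quick numerical spot-check of the inequality πab < (a+b)² proved in the comment above; the rectangle sides are arbitrary:

```cpp
// The circle with the same area as a 4 x 1 rectangle has the smaller perimeter.
#include <cmath>
#include <iostream>

int main() {
    const double pi = 3.14159265358979;
    double a = 4.0, b = 1.0;                              // rectangle sides, area 4
    double rectPerimeter   = 2.0 * (a + b);               // 10
    double circlePerimeter = 2.0 * std::sqrt(pi * a * b); // 2*sqrt(pi*ab) ~ 7.09
    std::cout << circlePerimeter << " < " << rectPerimeter << "\n";
}
```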
Astronomers have traced the growth of Earth's solar system back to its cosmic womb, before the sun and planets were born. The solar system coalesced from a huge cloud of dust and gas that was isolated from the rest of the Milky Way galaxy for up to 30 million years before the sun's birth nearly 4.6 billion years ago, a new study published online today (Aug. 7) in the journal Science suggests. This cloud spawned perhaps tens of thousands of other stars as well, researchers said. If further work confirms these findings, "we will have the proof that planetary systems can survive very well early interactions with many stellar siblings," said lead author Maria Lugaro, of Monash University in Australia. "In general, becoming more intimate with the stellar nursery where the sun was born can help us [set] the sun within the context of the other billions of stars that are born in our galaxy, and the solar system within the context of the large family of extrasolar planetary systems that are currently being discovered," Lugaro told Space.com via email.
A star is born
Radiometric dating of meteorites has given scientists a precise age for the solar system: 4.57 billion years, give or take a few hundred thousand years. (The sun formed first, and the planets then coalesced from the disk of leftover material orbiting our star.) But Lugaro and her colleagues wanted to go back even further in time, to better understand how and when the solar system started taking shape. This can be done by estimating the isotope abundances of certain radioactive elements known to be present throughout the Milky Way when the solar system was forming, and then comparing those abundances to the ones seen in ancient meteorites. (Isotopes are versions of an element that have different numbers of neutrons in their atomic nuclei.) Because radioactive materials decay from one isotope to another at precise rates, this information allows researchers to determine when the cloud that formed the solar system segregated out from the greater galaxy — that is, when it ceased absorbing newly produced material from the interstellar medium. Estimating radioisotope abundances throughout the Milky Way long ago is a tall order and involves complex computer modeling of how stars evolve, generate heavy elements in their interiors and eventually eject these materials into space, Lugaro said. But she and her team made a key breakthrough, coming up with a better understanding of the nuclear structure of one radioisotope known as hafnium-181. This advance led the researchers to a much improved picture of how hafnium-182 — a different isotope whose abundances in the early solar system are well known — is created inside stars. "I think our main advantage has been to be a team of experts in different fields: stellar astrophysics, nuclear physics, and meteoritic and planetary science so we have managed to exchange information effectively," Lugaro said.
A long-lasting stellar nursery
The team's calculations suggest that the solar system's raw materials were isolated for a long time before the sun formed — perhaps as long as 30 million years. "Considering that it took less than 100 million years for the terrestrial planets to form, this incubation time seems astonishingly long," Martin Bizzarro, of the University of Copenhagen in Denmark, wrote in an accompanying "Perspectives" piece in the same issue of Science. Bizzarro, like Lugaro, thinks the new results could have application far beyond our neck of the cosmic woods.
"With the anticipated discovery of Earthlike planets in habitable zones, the development of a unified model for the formation and evolution of our solar system is timely," Bizarro wrote. "The study of Lugaro et al. nicely illustrates that the integration of astrophysics, astronomy and cosmochemistry is the quickest route toward this challenging goal." The researchers plan to investigate other heavy radioactive elements to confirm and refine their timing estimates, Lugaro said.
A study from Lancaster University and Stockholm University, published in the Journal of Experimental Psychology, revealed that bilingual people think about time differently depending on the linguistic context in which they estimate the duration of events. Linguistics professors Panos Athanasopoulos and Emanuel Bylund explained that bilinguals often alternate between their languages consciously and unconsciously. In addition, different languages often refer to time differently. For example, Swedish and English speakers refer to physical distances ("take a short break"), while Spanish speakers refer to physical quantities and volumes ("take a small break"). The researchers asked native Swedish speakers who also spoke Spanish to estimate how much time had passed while watching either a line growing across a screen or a container filling up. The participants were prompted with the word "duracion" (Spanish for duration) or "tid" (the Swedish equivalent). When prompted with the Spanish word, bilinguals based their estimates on the volume of the container filling up. When prompted with the Swedish word, they changed their behavior and suddenly gave time estimates based on distance, referring to how far the lines had traveled, rather than volume. "The fact that bilinguals shift between these ways of estimating time effortlessly and unconsciously fits with a growing body of evidence demonstrating the ease with which language can creep into our most basic senses, including our emotions, our visual perception, and, it now appears, our sense of time," says Professor Athanasopoulos. He also said that the results show that bilinguals are more "flexible thinkers" than those who speak just one language. "There is evidence that mentally switching between different languages on a daily basis provides advantages for learning ability and multitasking, and even long-term benefits for mental well-being," he says. Source article in English: Bilingual people experience time differently.
Know the Difference: Lactose Sensitivity vs. Lactose Intolerance By Jaime Hollander People often refer to lactose intolerance as lactose sensitivity, but medical professionals do not. Lactose sensitivity, or sensitivity due to lactose, can be caused by various conditions. Lactose intolerance is the condition of not being able to digest lactose, the sugar found in milk and dairy products. Nearly 65 percent of the global population has a reduced ability to digest lactose. Here’s what you should know about lactose intolerance. What is lactose intolerance? Lactose intolerance is the condition of having a lactase enzyme deficiency. Lactase breaks down the sugar known as lactose, which is found primarily in milk and dairy products. People with lactose intolerance may experience gas, bloating or diarrhea after eating or drinking milk or dairy products with lactose. Why do people lack the lactase enzyme? After infancy, humans naturally begin producing less lactase. The level of intolerance is what informs the amount of lactose a person can consume before experiencing painful side effects. What are the symptoms of lactose intolerance? Within 30-120 minutes of consuming milk or milk products, people typically experience a range of physical symptoms, including gas, cramping, bloating, diarrhea and/or nausea. How bad can it be? Lactose intolerance symptoms can be mild or severe depending on the amount of lactose consumed and the degree of lactase deficiency. Some people who produce small amounts of lactase may be able to tolerate small servings of foods containing lactose. Common signs and symptoms include abdominal pain, gas, diarrhea, and/or bloating. Dairy allergy vs. lactose intolerance: Are they the same? No – the symptoms of food allergies are different from the symptoms of lactose intolerance. The side effects of lactose intolerance can cause moderate to severe discomfort, but an allergy is far more serious. A true food allergy is a reaction of the body’s immune system that can be severe or life-threatening. Consult your doctor if you think you may have a food allergy. Lactose intolerance is a common condition, so it’s important to recognize the key triggers and symptoms, and to understand the non-dairy food options available so you and your family can make smart food choices. If you have lactose intolerance, being aware of what’s in your meals and snacks -- and the impact those contents may have on your immediate health -- is essential to feeling your best every day. And if you’re lactose intolerant but don’t want to give up some of those favorite foods, these products from the LACTAID® Brand will help you eat what you want, when you want. Except for content on the LACTAID® Brand website, the links provided are for educational purposes only. No sponsorship or endorsement is implied. Information for this quiz came from the Mayo Clinic and the National Institutes of Health. ©Johnson & Johnson Consumer Inc. 2019
Who are the language learners that can achieve language proficiency? Learners who achieve foreign language proficiency are those who initiate and manage their own learning. What does it mean? This basically means that learners who rely only on instructional input fail to master a target language. The Danish researcher Leni Dam believes that learners do not necessarily learn what teachers believe their students learn. What teachers can provide for their learners is an awareness of the metacognitive aspects of learning: helping them become conscious of the whole process of learning, of how they think and learn (Dam, "Developing Learner Autonomy" 42).
Things beyond language learning
Little proposes that students need to be aware of things that go beyond learning language; these things can help them become proficient learners. He states that learners need to become aware of themselves as learners, and they need to know the learning techniques they should implement. Some learners write journals to note how well they have progressed in mastering personal language learning skills. Scandinavian students applied this method, and it turned out to be a successful way of learning: students noted which learning tasks were done well or badly (Little, "Autonomy in language learning" 86). However, the majority of learners do not "come into the language classroom with natural ability to make choices about what and how to learn" (Nunan 134). Because of this, Nunan proposes switching from being learner-centered to learning-centered.
What does it mean to be learning-centered for language proficiency?
Learning-centeredness assumes an approach where learners are guided on how to make informed choices (Nunan 134). Such classrooms are focused not only on the language being learnt, but also on the learning process. In this way, learners are aware of the "skills and knowledge they will need in order to make informed choices about what they want to learn and how they want to learn" (Nunan 134). Basically, it does not mean that a teacher should hand over responsibility and power to a learner from the first day; rather, the teacher directs learners at the beginning until they are able to make informed choices, encouraging them "to move toward the fully autonomous end of the pedagogical continuum" (Nunan 134).
Learning styles and strategies for language proficiency
What are learning styles and why are they important in mastering a foreign language? Learners differ in the way they acquire new things according to their preferred way of learning: by hearing (auditory learners), seeing (visual learners) or touching things (kinaesthetic learners). For each type of learner it is important to follow their preferred way of learning. Some will learn more easily by looking at charts and images (visual learners), others can benefit from listening to lectures and remembering (auditory learners), while certain people will benefit from actually touching things or conducting other physical activities, such as cutting things or learning through whole-body movement (kinaesthetic learners). Learners need to identify the styles and strategies that suit them best. Learners should employ critical thinking and make decisions on what and how to learn (Nunan). He also writes that learners need to become aware of curricular content and pedagogical materials in developing their critical learning skills.
Critical learning skills
Basically, if a teacher is able to involve students in the learning process, it can be beneficial to their learning outcome (Nunan 135).
Gravity pulls the layers of air down toward the Earth's surface. This is what gives rise to air pressure. Consequently, 99% of the total mass of the atmosphere is below 32 kilometers. Like all fluids (gases and liquids), the air exerts a pressure on everything within and around it, although we are not aware of it. Pressure is a force, or weight, exerted on a surface per unit area, and is measured in Pascals (Pa). The pressure exerted by a kilogram mass resting on one square metre of the Earth's surface is approximately 10 Pa. The pressure exerted by the whole atmosphere on the Earth's surface is approximately 100,000 Pa. Usually, atmospheric pressure is quoted in millibars (mb). 1 mb is equal to 100 Pa, so standard atmospheric pressure is about 1000 mb. In fact, actual values of atmospheric pressure vary from place to place and from day to day. At sea level, commonly observed values range between 970 mb and 1040 mb. Because pressure decreases with altitude, pressure observed at various stations must be adjusted to the same level, usually sea level. Sometimes, atmospheric pressure is quoted in millimetres, centimetres or inches of mercury. This older form of measurement is related to the traditional method of measuring atmospheric pressure using a mercury barometer. Typical sea-level atmospheric pressure is 76 cm of mercury (Hg) or 30 inches. Variations in atmospheric pressure lead to the development of winds that play a significant role in shaping our daily weather.
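A few sample conversions based on the relations quoted above (1 mb = 100 Pa) plus the standard figure 1 mmHg = 133.322 Pa; the sample pressure value is arbitrary:

```cpp
#include <iostream>

int main() {
    double millibars = 1013.25;            // a typical sea-level value
    double pascals = millibars * 100.0;    // 101325 Pa
    double cmHg = pascals / 1333.22;       // ~76.0 cm of mercury
    double inHg = cmHg / 2.54;             // ~29.9 inches
    std::cout << pascals << " Pa = " << cmHg << " cmHg = "
              << inHg << " inHg\n";
}
```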
What's that weird thing around Saturn's second-largest moon? There is something around the moon Rhea. It's not a ring, and it sure is weird, say researchers. Rhea, taken by the Cassini spacecraft in March 2010. Credit: NASA/JPL/Space Science Institute. Back in 2005, a suite of six instruments on the Cassini spacecraft detected what was thought to be an extensive debris disk around Saturn's moon Rhea, and while there was no visible evidence, researchers thought that perhaps there was a diffuse ring around the moon. This would have been the first ring ever found around a moon. New observations, however, have nixed the idea of a ring, but there's still something around Rhea that is causing a strange, symmetrical structure in the charged-particle environment around Saturn's second-largest moon. Researchers announced their findings in 2008 that there was a sharp, symmetrical drop in electrons detected around Rhea. This moon is about 1,500 kilometers (950 miles) in diameter, and scientists began searching for what could have caused the drop. If there were a debris disk around Rhea, it would have had to measure several thousand miles from end to end, and would probably be made of particles that would range from the size of small pebbles to boulders. Testing the hypothesis, Cassini flew by the moon several times and took 65 images of the region, flying at what would be edge-on to the rings, where the greatest amount of material would be within its line of sight. Using light angles to their advantage, and if the ring was there, the scientists should have been able to detect anything from micron-sized particles up to boulder-sized objects. But they saw nothing. "There are very strong and interesting and unexplained electromagnetic effects going on around Rhea," said Matthew Tiscareno from Cornell University, who led the imaging campaign. "But we're making a pretty strong case that it's not because of solid material orbiting the moon. … For the amount of dust that you need to account for [the earlier] observations, if it were there, we would have seen it." While the ring hypothesis has been disproved, there's still a mystery about the cause of the symmetrical structure in the charged particles around the moon. But the Cassini spacecraft and team are up for the challenge. Source: Cornell University. Nancy Atkinson blogs at Universe Today.
Epilepsy and Seizures
Epilepsy is a neurological condition that causes spontaneous, repeated seizures. Seizures occur when there are abnormal electrical impulses in the brain. A single seizure does not necessarily mean that a person will develop epilepsy. Epilepsy can occur on its own or can be associated with other neurological disorders. Epileptic seizures may cause: - Changes in sensation, such as taste, sight, smell, vision, and/or hearing - Changes in motor function, such as tremors, muscle spasms, rigidity, and/or loss of balance - Behavioral symptoms, such as staring episodes or unconsciousness. There are more than 30 different types of epileptic seizures. The type of seizure a person experiences depends on where the seizure begins in the brain. A partial seizure occurs when there are abnormal electrical impulses in only one area of the brain. Generalized epileptic seizures occur when electrical impulses "storm" or spread through the brain. How common is epilepsy? Epilepsy is more common than many realize. It is the fourth most common neurological problem in the United States, after migraine, stroke, and Alzheimer's disease. One in 26 people will develop epilepsy, and approximately 2.2 million people in the United States are living with the disease. Who gets epilepsy? Epilepsy and seizures can occur in anyone, from the very young to the very old. Seizures can occur in children with fevers (febrile seizures), which may or may not lead to epilepsy. Injury to the brain, infection (encephalitis or meningitis), certain genetic disorders, and chemical and nutritional imbalances may also cause seizures. Seizures also occur as the result of brain tumors, bleeding in the brain, brain birth defects, or abnormal blood vessels in the brain, among other reasons. Seizures in older adults can be caused by the factors listed above or by other factors. Seizures in older adults can also be a complication of: - Heart attacks - Alzheimer's disease - Brain tumors - Scar tissue from brain surgery - Kidney disease - Liver disease. How is epilepsy diagnosed? The cause and type of epilepsy must be diagnosed before treatment can begin. Diagnosis of epilepsy usually involves a physical examination and additional testing. The physical examination reveals clues that may help identify the cause of the seizures and helps assess overall health. If questions remain about the diagnosis, additional tests may be needed. Functional Brain Mapping: epilepsy diagnosis aims to identify the seizure focus in the brain. Several tests are available for functional brain mapping, including: - Functional MRI - Grid stimulation studies. Imaging of the brain is also used for epilepsy diagnosis. Imaging, such as a CT or MRI, is used to find whether there is a lesion responsible for the seizures. People with epilepsy may experience cognitive problems. Neuropsychological examinations can help explain problems like difficulty in using or understanding language, recognizing spatial patterns, remembering, and/or other mental functions. Neuropsychological evaluation often helps to identify the seizure focus. For example, patients with left temporal lobe epilepsy typically show a different pattern of results on neuropsychological testing than patients with right temporal lobe epilepsy. There are different types of seizures, each with unique symptoms. Seizures can be broadly divided into generalized (involving the whole brain) and focal (involving one small part of the brain). - Absence seizures cause a very brief (a few seconds) loss of consciousness.
They begin with little or no warning and may occur several times per day. They are more common in children.
- Myoclonic seizures cause random jerks or twitches, usually affecting both sides of the body, that people often describe as feeling like an electrical shock.
- Clonic seizures cause repetitive, rhythmic jerks and involuntary body movements.
- Tonic seizures cause stiffening of the muscles, making the body inflexible or rigid.
- Atonic seizures cause unexpected loss of muscle tone, usually in the arms and legs. This can sometimes cause falls and other accidents.
- Tonic-clonic seizures (formerly known as grand mal seizures) cause 30-60 seconds of the rigidity characteristic of tonic seizures, followed by 30-60 seconds of involuntary muscle jerks and contractions. After the clonic phase, most people fall into a deep sleep and may subsequently wake up disoriented or confused.
- Simple partial seizures cause a variety of symptoms depending on where they originate in the brain. They do not involve a loss of consciousness.
- Complex partial seizures cause a loss of consciousness or awareness. Though their eyes usually remain open, people seem distant, "out of it," or staring off into the distance. They also may perform repetitive actions like lip smacking and hand waving. Sometimes partial seizures evolve into generalized tonic-clonic seizures.

Auras and Epileptic Seizures
Many people experience an aura before a seizure. An aura can include a change in taste, vision, or hearing, and may cause the person to suddenly hear a distinct sound, smell an odor, or feel unusual. An aura actually can be helpful to someone with epilepsy. Because an aura is a warning that a seizure is imminent, it allows individuals to prepare themselves to prevent an injury.

Please seek the help of a licensed medical professional if you are concerned about your health, and dial 9-1-1 if you are experiencing an emergency.

If you are diagnosed with epilepsy or a seizure disorder, your neurologist may recommend treatment with one or more anti-epilepsy medications. About two-thirds of people with epilepsy who are treated in this way have excellent control of their symptoms with few or no side effects. If you have epilepsy that is not well controlled after a trial of medication, you might be eligible for alternative treatments (vagal nerve stimulation or the ketogenic diet) or, potentially, epilepsy surgery.

Several effective anti-epilepsy medications are available in the U.S. Choosing one or more epilepsy medications for your treatment requires discussion with your neurologist and epilepsy care team, and careful monitoring over time to ensure that you are getting relief from your symptoms with minimal side effects.

There are different types of epilepsy surgery. You may be considered for surgery if you meet the following criteria:
- Seizures are not controlled despite optimal medical management. For most people, this means a trial of at least three medications that are appropriate for the type of seizure, used in adequate doses.
- You experience intolerable side effects from the medications used to control seizures.
- A single "spot" or region in the brain can be identified as causing the seizures.
- The region causing the seizures can be removed safely without causing harm or loss of important functions such as speech or movement.

Decisions about whether to undergo surgery for epilepsy are different for everyone. No one recommendation fits every person.
A decision to consider epilepsy surgery does not mean that surgery always is possible. The evaluation may show that you are not a good candidate for surgery. For example, the evaluation may determine that seizures arise independently from both sides of the brain. Under these circumstances, surgery is seldom advised.

Neurosurgeons may use one of several different procedures to treat epilepsy:

Amygdalohippocampectomy is commonly performed for temporal lobe epilepsy related to mesial temporal sclerosis (scarring and atrophy of the hippocampus in the temporal lobe). Using an operating room microscope, this procedure focuses on minimal resection of the affected tissue and spares the remainder of the temporal lobe.

Temporal lobectomy, or removal of the temporal lobe, is less commonly performed for temporal lobe epilepsy. It is appropriate if temporal lobe seizures are caused by conditions other than mesial temporal sclerosis (cortical malformation, scar, tumor, etc.).

Lesion resection is often the preferred operation when a lesion or abnormality resides outside the temporal lobe. A CT or MRI may show a scar, old hemorrhage, tumor, or a cortical malformation. For many people, simply removing the lesion may alleviate their seizures.

Customized neocortical resection is a highly tailored procedure, often performed outside the temporal lobe. Patients may or may not have an area of abnormality on brain imaging studies. Surgery is typically performed with intracranial grid seizure recording and functional brain mapping.

Hemispherectomy is a relatively radical procedure that involves the removal of an entire cerebral hemisphere. The procedure is most appropriate for severe and unmanageable epilepsy in infants and children. Candidates for hemispherectomy may have a congenital condition that affects one hemisphere or an acquired disease such as Rasmussen's encephalitis. Depending on circumstances, the neurosurgeon may perform a total anatomic hemispherectomy (complete removal of the cortical hemisphere, sparing deeper structures) or a modified hemispherectomy (partial removal of the cerebral hemisphere, with disconnection of the remainder of the hemisphere from other brain structures).

Hypothalamic hamartoma resection involves removing a hamartoma, a benign tumor, from the hypothalamus.

Multiple subpial transection is performed to improve seizure control when the brain tissue causing the seizures cannot be removed because it serves a critically important function, such as speech. The region is "scored" with a probe to disrupt lateral (side-to-side) nerve fibers. The fibers that travel deep, which are more important for local tissue function, are spared.

Corpus callosotomy is a procedure in which the corpus callosum—the largest white matter tract connecting the two halves of the brain—is surgically divided. Corpus callosotomy does not cure seizures; rather, it prevents or slows their spread, making them less severe. This procedure is useful for children with severe drop attacks.

Vagal nerve stimulation uses a device, similar to a pacemaker, that sends mild pulses of electrical activity to the brain via the vagus nerve to prevent seizures. The device is placed under the skin on the chest, and a wire runs from it to the left side of the neck, where the vagus nerve is located.

The ketogenic diet can be used along with medications to help manage epilepsy in children and adolescents.
The ketogenic diet is a high-fat, low-carbohydrate diet that creates a state of ketosis in which the body uses fats for energy rather than carbohydrates. The diet is prescribed and monitored by a physician and dietitian. - Date of last review: November 26, 2016
This set of Instrumentation Transducers Multiple Choice Questions & Answers (MCQs) focuses on "Measurement of Current".

1. Find the initial value of the current whose transient response is given by I(s) = (s^2 + s) / (5s^2 + 1).
Explanation: The initial value of the current is found by applying the initial value theorem to the Laplace transform. The initial value theorem is given as i(0+) = lim (s -> infinity) s·I(s).

2. What is the purpose of making the internal resistance of a milli-ammeter very low?
a) High sensitivity
b) High accuracy
c) Minimum voltage drop across meter
d) Maximum voltage drop across meter
Explanation: An ammeter is connected in series with the circuit, so a low internal resistance lets the current flow easily. A high resistance in series with the current would produce a large voltage drop across the meter and disturb the measurement.

3. Which of the following devices can be used with both AC and DC?
b) Moving iron type
c) Moving coil type
d) Induction type
Explanation: Moving iron type meters can be used in both AC and DC applications.

4. How many electrons are contained in one coulomb of electric charge?
a) 6.242 × 10^18
b) 6.242 × 10^19
c) 6.242 × 10^20
d) 6.242 × 10^10
Explanation: The charge of one electron is about 1.6 × 10^-19 coulomb, hence one coulomb corresponds to the reciprocal of 1.6 × 10^-19, which is about 6.25 × 10^18 electrons (6.242 × 10^18 using the more precise electron charge of 1.602 × 10^-19 C).

5. In a conductor, after 1 minute of electron passage, the current is found to be 1 mA. How many electrons would have passed through that conductor?
Explanation: For n electrons, the total charge is n × 1.6 × 10^-19 C. The charge passed is Q = I × t = (1 × 10^-3 A) × (60 s) = 0.06 C, so n = 0.06 / (1.6 × 10^-19) = 3.75 × 10^17 electrons.

6. Which of the following can be treated as an equivalent of the ampere?
Explanation: The ampere can be treated as one coulomb per second, since it is the charge transferred in one second.

7. What will be the average value of a current whose peak value is 43 A?
Explanation: For a sinusoidal waveform, the average value (over a half cycle) is the product of the peak value and 0.636: 0.636 × 43 ≈ 27.3 A.

8. Using a low-resistance shunt, a permanent magnet moving coil instrument can be converted to ______________
a) Volt meter
d) Watt meter
Explanation: By connecting a low-resistance shunt, a permanent magnet moving coil instrument can be converted into an ammeter.

9. An induction meter can handle current up to __________
a) Below 10 A
Explanation: The current handling capacity of an induction meter is high, up to 100 A.

10. Which of the following effects is used in AC ammeters?
a) Electromagnetic effect
b) Electrostatic effect
c) Chemical effect
d) Magnetic effect
Explanation: AC ammeters use the electromagnetic effect for their operation. This effect is also employed in some integrating instruments.

Sanfoundry Global Education & Learning Series – Instrumentation Transducers. To practice all areas of Instrumentation Transducers, here is a complete set of 1000+ Multiple Choice Questions and Answers.
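The arithmetic behind questions 4, 5, and 7 can be checked with a short program. The following is a minimal sketch in C++ (the language used for code elsewhere in this document); the variable names are illustrative and not part of the original question set, and it assumes the 0.636 factor is the half-cycle average of a sinusoid (2/pi).

#include <cstdio>

int main() {
    const double e = 1.6e-19;              // charge of one electron, in coulombs

    // Q4: number of electrons in one coulomb
    double electronsPerCoulomb = 1.0 / e;  // about 6.25e18

    // Q5: electrons passed when 1 mA flows for 1 minute
    double charge = 1.0e-3 * 60.0;         // Q = I * t = 0.06 C
    double electrons = charge / e;         // about 3.75e17

    // Q7: average value of a sinusoid from its peak (2/pi is about 0.636)
    double average = 0.636 * 43.0;         // about 27.3 A

    std::printf("electrons per coulomb: %.3e\n", electronsPerCoulomb);
    std::printf("electrons in 1 minute at 1 mA: %.3e\n", electrons);
    std::printf("average of a 43 A peak: %.1f A\n", average);
    return 0;
}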
All the subjects are listed for English, Mathematics, Science, Geography, Design and Technology, ICT, History, Music, Art and Design, RE, PE, and PSHE. Other areas are listed where resources can also be found such as Special Needs, Early Years, Books, Hardware, Software and also Schemes of Work. Unit 1. Ongoing skills | Unit 2. Sounds interesting - Exploring sounds | Unit 3. The long and the short of it - Exploring duration | Unit 4. Feel the pulse - Exploring pulse and rhythm | Unit 5. Taking off - Exploring pitch | Unit 6. What's the score? - Exploring instruments and symbols | Unit 7. Rain, rain, go away - Exploring timbre, tempo and dynamics | Unit 8. Ongoing skills | Unit 9. Animal magic - Exploring descriptive sounds | Unit 10. Play it again - Exploring rhythmic patterns | Unit 11. The class orchestra - Exploring arrangements | Unit 12. Dragon scales - Exploring pentatonic scales | Unit 13. Painting with sound - Exploring sound colours | Unit 14. Salt, pepper, vinegar, mustard - Exploring singing games | Unit 15. Ongoing skills | Unit 16. Cyclic patterns - Exploring rhythm and pulse | Unit 17. Roundabout - Exploring rounds | Unit 18. Journey into space - Exploring sound sources | Unit 19. Songwriter - Exploring lyrics and melody | Unit 20. Stars, hide your fires - Performing together | Unit 21. Who knows? - Exploring musical processes
(1) A type of rock formed from a molten, or partially molten, material. (2) An activity related to the formation and movement of molten rock, either in the subsurface (plutonic) or on the surface (volcanic).

A rock which has formed by cooling from the molten state. Igneous rocks include lavas and other products of volcanoes (e.g. ash) as well as rocks which crystallise at great depth in the crust, such as granites or gabbros.

Rocks that crystallize below the Earth's surface are called intrusive; they have large crystals produced by slow cooling. Granite is an example of an intrusive rock; basalt is an example of an extrusive rock.

Types of rocks formed by solidification from a molten or liquid state.

Like or suggestive of fire; "the burning sand"; "a fiery desert wind"; "an igneous desert atmosphere".

One of the three basic categories into which rocks can be classified, of which the other two are sedimentary and metamorphic. Igneous rocks are formed by the cooling of molten rock, called magma.

Pertaining to, having the nature of, fire; containing fire; resembling fire; as, an igneous appearance produced under conditions involving intense heat; "igneous rock is rock formed by solidification from a molten state, especially from molten magma"; "igneous fusion is fusion by heat alone"; "pyrogenic strata".

1. From a magma; said of a rock or mineral that solidified from molten or partly molten material. 2. A rock or mineral that solidified from molten or partly molten material.

Any rock solidified from molten or partly molten material. BWCA basalt, gabbro, and granite are of igneous origin.

adj. A word used to describe any type of rock that was once molten, whether as magma (inside the earth, or intrusive) or lava (outside the earth, or extrusive); one of three major rock types; see also sedimentary and metamorphic.

Said of a rock or mineral that solidified from molten or partly molten material, i.e. from magma; also applied to processes leading to, related to, or resulting from the formation of such rocks. Igneous rocks constitute one of the three main classes into which rocks are divided, the others being metamorphic and sedimentary.

Resulting from, or produced by, the action of fire; as, lavas and basalt are igneous rocks.

Rock formed by cooled and hardened magma within the crust or lava on the surface.

Any of various crystalline or glassy, noncrystalline rocks formed by the cooling and solidification of molten earth material (magma). Igneous rocks comprise one of the three principal classes of rocks, the others being metamorphic and sedimentary rocks. Though they vary widely in composition, most igneous rocks consist of quartz, feldspars, pyroxenes, amphiboles, micas, olivines, nepheline, leucite, and apatite. They may be classified as intrusive or extrusive rocks.
Function overloading in C++ is an object-oriented programming feature. In this tutorial we will look at function overloading in C++ and a program demonstrating it.

What is function overloading
Writing functions with the same name but different arguments or argument lists is known as function overloading.

Function overloading definition
A set of functions having the same name but performing different tasks, depending on the parameters passed.

Why we use function overloading
If we want to perform the same task but with a different type or a different number of arguments, we overload the function.

Function overloading example
Suppose we first want to add two integers and then want to add three integers; we can use the same function name for addition, but with a different number of integer arguments.

Function overloading is implemented in the following cases:
Case 1: The number of arguments accepted by the functions is different.
Case 2: If the number of arguments accepted by the functions is the same, the function call is distinguished by the data types of the arguments.

Program for function overloading

#include <iostream>
using namespace std;

void square(double s) {          // overload for floating-point values (Case 2)
    double val = s * s;
    cout << "square of float numbers= " << val;
}

void square(int s) {             // overload for integer values (Case 2)
    int res = s * s;
    cout << "\nsquare of int numbers= " << res;
}

int main() {
    square(10.0);   // resolves to square(double)
    square(10);     // resolves to square(int)
}

1. First we write the header file in the program.
2. Next we declare the functions that we want to overload.
3. Write the main function.
4. Now call the functions that we overloaded by passing values to them.
5. We first call the function square(10.0);
6. Control is transferred to the function and the function executes:
void square(double s) => s = 10.0
double val = s*s; => val = 100
cout<<"square of float numbers= "<<val; => square of float numbers= 100
7. Then control is transferred back to main.
8. Now we call the second function by passing an integer value: square(10);
9. Control is transferred to the function and the function executes:
void square(int s) => s = 10
int res=s*s; => res = 100
cout<<"\nsquare of int numbers= "<<res; => square of int numbers= 100
10. The program ends.
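The program above illustrates Case 2 (same number of arguments, different types). As a complementary sketch of Case 1, overloading by the number of arguments, consider the following; the function name add is illustrative and not from the original tutorial.

#include <iostream>
using namespace std;

// Case 1: same name, different number of arguments
int add(int a, int b)        { return a + b; }
int add(int a, int b, int c) { return a + b + c; }

int main() {
    cout << add(2, 3) << "\n";      // calls add(int, int)      -> prints 5
    cout << add(2, 3, 4) << "\n";   // calls add(int, int, int) -> prints 9
}

The compiler selects the overload whose parameter list matches the call, so both the count and the types of the arguments take part in overload resolution.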
1. What is an Adjective and its Functions
An adjective is used with a noun to describe or qualify it. Its function is to modify the noun, to complement it, or to support it within the phrase. With the help of adjectives, we can describe the subject or the object in a sentence, enriching the content and improving comprehension.
Depression is a mood disorder characterized by debilitating feelings of sadness and hopelessness that interfere with normal activities of day-to-day life. Depression commonly develops in young adults and adolescents, but the disorder can also occur in children. A combination of various factors can trigger people to develop depression. Such factors include genetic predisposition, environmental factors, hormones, brain chemicals (neurotransmitters), trauma, relationship problems, and stressful life events (such as divorce or loss of a job). For unknown reasons, women have a much higher risk of developing depression than men.

Studies have shown that certain areas of the brain that shape our mood, memory and behavior can have an altered appearance in those suffering from depression. For instance, some studies have revealed the hippocampus to be physically smaller in people with depression, and suggest that the degree to which it is smaller may correspond to the frequency of symptoms. Whether these physical changes in part lead to depression or are instead an effect of the disorder is still undetermined. Frequently, people with serious medical problems also have depression. Neurological disorders, thyroid diseases, cancer, infections, and metabolic disorders often lead to depression symptoms.

Common signs and symptoms of depression include persistent sadness or low mood, loss of interest or pleasure in activities, changes in sleep and appetite, fatigue, difficulty concentrating, feelings of worthlessness or guilt, and thoughts of death or suicide.

In some patients, depression is difficult to diagnose because it may mimic other medical conditions. Major depression is often unrecognized in the elderly, for whom the disorder is frequently mistaken for the signs and symptoms of aging. When depression is suspected, a physical examination and lab tests should be performed to rule out a medical illness. A psychiatric assessment includes a thorough evaluation of a patient's thoughts and behaviors; history of other psychiatric disorders; current stressors; major life events; occupational history; relationships; family history of mental illness; and use of illicit substances and alcohol. Special attention is given to the presence of thoughts about suicide and signs of poor self-care.

Whether patients with depression are treated in a hospital or an outpatient setting depends on the severity of their symptoms and their risk of harming themselves or others. Hospitalization is frequently indicated for patients who are deemed at a high risk of suicide or have extremely impaired self-care. The choice of antidepressant medication depends on the drug's side effects and a patient's response to treatment. Usually patients need 4 to 6 weeks of treatment before a physician can determine whether a specific therapy is effective. Research shows that some patients, especially adolescents, may have an increased risk of developing suicidal thoughts while taking antidepressant medications, so patients need to be carefully monitored after any medication is initiated.

Psychotherapy is also called talk therapy, and for some patients with mild to moderate forms of depression, psychotherapy alone is the initial treatment. A combination of antidepressant medication and psychotherapy is usually required for patients with moderate to severe forms of depression. Patients are advised to engage in regular exercise, which has been shown to significantly reduce depression symptoms. Additionally, proper sleep and diet are recommended. Patients with depression should avoid alcohol and illicit substances. S-adenosyl methionine (SAMe) and St. John's wort may help reduce mild forms of depression.
These over-the-counter treatments are associated with potential risks, and more research is needed to determine their efficacy. With the patient’s consent, family members are educated about depression and early signs of relapse. Patients with supportive relationships tend to achieve better outcomes. Although there is no specific method of preventing depression, patients may decrease the risk for major depression by controlling stress and maintaining relationships with family and friends. Individuals with early signs of depression should be treated promptly to prevent severe debilitation.
Most computer users have learned that running too many programs at the same time can slow down or even crash the machine. We work around these limitations by closing programs when we aren't using them. Just like computers, human brains have a limited amount of processing power (as further discussed in our course on The Human Mind and Usability). When the amount of information coming in exceeds our ability to handle it, our performance suffers. We may take longer to understand information, miss important details, or even get overwhelmed and abandon the task.

In the field of user experience, we use the following definition: the cognitive load imposed by a user interface is the amount of mental resources required to operate the system. Informally, you can think of mental resources as "brain power" — more formally, we're talking about slots in working memory.

The term cognitive load was originally coined by psychologists to describe the mental effort required to learn new information. Though web browsing is a much more casual activity than formal education, cognitive load is still important: users must 'learn' how to use a site's navigation, layout, and transactional forms. And even when the site is fairly familiar, users must still carry around the information that is relevant to their goal. For instance, when planning a vacation, the users' cognitive load includes interface-related knowledge and the specific vacation-related constraints they may have (such as price and timeframe).

When a computer can't handle our processing demands, we can simply upgrade to a newer, more powerful machine. But to date there's no way to increase the actual processing power of our brains. Instead, designers must understand and accommodate these limits.

Intrinsic vs. Extraneous Cognitive Load
There's no way to eliminate cognitive load entirely—in fact, even if this were possible, it wouldn't be desirable. After all, people visit websites to get information. They've come to find out something about your product, organization, or content; most likely it's something they didn't already know. Intrinsic cognitive load is the effort of absorbing that new information and of keeping track of their own goals. Designers should, however, strive to eliminate, or at least minimize, extraneous cognitive load: processing that takes up mental resources but doesn't actually help users understand the content (for example, different font styles that don't convey any unique meaning).

Minimizing Cognitive Load
User attention is a precious resource and should be allocated accordingly. Many of our top usability guidelines—from chunking content to optimizing response times—are aimed at minimizing cognitive load. In addition to these basics, here are three more tips for minimizing cognitive load:

Avoid visual clutter: redundant links, irrelevant images, and meaningless typography flourishes slow users down. (Note that meaningful links, images, and typography are valuable design elements; it is only when overused that these backfire and actually impair usability.)

Build on existing mental models: People already have mental models about how websites work, based on their past experiences visiting other sites. When you use labels and layouts that they've encountered on other websites, you reduce the amount of learning they need to do on your site.

Offload tasks: Look for anything in your design that requires users to read text, remember information, or make a decision.
Then look for alternatives: can you show a picture, re-display previously entered information, or set a smart default? You won't be able to shift all tasks away from users, but every task you eliminate leaves more mental resources for the decisions that truly are essential.
If you think that mathematical proof is really clearcut and universal then you should read this article.
It is obvious that we can fit four circles of diameter 1 unit in a square of side 2 without overlapping. What is the smallest square into which we can fit 3 circles of diameter 1 unit?
Investigate circuits and record your findings in this simple introduction to truth tables and logic.
Three points A, B and C lie in this order on a line, and P is any point in the plane. Use the Cosine Rule to prove the following.
Make a set of numbers that use all the digits from 1 to 9, once and once only. Add them up. The result is divisible by 9. Add each of the digits in the new number. What is their sum? Now try some. . . .
How many pairs of numbers can you find that add up to a multiple of 11? Do you notice anything interesting about your results?
Try to solve this very difficult problem and then study our two suggested solutions. How would you use your knowledge to try to solve variants on the original problem?
Find some triples of whole numbers a, b and c such that a^2 + b^2 + c^2 is a multiple of 4. Is it necessarily the case that a, b and c must all be even? If so, can you explain why?
Learn about the link between logical arguments and electronic circuits. Investigate the logical connectives by making and testing your own circuits and record your findings in truth tables.
A paradox is a statement that seems to be both untrue and true at the same time. This article looks at a few examples and challenges you to investigate them for yourself.
Problem solving is at the heart of the NRICH site. All the problems give learners opportunities to learn, develop or use mathematical concepts and skills. Read here for more information.
Consider the equation 1/a + 1/b + 1/c = 1 where a, b and c are natural numbers and 0 < a < b < c. Prove that there is only one set of values which satisfies this equation.
An article which gives an account of some properties of magic squares.
Learn about the link between logical arguments and electronic circuits. Investigate the logical connectives by making and testing your own circuits and fill in the blanks in truth tables to record. . . .
Prove that the shaded area of the semicircle is equal to the area of the inner circle.
An introduction to how patterns can be deceiving, and what is and is not a proof.
Imagine two identical cylindrical pipes meeting at right angles and think about the shape of the space which belongs to both pipes. Early Chinese mathematicians called this shape the mouhefanggai.
Toni Beardon has chosen this article introducing a rich area for practical exploration and discovery in 3D geometry.
If you take two tests and get a marks out of a maximum b in the first and c marks out of d in the second, does the mediant (a+c)/(b+d) lie between the results for the two tests separately?
In this 7-sandwich: 7 1 3 1 6 4 3 5 7 2 4 6 2 5 there are 7 numbers between the 7s, 6 between the 6s etc. The article shows which values of n can make n-sandwiches and which cannot.
Can you discover whether this is a fair game?
Can you cross each of the seven bridges that join the north and south of the river to the two islands, once and once only, without retracing your steps?
There are four children in a family, two girls, Kate and Sally, and two boys, Tom and Ben. How old are the children?
Take any rectangle ABCD such that AB > BC. The point P is on AB and Q is on CD. Show that there is exactly one position of P and Q such that APCQ is a rhombus.
The diagram shows a regular pentagon with sides of unit length. Find all the angles in the diagram. Prove that the quadrilateral shown in red is a rhombus.
A composite number is one that is neither prime nor 1. Show that 10201 is composite in any base.
Patterns that repeat in a line are strangely interesting. How many types are there and how do you tell one type from another?
ABCD is a square. P is the midpoint of AB and is joined to C. A line from D perpendicular to PC meets the line at the point Q. Prove AQ = AD.
In how many distinct ways can six islands be joined by bridges so that each island can be reached from every other island...
Can you convince me of each of the following: If a square number is multiplied by a square number, the product is ALWAYS a square number?
The final of five articles, which contains the proof of why the sequence introduced in article IV either reaches the fixed point 0 or enters a repeating cycle of four values.
It is impossible to trisect an angle using only ruler and compasses, but it can be done using a carpenter's square.
Caroline and James pick sets of five numbers. Charlie chooses three of them that add together to make a multiple of three. Can they stop him?
Take any prime number greater than 3, square it and subtract one. Working on the building blocks will help you to explain what is special about your results.
The country Sixtania prints postage stamps with only three values: 6 lucres, 10 lucres and 15 lucres (where the currency is in lucres). Which values cannot be made up with combinations of these postage. . . .
If you know the sizes of the angles marked with coloured dots in this diagram, which angles can you find by calculation?
Find the smallest positive integer N such that N/2 is a perfect cube, N/3 is a perfect fifth power and N/5 is a perfect seventh power.
The nth term of a sequence is given by the formula n^3 + 11n. Find the first four terms of the sequence given by this formula and the first term of the sequence which is bigger than one million. . . .
Show that if you add 1 to the product of four consecutive numbers the answer is ALWAYS a perfect square.
The first of two articles on Pythagorean Triples, which asks how many right-angled triangles you can find with the lengths of each side exactly a whole number measurement. Try it!
This is the second article on right-angled triangles whose edge lengths are whole numbers.
Start with any whole number N, write N as a multiple of 10 plus a remainder R and produce a new whole number N'. Repeat. What happens?
Prove Pythagoras' Theorem using enlargements and scale factors.
Some puzzles requiring no knowledge of knot theory, just a careful inspection of the patterns. A glimpse of the classification of knots and a little about prime knots, crossing numbers and. . . .
This article looks at knight's moves on a chess board and introduces you to the idea of vectors and vector addition.
Take any whole number between 1 and 999, add the squares of the digits to get a new number. Make some conjectures about what happens in general.
The first of five articles concentrating on whole number dynamics; ideas of general dynamical systems are introduced and seen in concrete cases.
This article extends the discussions in "Whole Number Dynamics I", continuing the proof that, for all starting points, the Happy Number sequence goes into a loop or homes in on a fixed point.
In this third of five articles we prove that whatever whole number we start with for the Happy Number sequence we will always end up with some set of numbers being repeated over and over again. This article discusses how every Pythagorean triple (a, b, c) can be illustrated by a square and an L shape within another square. You are invited to find some triples for yourself.
In a prior post we shared a process for establishing a pyramid of interventions. This process allows schools to determine, define and organize the various strategies, accommodations and interventions currently taking place in the school, to honour and account for the expertise that exists in the building. A pyramid of intervention is not a static creation. It is intended to be reviewed and revised on a regular basis to ensure new ideas and practices are reflected in the pyramid. As schools begin to refine and revise their pyramids of intervention, there is a need to identify and differentiate between interventions, strategies and accommodations.

Interventions are meant to effectively bridge a gap for students, provided in addition to regular classroom instruction. Three things identify an intervention and differentiate interventions from strategies and accommodations:
- Provides targeted assistance based on assessment – unless the intervention is targeted and put in place based on assessment data, it is unlikely to effectively address the student concern for which it is intended.
- Delivered by a highly qualified class teacher or another specialist – as interventions are established at tiers two, three and four, their increasing intensity requires higher levels of training and expertise. For an intervention to be truly impactful, it must be delivered by an individual trained to provide that intervention with maximum fidelity.
- Provides additional instruction for an individual or small group – the higher we go on the tiers of the pyramid, the smaller the intervention groups should become. Maximum gain for the majority of interventions will happen for groups of eight students or fewer.

Access a template for evaluating whether proposed interventions meet the three criteria for an intervention – Examining Intervention Strategies – Template

Whereas interventions will be purposefully articulated at tiers two, three and four, strategies should be used primarily at tier two – the classroom level. Strategies do not need to meet the criteria established for interventions, but should be framed as "what could work" for students. An organization of differentiated strategies, collected from teachers and shared in the pyramid of interventions, can become a valuable resource during a collaborative team meeting, when investigating all that could be done at the lower tiers of support. A myriad of effective, proven strategies to support students at the tier two level ensures the greatest point of impact for students is found in the classroom and in the hands of the classroom teacher.

In its most simplistic definition, we put accommodations in place to help students cope with any gaps that may exist limiting their success. For a student who has difficulty reading text, a text-to-speech accommodation may be beneficial. For a child who struggles with attention, fidgets may be effective to reduce distractions. Accommodations address gaps, but may do little to close those gaps. Although they are a valuable part of the overall picture of support for students, they must be balanced with interventions and strategies that strive to reduce achievement gaps. Like strategies, we believe accommodations must be organized and articulated primarily at the tier two classroom level.
A template has been developed to help organize and record strategies, accommodations and interventions in place for students – Student Intervention Record – Template

Further discussion related to interventions, strategies and accommodations can be found in chapter five of our book Envisioning a Collaborative Response Model. Click on the book link to find out how to order. We wish you all the best as you strive to support the needs of your students!
Dwarfism: Dwarfism is a medical or genetic condition that results in an adult height of 4’10” or shorter. Most occurrences of dwarfism result from a random genetic mutation in either the sperm or the egg, rather than from either parent’s complete genetic makeup. Most children with dwarfism are born to parents of normal height.

There are over 200 forms of dwarfism. The most common form, Achondroplasia, accounts for 70% of all cases. Achondroplasia results in arms and legs that are disproportionate to head and trunk size. In disproportionate dwarfism, children are at greater risk for additional health problems, including: delays in motor skills development, such as sitting up, crawling and walking; frequent ear infections and risk of hearing loss; bowing of the legs; difficulty breathing during sleep (sleep apnea); pressure on the spinal cord at the base of the skull; excess fluid around the brain (hydrocephalus); crowded teeth; progressive, severe hunching or swaying of the back; in adulthood, narrowing of the channel in the lower spine, resulting in pressure on the spinal cord and subsequent pain or numbness in the legs; arthritis in adulthood; and weight gain that can further complicate problems with joints and the spine and place pressure on nerves.

Treatment: Treatment should be symptom-focused, based on the complications noted above. With some adaptations made for their height difference, children and adults with dwarfism lead normal lives, even having children of their own.
They may be the tiniest birds on the planet, but Ruby-throated hummingbirds are the biggest eaters. In fact, no animal has a faster metabolism—roughly 100 times that of an elephant. Hummingbirds burn food so fast they often eat 1-1/2 to 3 times their weight in nectar and insects per day. Perhaps Ruby-throated "hungrybirds" is a more descriptive name for them!

Maybe this explains why people rarely see hummers when they aren't eating. In order to gather enough nectar, hummingbirds must visit hundreds of flowers every day. And just one day of cold temperatures or bad luck finding flowers can mean death. Hummingbirds push the limits, and live their lives only a few hours from starvation.

To survive cold nights and food shortages, hummingbirds can enter torpor, a deep, sleep-like state in which their metabolism slows dramatically. A hummingbird consumes as much as 50 times more energy when awake than when torpid. If you were to find a hummingbird in torpor, it would appear lifeless. If a predator were to find one, it would be lifeless indeed! While torpor has benefits, there are risks too. It can take as long as an hour for the bird to come back into an active state, so a torpid hummer cannot respond to emergencies.

How do hummingbirds come out of torpor? As their heart and breathing rates rise, they start vibrating their wings. The use of any body muscles produces heat; that's why you get warm while exercising. The heat generated by the vibrating wings warms the hummer's blood supply. The warmed blood circulates throughout the tiny bird's body, and soon the hummer's body temperature is back up to its normal toasty 102.2 degrees Fahrenheit.

Do an experiment with clay to learn more about torpor here: Try This!

Math Journaling Question
It may be hard to imagine the challenges hummingbirds face every day in order to stay alive. The following steps will help you figure out the answer:
1. Calculate the number of Calories per ounce a hummingbird needs in a day. (A Ruby-throated Hummingbird needs 10 Calories per day and weighs 1/10 of an ounce.)
2. Figure your own weight in ounces.
3. If you burned food at the same rate (Calories/ounce) as a hummingbird does, how many Calories would you need per day?
4. How many Calories are in one serving of your favorite food? How much of this food would you need to eat per day?
5. If you're awake 16 hours in a day, how much of your favorite food would you need to eat per hour?
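Here is one possible worked example for the steps above (the 100-pound student and the 285-Calorie pizza slice are illustrative assumptions, not part of the original exercise):

1. A hummingbird needs 10 Calories per day and weighs 1/10 of an ounce, so it needs 10 ÷ 0.1 = 100 Calories per ounce per day.
2. A 100-pound student weighs 100 × 16 = 1,600 ounces.
3. At the hummingbird's rate, that student would need 100 × 1,600 = 160,000 Calories per day.
4. If a slice of pizza has about 285 Calories, that works out to 160,000 ÷ 285 ≈ 561 slices per day.
5. Spread over 16 waking hours, that is roughly 561 ÷ 16 ≈ 35 slices per hour.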
Heart failure (HF), often referred to as congestive heart failure (CHF), occurs when the heart is unable to pump sufficiently to maintain blood flow to meet the body's needs. The terms chronic heart failure (CHF) or congestive cardiac failure (CCF) are often used interchangeably with congestive heart failure. Signs and symptoms commonly include shortness of breath, excessive tiredness, and leg swelling. The shortness of breath is usually worse with exercise or while lying down, and may wake the person at night. A limited ability to exercise is also a common feature.

Common causes of heart failure include coronary artery disease, including a previous myocardial infarction (heart attack), high blood pressure, atrial fibrillation, valvular heart disease, excess alcohol use, infection, and cardiomyopathy of an unknown cause. These cause heart failure by changing either the structure or the functioning of the heart. There are two main types of heart failure: heart failure due to left ventricular dysfunction and heart failure with normal ejection fraction, depending on whether the ability of the left ventricle to contract is affected, or the heart's ability to relax.

Signs and Symptoms
Heart failure symptoms are traditionally and somewhat arbitrarily divided into "left" and "right" sided, recognizing that the left and right ventricles of the heart supply different portions of the circulation. However, heart failure is not exclusively backward failure (in the part of the circulation which drains to the ventricle).

Congestive heart failure
Heart failure may also occur in situations of "high output" (termed "high output cardiac failure"), where the ventricular systolic function is normal but the heart cannot deal with an important augmentation of blood volume. This can occur in overload situations (blood or serum infusions), kidney diseases, chronic severe anemia, beriberi (vitamin B1/thiamine deficiency), thyrotoxicosis, Paget's disease, arteriovenous fistulae, or arteriovenous malformations. Viral infections of the heart can lead to inflammation of the muscular layer of the heart and cause heart failure.

In 2011, congestive heart failure was the most common reason for hospitalization for adults aged 85 years and older, and the second most common for adults aged 65–84 years. It is estimated that one in five adults at age 40 will develop heart failure during their remaining lifetime, and about half of people who develop heart failure die within 5 years of diagnosis. Rates of heart failure are much higher in African Americans, Hispanics, Native Americans, and recent immigrants from eastern bloc countries like Russia.
Why can you vividly recall the day your father took you to your first baseball game many years ago, but you can't remember where you just put the car keys? We tend not to think about it much, but memory is the seat of consciousness. The process of how we remember, how we forget, and why we remember certain things and not others is a rich subject of scientific inquiry, and a fascinating window onto who we are and what makes us tick. In our e-book, Remember When? The Science of Memory, we explore what science can and can't tell us about memory.

In the introductory section called "What Is Memory?" we define what memory is, including what makes something memorable and some common misconceptions about memory. "You Must Remember This ... Because You Have No Choice," by Gary Stix, explores why some people can remember what they had for lunch on a Tuesday 20 years ago while others can't. Nobel laureate Eric Kandel, a neuroscientist and psychiatrist, discusses a range of topics, from his groundbreaking work on how the brain acquires and holds memories to Freud's psychoanalysis.

Section 2, "The Anatomy of Memory," delves deeper into the process of memory formation, from how memories are saved to how they're transferred from short-term storage in the hippocampus to long-term storage in the cortex. "Brain Cells for Grandmother" looks at a controversial theory that some memories have corresponding neurons assigned to them—that there is a neuron for grandmother, another for actress Jennifer Aniston, and so on. We also explore the role of memory in learning and the effects of trauma and age. Joe Z. Tsien discusses his technique of genetically tweaking certain receptor proteins on neurons in "Building a Brainier Mouse." In "Erasing Painful Memories," veteran journalist Jerry Adler looks at research into both behavioral therapies and drugs that can help to alter painful or traumatic memories after the fact.

Section 6, "Aging," analyzes memory as it relates to typical aging processes; it's well known that the ability to recall things diminishes as we age, but short of a diagnosis of dementia, the causes remain mysterious. Finally, the last section looks at ways to improve your memory. One story links dreaming to improved learning. In "A Pill to Remember," R. Douglas Fields summarizes the work behind the idea of a "smart pill," based on the relatively recent discovery that a specific protein kinase might boost memory and could be given in pill form to enhance that most mysterious process.

Click here to buy this and other Scientific American eBooks: http://books.scientificamerican.com/sa-ebooks/.
Encouraging your preschooler to eat
Good eating habits begin young, so find simple ways you can encourage your preschooler to eat well and eat right:
- Don't forget that your preschooler has a small stomach - about the size of her fist - and she will know when she's had enough to eat.
- Always offer a range of nutritious food.
- Limit unhealthy snack foods in the house - that way, you can offer them to your preschooler as a treat rather than an everyday food.
- Avoid cordials and too much fruit juice, as these are high in sugar and will take away your child's appetite for other foods.
- If your child says she's thirsty just before she eats, offer her water only.
- Encourage your preschooler to help prepare the meal. There is almost always a small task that can be managed by a child - setting the table, getting food from the fridge for you, adding ingredients to a bowl. Save peeling, grating and cutting until she knows how to handle kitchen implements safely.
- Don't serve your child too much food - it's better to have her ask for more if she's still hungry than have her sit face-to-face with a mountain of uneaten food on her plate.
- Don't use dessert as a bribe to eat the rest of the meal - it rarely works and can often lead to more resistance over dinner.
- Invite one of your child's friends over for a meal. The feeling of festivity at the table often encourages a fussy eater to eat.
- If your preschooler rejects everything you put on her plate, try placing all the meal's food on communal plates in the centre of the table and encourage her to serve herself.
- If your preschooler is too tired to eat at dinner time, try giving her most of her dinner for afternoon tea and then offer her a light supper when you eat later.
- 'Picnic food' is sometimes a nice substitute for a meal at the table. Try offering cold meats, bread, grated raw veggies and salad on a mixed plate - but don't stress if it's not all eaten.
- Don't force your preschooler to eat, as this usually ends in tears. You could cause her to choke - it's almost impossible to chew and swallow if you're crying - and it may make her tense about eating in the future.

Your child is born instinctively knowing how much food she needs, and she won't naturally overeat. However, she can easily lose this skill. If she's always pushed to eat more than she wants or is encouraged to finish everything on the plate, she may learn to ignore her body's messages signalling that she's had enough to eat. This can lead to weight problems later in life.

Learning to use a knife and fork can be a slow process for your young child. Let her have fun with her food, because the more practice she gets doing it for herself, the quicker she'll master the skills.
WHO guidelines for indoor air quality As people spend a considerable amount of time indoors, either at work or at home, indoor air quality plays a significant part in their general state of health. This is particularly true for children, elderly people and other vulnerable groups. The WHO guidelines for indoor air quality, developed under the coordination of WHO/Europe, address three groups of issues that are most relevant for public health: - biological indoor air pollutants (dampness and mould) - pollutant-specific guidelines (chemical pollution) - pollutants from indoor combustion of fuels.
What Is an Annotated Bibliography?

A bibliography is a list of citations put together on a topic of interest. Citations follow a specific format depending on the subject area; for example, researchers in the humanities usually follow the style set by the Modern Language Association (or MLA). An annotation is a commentary a reader makes after critically reading an information source. It can include a summary of the reading, the reader's response to the reading, and/or questions/comments addressing the article's clarity, purpose, or effectiveness.

An annotated bibliography is an alphabetical list (arranged by the last name of the author) of research sources (citations to books, articles, and documents). Each citation is followed by a brief (usually about 150 words or 4-6 sentences long) descriptive and evaluative paragraph, the annotation. The purpose of the annotation is to inform the reader of the relevance, accuracy, and quality of the sources cited.

What is the purpose of an annotated bibliography? It keeps your research organized and is a great first step to preparing a paper or presentation. A list of works cited comes at the end of any research project, so if you do this step first, your bibliography is completed as you work. As you do in-text citations, your annotations will help remind you of an author's central themes or ideas.

Creating an Annotated Bibliography
- Find books, periodical articles, and other sources that provide useful information and ideas about your topic. Review these materials to make sure they are appropriate and valuable sources for your topic.
- Create citations for each item following the appropriate style.
- Write a concise annotation for each item (usually about 150 words or 4-6 sentences long). Annotations should include:
- Main focus or purpose of the work (its relevance, accuracy, and quality of sources cited)
- Background and credibility of the author
- Intended audience
- Special unique features
- Weakness or bias

Questions to ask yourself as you critically analyze the item:
- Who is the author? His/her credentials? Biases?
- Where is the article published? What type of journal is it? What is the audience?
- What do I know about the topic? Am I open to new ideas?
- Why was the article written? What is its purpose?
- What is the author's thesis? The major supporting points or assertions?
- Did the author support his/her thesis/assertions?
- Did the article achieve its purpose?
- Was the article organized?
- Were the supporting sources credible?
- Did the article change my viewpoint on the topic?
- Was the article convincing?
- What new information or ideas do I accept or reject?

The following examples follow MLA format.

Robertson, Marta. "Musical and Choreographic Integration in Copland's and Graham's Appalachian Spring." Musical Quarterly 83.1 (Spring 1999): 6-26.
Martha Graham was the original choreographer for Copland's ballet, Appalachian Spring. Using both the composer's score and a filmed version of the original ballet, the author studies how Graham expressed Copland's music in dance. There are notated musical examples for particular parts of the score and transcriptions of dance rhythms for some of the soloist's steps. As both a dancer and a musicologist, the author offers a unique view combining both fields.

Smith, Julia. Aaron Copland: His Work and Contribution to American Music. New York: E. P. Dutton, 1955. 194-198.
Smith calls this book a "biographical-critical study" (Acknowledgments).
The section on Appalachian Spring included in Chapter VII, "Third Style Period," provides a somewhat more technical analysis than some of the other studies do. Smith gives details about the work's commission from the Elizabeth Sprague Coolidge Foundation, first performance on Oct. 30, 1944, in the then-new Coolidge Auditorium in the Library of Congress, highly favorable critical reception, and receipt of the Pulitzer Prize for Music (1945) and the Music Critics' Circle Award for outstanding theatrical work in the 1944-45 season. The brief but clear technical discussion comments on nearly all of the eight sections of the 1945 orchestral arrangement, noting folk idioms and Shaker characteristics. Smith refers to the work of Manfred Bukofzer in discussing Copland's contrapuntal treatment of the Shaker hymn "Simple Gifts" in Section VII. She also cites some of S. M. Barlow's summary comments on Appalachian Spring, calling it "limpid, sparkling, and rhythmically diversified" (page 194). Three figures show sections of the score. Appendix II (pages 312-318) lists recordings, arranged by recording company, of Copland's works prior to this book's publication.
Talk to family members, friends, and neighbors. People who have lived in a place—especially those whose families have been here for a long time—are very good sources of stories about how places came to be named. They can also describe how a place has changed over time. Even if you don’t find the answer to your question right away, you can usually learn clues that lead in helpful directions. Have a chat with a town government official or staff member. Police officers and town workers travel the roads every day. The mayor, town clerk, and elected officials have to make decisions about streets, buildings, and signs. They know the details about how a town works and can tell you its history or give you tips on how to find out. Take a field trip to the place. Look for signs or other information about its name. Get a feel for its important features. Try to imagine what it may have looked like long ago. Remember to keep track of your sources. Take notes when you talk to people and list their name, address, and the date you talked to them. (For more information about how to cite different types of research, consult the Purdue Online Writing Lab’s Research and Citation Resources guide.) Don’t be shy about verifying details in printed sources. (Sometimes you’ll find that sources don’t agree!)
Autism spectrum disorder (ASD) is a developmental disability caused by differences in the brain. Scientists do not yet know exactly what causes these differences for most people with ASD. However, some people with ASD have a known difference, such as a genetic condition. There are multiple causes of ASD, although most are not yet known.

There is often nothing about how people with ASD look that sets them apart from other people, but they may communicate, interact, behave, and learn in ways that are different from most other people. The learning, thinking, and problem-solving abilities of people with ASD can range from gifted to severely challenged. Some people with ASD need a lot of help in their daily lives; others need less.

A diagnosis of ASD now includes several conditions that used to be diagnosed separately: autistic disorder, pervasive developmental disorder not otherwise specified (PDD-NOS), and Asperger syndrome. These conditions are now all called autism spectrum disorder.

ASD begins before the age of 3 and lasts throughout a person's life, although symptoms may improve over time. Some children with ASD show hints of future problems within the first few months of life. In others, symptoms may not show up until 24 months or later. Some children with an ASD seem to develop normally until around 18 to 24 months of age, and then they stop gaining new skills or lose the skills they once had. Studies have shown that one third to half of parents of children with an ASD noticed a problem before their child's first birthday, and 80%–90% saw problems by 24 months of age.

It is important to note that some people without ASD might also have some of these symptoms. But for people with ASD, the impairments make life very challenging.

Possible "Red Flags"
A person with ASD might:
• Not respond to their name by 12 months of age
• Have delayed speech and language skills
• Not play "pretend" games (pretend to "feed" a doll) by 18 months
• Avoid eye contact and want to be alone
• Have trouble understanding other people's feelings or talking about their own feelings
• Not point at objects to show interest (point at an airplane flying over) by 14 months
• Repeat words or phrases over and over (echolalia)
• Give unrelated answers to questions
• Get upset by minor changes
• Have obsessive interests
• Flap their hands, rock their body, or spin in circles
• Have unusual reactions to the way things sound, smell, taste, look, or feel

Social issues are one of the most common symptoms in all of the types of ASD. People with an ASD do not have just social "difficulties" like shyness. The social issues they have cause serious problems in everyday life.

Examples of social issues related to ASD:
• Does not respond to name by 12 months of age
• Avoids eye contact
• Prefers to play alone
• Does not share interests with others
• Only interacts to achieve a desired goal
• Has flat or inappropriate facial expressions
• Does not understand personal space boundaries
• Avoids or resists physical contact
• Is not comforted by others during distress
• Has trouble understanding other people's feelings or talking about own feelings

Typical infants are very interested in the world and people around them. By the first birthday, a typical toddler interacts with others by looking people in the eye, copying words and actions, and using simple gestures such as clapping and waving "bye-bye". Typical toddlers also show interest in social games like peek-a-boo and pat-a-cake.
But a young child with ASD might have a very hard time learning to interact with other people. Some people with ASD might not be interested in other people at all. Others might want friends, but not understand how to develop friendships. Many children with ASD have a very hard time learning to take turns and share—much more so than other children. This can make other children not want to play with them.

People with ASD might have problems showing or talking about their feelings. They might also have trouble understanding other people’s feelings. Many people with ASD are very sensitive to being touched and might not want to be held or cuddled. Self-stimulatory behaviors (e.g., flapping arms over and over) are common among people with ASD. Anxiety and depression also affect some people with ASD. All of these symptoms can make other social problems even harder to manage.

Communication Issues

Each person with ASD has different communication skills. Some people can speak well. Others can’t speak at all or only very little. About 40% of children with ASD do not talk at all. About 25%–30% of children with ASD have some words at 12 to 18 months of age and then lose them. Others might speak, but not until later in childhood.

Examples of communication issues related to ASD:
• Delayed speech and language skills
• Repeats words or phrases over and over (echolalia)
• Reverses pronouns (e.g., says “you” instead of “I”)
• Gives unrelated answers to questions
• Does not point or respond to pointing
• Uses few or no gestures (e.g., does not wave goodbye)
• Talks in a flat, robot-like, or sing-song voice
• Does not pretend in play (e.g., does not pretend to “feed” a doll)
• Does not understand jokes, sarcasm, or teasing

People with ASD who do speak might use language in unusual ways. They might not be able to put words into real sentences. Some people with ASD say only one word at a time. Others repeat the same words or phrases over and over. Some children repeat what others say, a behavior called echolalia. The repeated words might be said right away or at a later time. For example, if you ask someone with ASD, “Do you want some juice?” he or she might repeat “Do you want some juice?” instead of answering your question. Although many children without ASD go through a stage where they repeat what they hear, it normally passes by three years of age.

Some people with ASD can speak well but might have a hard time listening to what other people say. People with ASD might have a hard time using and understanding gestures, body language, or tone of voice. For example, people with ASD might not understand what it means to wave goodbye. Facial expressions, movements, and gestures may not match what they are saying. For instance, people with ASD might smile while saying something sad.

People with ASD might say “I” when they mean “you,” or vice versa. Their voices might sound flat, robot-like, or high-pitched. People with ASD might stand too close to the person they are talking to, or might stick with one topic of conversation for too long. They might talk a lot about something they really like, rather than have a back-and-forth conversation with someone. Some children with fairly good language skills speak like little adults, failing to pick up on the “kid-speak” that is common with other children.

Unusual Interests and Behaviors

Many people with ASD have unusual interests or behaviors.
Examples of unusual interests and behaviors related to ASD:
• Lines up toys or other objects
• Plays with toys the same way every time
• Likes parts of objects (e.g., wheels)
• Is very organized
• Gets upset by minor changes
• Has obsessive interests
• Has to follow certain routines
• Flaps hands, rocks body, or spins self in circles

Repetitive motions are actions repeated over and over again. They can involve one part of the body, the entire body, or even an object or toy. For instance, people with ASD might spend a lot of time repeatedly flapping their arms or rocking from side to side. They might repeatedly turn a light on and off or spin the wheels of a toy car. These types of activities are known as self-stimulation or “stimming.”

People with ASD often thrive on routine. A change in the normal pattern of the day—like a stop on the way home from school—can be very upsetting to people with ASD. They might “lose control” and have a “meltdown” or tantrum, especially in a strange place. Some people with ASD also may develop routines that might seem unusual or unnecessary. For example, a person might try to look in every window he or she walks by in a building, or might always want to watch a video from beginning to end, including the previews and the credits. Not being allowed to carry out these routines might cause severe frustration and tantrums.

Other Autism Symptoms

Some people with ASD have other symptoms. These might include:
• Hyperactivity (very active)
• Impulsivity (acting without thinking)
• Short attention span
• Self-injury
• Temper tantrums
• Unusual eating and sleeping habits
• Unusual mood or emotional reactions
• Lack of fear, or more fear than expected
• Unusual reactions to the way things sound, smell, taste, look, or feel

People with ASD might have unusual responses to touch, smell, sound, sight, and taste. For example, they might over- or under-react to pain or to a loud noise. They might have abnormal eating habits. For instance, some people with ASD limit their diet to only a few foods. Others might eat nonfood items like dirt or rocks (this is called pica). They might also have issues like chronic constipation or diarrhea.

People with ASD might have odd sleeping habits. They also might have abnormal moods or emotional reactions. For instance, they might laugh or cry at unusual times or show no emotional response at times you would expect one. In addition, they might not be afraid of dangerous things, yet be fearful of harmless objects or events.

Children with ASD develop at different rates in different areas. They may have delays in language, social, and learning skills, while their ability to walk and move around is about the same as other children their age. They might be very good at putting puzzles together or solving computer problems, but have trouble with social activities like talking or making friends. Children with ASD might also learn a hard skill before they learn an easy one. For example, a child might be able to read long words but not be able to tell you what sound a “b” makes.

Children develop at their own pace, so it can be difficult to tell exactly when a child will learn a particular skill. But there are age-specific developmental milestones used to measure a child’s social and emotional progress in the first few years of life. To learn more about developmental milestones, visit “Learn the Signs.
Act Early,” a campaign designed by CDC and a coalition of partners to teach parents, health care professionals, and child care providers about early childhood development, including possible “red flags” for autism spectrum disorders.
Obesity and Weight Loss Resource Center

Maintaining a healthy weight is important to avoid life-threatening medical conditions and to prolong an active lifestyle. Obesity is a condition in which a person has an abnormally high and unhealthy proportion of body fat. Staying at a healthy weight or losing weight requires a combination of regular exercise, healthy eating with portion and calorie control, and drinking low-calorie fluids such as water. A physician may decide that a weight loss medication is an appropriate aid in some treatment plans. A patient and their physician may instead decide that surgical weight loss, such as gastric bypass surgery, is the appropriate action, based upon weight and current health risks.

Excess weight is a recognized risk factor for many health problems, including:
- Type 2 diabetes
- High blood pressure
- Heart disease
- Sleep apnea
- Certain cancers, such as endometrial, breast, prostate, and colon cancers

Worldwide there are more than 500 million obese people, and in the U.S. alone, more than 78 million adults suffer from obesity. Obesity is the second leading cause of preventable death in the U.S.

The terms “overweight” and “obese” have specific definitions in healthcare. Both refer to ranges of weight that are greater than what is considered healthy for a given height.1 For adults, overweight and obesity ranges are determined by using weight and height to calculate a number called the body mass index (BMI).3 BMI is used because it correlates with the amount of body fat. BMI is also important because the use of many weight loss drugs is based on whether a person has reached a certain BMI.
- An adult who has a BMI between 25 and 29.9 kg/m2 is considered overweight.
- An adult who has a BMI of 30 kg/m2 or higher is considered obese.
- An adult who is more than 100 pounds overweight or has a BMI over 40 kg/m2 is considered morbidly obese.
(A short worked example of the BMI calculation follows at the end of this article.)

Other factors besides BMI are considered in determining whether someone is at risk for weight-related diseases. In addition to BMI, an individual's waist circumference and other disease or lifestyle attributes, such as high blood pressure, lack of exercise, or family history, are predictors of obesity-related diseases.2

What Causes Weight Gain or Obesity?
- Food intake, portion size and calorie content: Excessive food and calorie intake, more than the body needs for energy, can be turned into fat. Foods that are high in fat and sugar can contribute to obesity. People who do not burn more calories than they consume will put on weight.
- Lifestyle: A sedentary lifestyle, without adequate exercise and proper nutrition, can lead to a higher risk of becoming overweight or obese. Incorporating exercise into a daily routine can help lower the risk of weight problems. Excessive food intake combined with a sedentary lifestyle ranks as one of the largest contributors to weight gain. Many smokers also gain weight when they quit smoking.
- Genetics: Genetics may play a role in determining someone’s chances of being overweight or obese, but in general, most people still have the ability to control their weight. Only in rare genetic diseases is it impossible to avoid obesity. Weight history can also play a role: overweight children or adolescents are more likely to be overweight in adulthood.
- Metabolic rate: Metabolic rate, or metabolism, can differ among people of roughly the same height and weight. Someone with a low metabolic rate burns food more slowly than someone with a high metabolic rate.
Someone with a low metabolic rate requires fewer calories to maintain a set weight than someone with a high metabolic rate.
- Drugs: Certain drugs can lead to weight gain, including some antidepressants (Paxil (paroxetine), Zoloft (sertraline), Elavil (amitriptyline), and Remeron (mirtazapine)). Steroids, including prednisone and methylprednisolone, and certain antipsychotic medications, such as Clozaril (clozapine), Zyprexa (olanzapine), Risperdal (risperidone), and Seroquel (quetiapine), are notorious for causing weight gain. Various epilepsy, diabetes, and blood pressure medications have also been linked to weight gain.
- Pregnancy: Many women gain weight that remains after a pregnancy. A woman should not diet or use weight-loss medications when pregnant, as it can be unsafe for the developing fetus. Women should consult with their obstetrician if they are concerned about weight gain in pregnancy.

Benefits of Weight Loss

Weight loss in individuals who are overweight or obese may reduce many health risks. Studies have found that weight loss with some medications can improve several health risks, such as:
- High blood pressure, heart disease and stroke
- High blood lipids (cholesterol, triglycerides)
- Diabetes and insulin resistance (the body’s inability to utilize blood sugar)
- Sleep apnea

Related News Articles:
- FDA Clears First Weight-Loss Pill in 13 Years
- Another New Weight Loss Drug Approved
- In Approving New Diet Drug, FDA Ignores Crucial Safety Data
- Two New Weight Loss Drugs Won’t Reverse U.S. Obesity Crisis
- FDA Approves Weight-Management Drug Qsymia
- Belviq FDA Approval History
- Qsymia FDA Approval History
- Body Mass Index (BMI): Determining Your Obesity Risk
- Can Prescription Drugs Cause Weight Gain?
- Childhood Obesity: Is a U.S. Epidemic Improving?
- Prescription Weight Loss / Diet Pills: What Are the Options?
- Side Effects of Weight Loss Drugs (Diet Pills)
- Weight Loss Surgery

References:
1. Centers for Disease Control and Prevention. Overweight and Obesity. Accessed October 5, 2012. http://www.cdc.gov/obesity/adult/defining.html
2. Drugs.com. Obesity. Accessed October 5, 2012. http://www.drugs.com/health-guide/obesity.html
3. Centers for Disease Control and Prevention (CDC). Assessing Your Weight: About Adult BMI. Accessed October 6, 2012. http://www.cdc.gov/healthyweight/assessing/bmi/adult_bmi/index.html
4. Department of Health and Human Services. NIH. National Heart, Lung and Blood Institute. Clinical guidelines on the identification, evaluation, and treatment of overweight and obesity in adults. Accessed October 4, 2012. http://www.ncbi.nlm.nih.gov/books/NBK2003/

Last updated: 2014-02-24 by Leigh Anderson, PharmD.
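To make the BMI arithmetic above concrete, here is a minimal sketch in Python. It is our illustration, not part of the original resource: the function names are ours, and the morbid-obesity check uses only the BMI criterion (the alternative 100-pounds-overweight criterion would require a reference healthy weight).

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def classify_adult(bmi_value: float) -> str:
    """Map a BMI value to the adult categories listed in this article (kg/m^2)."""
    if bmi_value >= 40:
        return "morbidly obese"
    if bmi_value >= 30:
        return "obese"
    if bmi_value >= 25:
        return "overweight"
    return "not overweight by BMI"

# Example: an adult weighing 95 kg at 1.75 m tall.
value = bmi(95, 1.75)
print(f"BMI = {value:.1f} -> {classify_adult(value)}")  # BMI = 31.0 -> obese
```

As the article notes, BMI is only one predictor; waist circumference and lifestyle factors also matter.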
Chile’s people are largely descendants of Europeans, indigenous peoples, or both, with roughly 70 percent being Catholic. Over time, Chile’s isolation has produced a homogeneous but vibrant blended culture that combines the traditions, beliefs, food and habits of the Mapuche, Spanish, German and, to a lesser extent, various other immigrant groups.

Migrating Native Americans were the first to settle in what is now Chile, where the Mapuche people established colonies in the fertile valleys and repelled invading Incas. Ferdinand Magellan became the first European to visit Chile in 1520. The Spanish conquest and enslavement of the Mapuche people began in 1540, headed by Pedro de Valdivia, who established Santiago in early 1541. The Mapuche launched many insurrections during the Spanish occupation, which subsided once slavery was abolished in 1683.

The people began striving for independence around 1808, when Napoleon’s brother Joseph usurped the Spanish throne. Chile proclaimed itself an autonomous republic in 1810; Spain responded by attempting to re-assert its rule, which led to intermittent warfare that continued until 1817. Bernardo O’Higgins led an army across the Andes to defeat the Chilean royalists and proclaim independence on February 12, 1818.

The political transition did not alter the stratified colonial structure, which kept the wealthy landowners in power throughout the 19th century. During this period, Chile began to solidify its borders by incorporating the archipelago of Chiloé in 1826, suppressing Mapuche independence in the south, and signing a treaty with Argentina in 1881 confirming ownership of the Strait of Magellan. The War of the Pacific saw Chile capture further territory from Bolivia and Peru, securing nitrate deposits that heralded an era of affluence for the young nation.

Civil war divided Chile in 1891 and led to the establishment of a parliamentary democracy run by the financial elite, which eroded the economy. A powerful working class and an emerging middle class were able to install a reformist president in the 1920s, whose transformative programs were frustrated by a conservative congress. General Luis Altamirano led a coup in 1924 which created mass political instability and chaos, leading to ten different governments by 1932. Ibáñez del Campo was then voted into power at the head of coalition governments, holding office through successive re-elections until 1958.

President Eduardo Frei Montalva’s Christian Democrat administration embarked on far-reaching social and economic reforms in 1964, which saw improvements in education, housing and farming. After three years, Frei was besieged by criticism of inadequate change from leftists and of too much change from conservatives.

The socialist party, headed by Senator Salvador Allende, took power in 1970; economic troubles followed, deepening into the depression of 1972. Allende responded by initiating joint public and private works to provide employment, and by introducing price freezes, wage increases, tax reforms, and nationalized banking and mining. The Nixon Administration in the US disliked Allende’s socialist agenda and sent covert operatives to Chile to try to destabilize the government. American financial pressure restricted Chile’s access to international trade, crippling the economy; inflation was out of control by 1973. In a military coup on September 11, 1973, Augusto Pinochet Ugarte seized power for a self-awarded eight-year term, which saw mass repression, human rights violations and the implementation of a controversial new constitution.
Greater freedoms were allowed after the economic collapse of 1982, and a mass civil resistance movement ran from 1983 to 1988. General Pinochet was ousted by a vote in 1988 and replaced by Christian Democrat Patricio Aylwin, who served from 1990 to 1994. A number of presidents followed between 1994 and January 2006, when Chile elected its first female president, Michelle Bachelet Jeria, of the Socialist Party. In January 2010, Chileans voted in the first right-wing president in 20 years, Sebastián Piñera. A massive earthquake and tsunami struck in February 2010, killing 500 people and leaving over a million homeless; the same year, Chile achieved global recognition for successfully rescuing the 33 miners trapped in the San Jose copper and gold mine in the Atacama Desert.

Despite its shaky political past, Chile proudly boasts one of the highest per capita living standards in all of Latin America. Chile’s isolation has led to an interesting, colorful culture that blends indigenous and European traditions and is characterized by friendly and thoughtful people. Traditional handicrafts are still produced and used in everyday life, and beliefs and festivals are strongly influenced by the Catholic religious calendar. While Chile is a nation currently at peace, its citizens are sensitive about being a small land amongst large neighbors. To avoid giving the wrong impression, it is best to ask permission before taking photographs of government buildings, navy boats or military installations.
The name originally stood for central Greece. It was soon extended to the whole Greek mainland and by 500 bc to the entire land mass behind it. The boundary between the European continent and Asia was usually fixed at the river Don. Homer's range of information hardly extended north of Greece or west of Sicily. The Mediterranean seaboard of Europe was chiefly opened up by the Greeks between c.750 and c.550 (see colonization, greek). The Atlantic coasts and ‘Tin Islands’ were discovered by the Phoenicians; Pytheas circumnavigated Britain and followed the mainland coast at least to Heligoland. Thule remained a land of mystery. The Greeks penetrated by way of the Russian rivers as far as Kiev. North of the Balkans they located the mythical Hyperboreans. Greek pioneers ascended the Danube (see danuvius) to the Iron Gates, and the Rhône perhaps to Lake Léman. But Herodotus had only a hazy notion of central Europe, and the Hellenistic Greeks knew little more. The land exploration of Europe was chiefly accomplished by Roman armies. They completed the Carthaginian discovery of Spain; under Caesar they made Gaul known; under Augustus' generals, Licinius Crassus, Tiberius, and Drusus (see Claudius Drusus, Nero), they opened up the Balkan lands, the Alpine massif, and the Danube basin. Tiberius and Drusus also overran western Germany to the Elbe. The Europe–Asia polarity was important in Greek ideology; the two together were taken to represent the whole inhabited space (Africa/Libya being sometimes added as a third constituent). A Eurocentric chauvinism is evident in Roman thought: according to Pliny the Elder, Europe is ‘by far the finest of all lands’.
Fog rolls in over a hilltop in the Fray Jorge Fog Forest in the Coquimbo semi-arid region of central Chile. Plants in the area are adapted to harvest water from the fog. Credit: Gareth McKinley

As arid regions of the world struggle to meet the water needs of growing populations, researchers are looking for nuanced ways to keep communities hydrated. Now, recent advancements in one low-tech technology may offer an inexpensive and abundant alternative to rainwater: fog harvesters.

Many countries with limited drinking water — such as Chile, Peru and Mexico — have experimented with some degree of fog collection for years. Now, MIT researchers and their colleagues in Chile have developed the first systematic study aimed at optimizing the efficiency of fog harvesting. Their findings appeared last month in the journal Langmuir.

Fog collectors generally consist of plastic mesh nets propped up on stakes. As foggy air blows through the mesh, water droplets gather along the mesh filaments and then funnel into collection tanks below. This setup works well, but its effectiveness varies depending on the type of mesh used. "The materials that have been used have been low-cost, readily available and durable," said Gareth McKinley, an MIT engineer and a co-author of the paper describing the new research. "What hadn't been done was a systematic study to show how good we could make this by bringing together fluid dynamics and surface chemistry to optimize the fog-collecting efficiency."

The team measured variations in water yield based on changes in the mesh-thread thickness, the size of the holes between the threads and the coating applied to the threads. They found that minimizing both the gap between the threads and the thread size significantly increased the water yield, and settled on a thread thickness of about three times the width of a human hair. Anything thinner may produce more water, but would lack durability, McKinley said.

These design improvements may increase water yield from current levels of several liters per square meter of mesh per day to more than 12 liters (about 3 gallons) per day, potentially fulfilling the water needs of large expanses of rural, arid Chile, where isolated communities have limited access to electricity and drinking water, McKinley said.

Fog water is fairly pure and often safe to drink straight from the sky, because most pollutants are left behind when water evaporates. Some pollutants can still get trapped in fog droplets, but the researchers have not detected dangerous quantities in the water they have collected in Chile. However, pollutants could be a more serious concern in more industrialized, water-stressed regions, such as heavily farmed areas of central California, where high levels of pesticides and other agricultural residues circulate through the air, said Peter Weiss, a professor of environmental toxicology at the University of California, Santa Cruz, who was not involved in the study.

The team next plans to deploy the new design in Chile to determine its durability and effectiveness in the field, and hopes to eventually help locals deploy the devices on larger scales.
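As a rough illustration of what the reported yields imply, the sketch below sizes a mesh array for a small community. The constants and function are ours, assuming the article's optimized yield of roughly 12 litres per square metre per day and a hypothetical per-person daily water need; it is a back-of-envelope estimate, not a design tool.

```python
YIELD_L_PER_M2_DAY = 12.0     # optimized-mesh yield reported in the study (assumption)
NEED_L_PER_PERSON_DAY = 20.0  # assumed daily drinking/cooking water per person

def mesh_area_needed(people: int) -> float:
    """Square metres of fog-harvesting mesh to supply `people`, under the assumptions above."""
    return people * NEED_L_PER_PERSON_DAY / YIELD_L_PER_M2_DAY

print(mesh_area_needed(60))  # -> 100.0 m^2 of mesh for a 60-person community
```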
Natural selection will only lead to evolution if the trait being favored or eliminated: is at least partly heritable.

One factor that increases the chance of allopatric speciation is when a population: is small and isolated.

Which of the following are polyploid plants? All of the choices: bananas, potatoes, oats, coffee beans.

In the selection video, which one of the following statements is true? Changes in a few genes can bring about big changes in a trait under selection.

Cichlid speciation in Lake Victoria is believed to have involved: sympatric speciation via habitat selection.

The likelihood of allopatric speciation increases when a population is __________ and __________ the broader range of the species.

Which one of the following statements about the Galápagos finches is false? The common ancestor of the Galápagos finches appears to have come from the island of Cocos.

Speciation can occur as a result of reproductive isolation. Reproductive isolation can occur when individuals in two populations of organisms: all of the choices are correct.

Which one of the following statements about the apple maggot is true according to the case study? Apple maggots are now distinguishable genetically from the hawthorn maggots.

Which of the following prevents closely related species from interbreeding even when their ranges overlap?

Which of the following types of reproductive barriers separates a pair of insect species that could interbreed except that one mates on goldenrod flowers and the other on autumn daisies that blossom at the same time?

A mountain range divides a freshwater snail species into two isolated populations. Erosion eventually lowers the range and brings the two populations together again, but when they mate, the resulting hybrids all produce sterile young. This scenario is an example of:

What is the nature of the reproductive barrier between a mutant tetraploid plant and its original diploid parents?

Based upon experiments using fruit flies: reproductive barriers may evolve as a consequence of a population's evolution in response to a new set of environmental conditions.

Which one of the following statements is false? The morphological species concept relies upon identifying genotypic diversity and comparing the nucleotides of genes.

Which one of the following is true about hybrid zones? In hybrid zones, prezygotic barriers may get reinforced.

Which of the following types of reproductive barriers separates a pair of species that could interbreed except that one mates at dusk and the other at dawn?

When a tetraploid plant pollinates a diploid plant of the parental species, what will be the ploidy of the resulting zygote?

Which of the following situations would be most conducive to rapid speciation? (Assume the conditions described persist as long as necessary.) Four circus wolves escape on Long Island. To everyone's surprise, they establish a small but viable population, coexisting successfully with humans in a partly suburban environment. The population is physically isolated from other wolves.

A biological species is defined as: a population or populations whose members have the potential to interbreed and produce fertile offspring.

Frequently, a group of related species will each have a unique courtship ritual that must be performed correctly for both partners to be willing to mate. Such a ritual constitutes a type of behavioral isolating mechanism.
Which of the following types of reproductive barriers separates a pair of moth species that could interbreed except that the females' mating pheromones are not attractive to the males of the other species?

In the video you saw in class or online about natural selection, Darwin's key analogy to explain natural selection involved ___________ selection.

When F1 hybrids between corn and teosinte are crossed, 1/256 of the offspring have the same phenotype as corn for a certain genetic trait. This suggests that about how many major genes are involved? (A worked calculation follows below.)

The emergence of numerous species from a common ancestor that finds itself in a new and diverse environment is called:

Which one of the following is not a reason to have a classification scheme? To standardize the English words for each species.
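For the corn and teosinte question above, here is the standard worked calculation, added as our illustration under the usual assumption that each of n independently segregating genes must carry the corn phenotype's allele combination, which occurs with probability 1/4 per gene in the F2:

```latex
\left(\tfrac{1}{4}\right)^{n} = \tfrac{1}{256}
\quad\Longrightarrow\quad 4^{n} = 256 = 4^{4}
\quad\Longrightarrow\quad n = 4
```

So a ratio of 1/256 points to about four major genes.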
Lyme disease (Lyme borreliosis) is caused by Borrelia bacteria and is transmitted through the bite of infected deer ticks (of the Ixodes species). Many species of mammals can be infected, and rodents and deer act as important reservoirs. The first recognized outbreak of this disease occurred in Connecticut, United States, in 1975. The current burden is estimated at 7.9 cases per 100 000 people in the United States, according to the US Centers for Disease Control and Prevention. Since the mid-1980s, the disease has been reported in several European countries. Lyme disease occurs in rural areas of Asia, north-western, central and eastern Europe, and the United States of America. It is now the most common tick-borne disease in the Northern Hemisphere.

People living in or visiting rural areas, particularly campers and hikers, are most at risk. If bitten, the tick should be removed as soon as possible.

Lyme disease symptoms include fever, chills, headache, fatigue, and muscle and joint pain. A rash often appears at the site of the tick bite and gradually expands into a ring with a central clear zone, before spreading to other parts of the body. If left untreated, the infection can spread to the joints, the heart and the central nervous system. Arthritis may develop up to 2 years after onset. Most cases of Lyme disease can be treated successfully with a course of antibiotics.
What on earth?

"Rammed earth" is the name given to structures formed by compacting thin layers (about 10 cm) of moist sub-soil inside a temporary framework, which is then removed and the wall left to dry.

Unlike bricks and cement, rammed earth is not fired and so has a much lower embodied energy. The high density of rammed earth gives it good 'thermal mass', which means it regulates the temperature of a building by absorbing excess heat and re-releasing it when the building cools down. Earth is not a good insulator, so it would need to be contained within an insulated structure.

Typically a rammed earth wall will be 300 mm thick, and there are restrictions on the size and spacing of openings to ensure structural stability. With a team of three to four you can build 5-10 square metres of wall in a day (a rough sizing sketch follows below). You can use local sub-soil, which can then be returned to the earth at the end of the building's life. The soil must be well graded between gravel and clay sized particles and be low in organic content and salt (below 2%). Poor-quality sub-soils can be improved for rammed earth by carefully mixing in the correct size fractions; some rammed earth is "stabilised" by adding a small proportion of cement.

Water-related weathering, including erosion by rain and freeze-thaw, is the primary agent of decay in rammed earth buildings. These factors must be considered carefully at the design stage. Over a period of time a rammed earth wall will dry out and become as durable as sandstone, as long as it is waterproofed top and bottom. Any protective coatings applied to the surface must be permeable to water vapour.

The level and extent of materials testing you will need to do depends on the specific application and the novelty of the material in use. For in-situ rammed earth, compliance tests are mostly undertaken on cylinders specially prepared for that purpose. In load-bearing applications it is usual to undertake soil classification, moisture density testing, and strength and shrinkage assessment. See 'Rammed Earth: Design and Construction Guidelines' by Peter Walker et al. (at http://store.cat.org.uk) for further details of compliance and structural tests.

Recent experience across a number of local authorities in the UK demonstrates that rammed earth, when used correctly, is perfectly able to satisfy the requirements of modern building regulations.
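Putting the figures above together, here is a minimal planning sketch in Python. It is our illustration, not part of the guidelines: the constants restate the text (10 cm layers, a 300 mm wall, a crew rate of 5-10 m² of wall face per day) and the function names are hypothetical.

```python
import math

LAYER_THICKNESS_M = 0.10   # each compacted layer is about 10 cm
WALL_THICKNESS_M = 0.30    # a typical wall is 300 mm thick

def layers_needed(wall_height_m: float) -> int:
    """Number of compacted layers to reach the target wall height."""
    return math.ceil(wall_height_m / LAYER_THICKNESS_M)

def crew_days(wall_face_m2: float, rate_m2_per_day: float = 7.5) -> float:
    """Crew-days at an assumed mid-range build rate of 7.5 m^2 of wall face per day."""
    return wall_face_m2 / rate_m2_per_day

# Example: a wall 2.4 m high and 12 m long (28.8 m^2 of wall face).
face_area = 2.4 * 12
print(layers_needed(2.4), "layers")                                     # 24 layers
print(round(crew_days(face_area), 1), "crew-days")                      # ~3.8 crew-days
print(round(face_area * WALL_THICKNESS_M, 2), "m^3 of compacted soil")  # 8.64 m^3
```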
A new piece of evidence, one sure to prove controversial, has been flung into the human origins debate.

A study published March 7 in Nature presents genetic evidence that humans left Africa in at least three waves of migration. It suggests that modern humans (Homo sapiens) interbred with archaic humans (Homo erectus and Neandertals) who had migrated earlier from Africa, rather than displacing them.

In the human origins debate, which has been highly charged for at least 15 years, there is a consensus among scientists that Homo erectus, the precursor to modern humans, originated in Africa and expanded to Eurasia beginning around 1.7 million years ago. Beyond that, opinions diverge. There are two main points in contention. The first is whether modern humans evolved solely in Africa and then spread outward, or evolved concurrently in several places around the world. The second area of controversy is whether modern humans completely replaced archaic forms of humans, or whether the process was one of assimilation, with interbreeding between the two groups.

"There are regions of the world, like the Middle East and Portugal, where some fossils look as if they could have been some kind of mix between archaic and modern people," said Rebecca Cann, a geneticist at the University of Hawaii. "The question is," she said, "if there was mixing, did some archaic genetic lineages enter the modern human gene pool? If there was mixing and yet we have no evidence of those genes, as is indicated from the mitochondrial DNA and Y-chromosome data, why not?"

Alan Templeton, a geneticist at Washington University in St. Louis who headed the study reported in Nature, has concluded that yes, there was interbreeding between the different groups. "We are all genetically intertwined into a single long-term evolutionary lineage," he said.

To reach his conclusion, Templeton performed a statistical analysis of 11 different haplotype trees. A haplotype is a block of DNA containing gene variations that researchers believe are passed as a unit to successive generations. By comparing genetic differences in the haplotypes of populations, researchers hope to track human evolution. Templeton also concluded that modern humans left Africa in several waves: the first about 1.7 million years ago, another between 800,000 and 400,000 years ago, and a third between 150,000 and 80,000 years ago.

Alison S. Brooks, a paleoanthropologist at George Washington University, is more cautious about Templeton's conclusions. "Archaeological evidence supports multiple dispersals out of Africa," she said. "The question has always been whether these waves are dead ends. Did all of these people die? Templeton says not really, that every wave bred at least a little bit with those in Eurasia."
The purpose of this project is to explain the building of structures, 3D figures, compression and tension, and the force of gravity, while developing fine motor skills.

Have the child, with your help or alone, build a tower. After many tries and the tower falling down, we added more support to the sides to go higher. Tell the child that gravity is what is making the tower fall down. For older children, have them study Newton's law of gravity (a short worked equation follows below). Do wide or narrow foundations help build higher?

Build different 3D figures such as a rectangular prism, a pyramid, and a cube. Teach the child the different names and then ask them to point as you call out the names. Have the child place objects on top of the 3D figures to see which one is the strongest. Don't let them eat too many marshmallows!

Jade colored a tower by color abbreviations and filled out her lab report. I keep a science journal with pictures of every experiment and activity sheets in a binder. You can get these free science printables here for your experiment.

Science behind the experiment:
Compression: a force that presses or squeezes the marshmallows and toothpicks together (its opposite, tension, pulls them apart).
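For the older children mentioned above, the equations behind the suggestion (our addition, not part of the original project) are the weight of each tower piece and Newton's law of universal gravitation:

```latex
% Weight of any piece of the tower, pulling it straight down:
W = m g, \qquad g \approx 9.8\ \mathrm{m/s^2}
% Newton's law of universal gravitation, the general rule behind g:
F = G\,\frac{m_1 m_2}{r^2}
```

The taller and heavier the tower, the larger the downward force its base must resist, which is why adding supports at the sides lets the tower go higher.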
What Does Stomatal Guard Cells Mean?

Stomatal guard cells are responsible for controlling the stomata, the small pore-like openings on the underside of a plant's leaves through which the plant can "sweat". The guard cells surround these pores and regulate their opening and closing. They help to enable gas exchange during photosynthesis, and while the process does result in water loss, it is a necessary trade-off for growth.

Maximum Yield Explains Stomatal Guard Cells

Plants rely on photosynthesis for the food they consume and the energy they need for growth. However, during this process, gases must be exchanged: oxygen must be expelled, and carbon dioxide must be taken in. This cannot happen if a plant keeps itself sealed off; stomatal guard cells help ensure that gas exchange can occur, while ensuring that not too much water is lost in the process.

When the stomata are open, moisture can escape from the plant and gas exchange occurs. Ultimately, more than 95 per cent of water loss in a plant is due directly to water vapor lost through stomatal activity. When the pores close, gas exchange stops, but so does moisture loss. By regulating the number of open and closed pores at any given time, stomatal guard cells are able to maintain a balance between moisture levels within the plant and the gas exchange needed to ensure healthy growth.

Guard cells function based on the influx of water and light. When light shines on the cells, the outer walls bow outward while the inner walls remain rigid. This pulls the stomata open, allowing gases to exchange. When darkness falls, water is lost and the moisture level inside the walls drops, allowing the pores to close.
Early development of the town

The first written evidence, not only of the existence of Oxford but also of its importance, comes from the Anglo-Saxon Chronicle (begun in the 9th century and continued until 1154), where it is stated that Edward (the Elder) took control of "London and Oxford and the lands obedient to those cities". Oxford had naturally developed into an important town because of its strategic location: a location important both politically, because it marked the border of the two kingdoms of Mercia and Wessex, and commercially, because it lay at the confluence of the rivers Cherwell and Thames.

A new fortified town was then created, no longer attached to the early minster; the north gate of the fortification probably stood on the site of what is the oldest building in Oxford, St Michael's church tower on Cornmarket Street. The church beside the tower was most probably a later addition. This new town of Oxford dates from the early 10th century. Evidently, the importance of Oxford did not depend on its world-famous seat of learning, which did not exist at that time, but on its geographical location.

The town of Oxford by no means had a peaceful existence; two events of a bloody nature are recorded. The ancient church of St Frideswide was the site of a massacre of Danes, which caused the town to be sacked in 1009, and in 1013 the town came under the control of Sweyn of Denmark. The unification of England marked the end of the military importance of Oxford.
Gold nanoparticles are spheres made of gold atoms with a diameter of only a few billionths of a meter. They can be coated with a biological protein and combined with drugs, enabling the treatment to travel through the body and reach the affected area.

A new method has been developed to make drugs 'smarter' using nanotechnology, so they will be more effective at reaching their target. Scientists from the University of Lincoln, UK, have devised a new technique to 'decorate' gold nanoparticles with a protein of choice so they can be used to tailor drugs to more accurately target an area of the body, such as a cancer tumor. The nanoparticles can 'adsorb' (hold on their surface) drugs which would otherwise be insoluble or quickly degrade in the bloodstream, and due to their small size, they can overcome biological barriers such as membranes, skin and the small intestine which would usually prevent the drug from reaching its target.

Source: Science Daily
A category of developmental disorders characterized by impaired communication and socialization skills. The impairments are incongruent with the individual's developmental level or mental age. These disorders can be associated with general medical or genetic conditions.

A disorder beginning in childhood. It is marked by the presence of markedly abnormal or impaired development in social interaction and communication and a markedly restricted repertoire of activity and interest. Manifestations of the disorder vary greatly depending on the developmental level and chronological age of the individual. (DSM-IV)

A disorder characterized by marked impairments in social interaction and communication accompanied by a pattern of repetitive, stereotyped behaviors and activities. Developmental delays in social interaction and language surface prior to age 3 years.

Autism is a disorder that is usually diagnosed in early childhood. The main signs and symptoms of autism involve communication, social interactions and repetitive behaviors. Children with autism might have problems talking with you, or they might not look you in the eye when you talk to them. They may spend a lot of time putting things in order before they can pay attention, or they may say the same sentence again and again to calm themselves down. They often seem to be in their "own world." Because people with autism can have very different features or symptoms, health care providers think of autism as a "spectrum" disorder. Asperger syndrome is a milder version of the disorder. The cause of autism is not known. Autism lasts throughout a person's lifetime. There is no cure, but treatment can help. Treatments include behavior and communication therapies and medicines to control symptoms. Starting treatment as early as possible is important. (NIH: National Institute of Child Health and Human Development)

A broad term for disorders, usually first diagnosed in children prior to age 4, characterized by severe and profound impairment in social interaction and communication, and by the presence of stereotyped behaviors, interests, and activities. Compare developmental disabilities.

A disorder beginning in childhood marked by the presence of markedly abnormal or impaired development in social interaction and communication and a markedly restricted repertoire of activity and interest; manifestations of the disorder vary greatly depending on the developmental level and chronological age of the individual.

A group of disorders characterized by delays in the development of socialization and communication skills; typical age of onset is before 3 years of age. Symptoms may include problems with using and understanding language; difficulty relating to people, objects, and events; unusual play with toys and other objects; difficulty with changes in routine or familiar surroundings; and repetitive body movements or behavior patterns. Autism is the most characteristic and best-studied PDD; other types of PDD include Asperger syndrome, childhood disintegrative disorder, and Rett syndrome.

A type of autism characterized by very early detection (< 30 months), social coldness, grossly impaired communication, and bizarre motor responses.
8. Personal Effects

8.1 Mirrors

A mirror is an object that reflects light or sound in a way that preserves much of the original signal's quality after it strikes the mirror. Some mirrors also filter out some wavelengths, while preserving other wavelengths in the reflection. This is different from other light-reflecting objects, which do not preserve much of the original wave signal other than color and diffuse reflected light. The most familiar type of mirror is the plane mirror, which has a flat surface. Curved mirrors are also used, to produce magnified or diminished images, to focus light, or simply to distort the reflected image.

Mirrors are commonly used for personal grooming or admiring oneself (in which case the archaic term looking-glass is sometimes still used), decoration, and architecture. Mirrors are also used in scientific apparatus such as telescopes and lasers, cameras, and industrial machinery. Most mirrors are designed for visible light; however, mirrors designed for other types of waves or other wavelengths of electromagnetic radiation are also used, especially in non-optical instruments.

Bronze mirror decorated with two falcons, from Egypt, Middle Kingdom (about 2040-1750 BC)

The form of the ancient Egyptian mirror changed little from its first appearance in the Old Kingdom (about 2613-2160 BC) and consisted of a polished disc of bronze or copper, attached to a handle. The reflective surface was interpreted as the sun disc, because of its shape and shiny qualities. The falcons on this example might represent the sun-god Re. The handle of the mirror was of wood, metal or ivory. This example has been made to appear as if it has been plaited. A papyrus stalk, or the figure of Hathor, were also common. The handle could also be surmounted by the head of Hathor. She was particularly associated with the mirror, which had connotations of sexuality and rebirth. The same theme can be seen in handles in the form of nude female figures. They sometimes have their arms outstretched to hold the crosspiece below the disc. Adults were seldom shown without clothes, as this could be interpreted as a lack of status. One exception was dancers, whose erotic dances in tomb scenes, like the figures on the mirrors, were associated with rebirth in the Afterlife. (Source: The British Museum)

The first mirrors used by people were most likely pools of dark, still water, or water collected in a primitive vessel of some sort. The earliest manufactured mirrors were pieces of polished stone such as obsidian, a naturally occurring volcanic glass. Examples of obsidian mirrors found in Anatolia (modern-day Turkey) have been dated to around 6000 BC. Polished stone mirrors from Central and South America date from around 2000 BC onwards. Mirrors of polished copper were crafted in Mesopotamia from 4000 BC, and in ancient Egypt from around 3000 BC. In China, bronze mirrors were manufactured from around 2000 BC, some of the earliest bronze and copper examples being produced by the Qijia culture. Mirrors made of other metal alloys, such as the copper-and-tin mixture known as speculum metal, may also have been produced in China and India. Mirrors of speculum metal or any precious metal were hard to produce and were only owned by the wealthy.

Seated woman holding a mirror. Ancient Greek Attic red-figure lekythos, ca.
470–460 BC, National Archaeological Museum, Athens

A sculpture of a lady looking into a mirror, India

Metal-coated glass mirrors are said to have been invented in Sidon (modern-day Lebanon) in the first century AD, and glass mirrors backed with gold leaf are mentioned by the Roman author Pliny in his Natural History, written in about 77 AD. The Romans also developed a technique for creating crude mirrors by coating blown glass with molten lead.

Parabolic mirrors were described and studied in classical antiquity by the mathematician Diocles in his work On Burning Mirrors. Ptolemy conducted a number of experiments with curved polished iron mirrors, and discussed plane, convex spherical, and concave spherical mirrors in his Optics. Parabolic mirrors were also described by the physicist Ibn Sahl in the 10th century, and Ibn al-Haytham discussed concave and convex mirrors in both cylindrical and spherical geometries, carried out a number of experiments with mirrors, and solved the problem of finding the point on a convex mirror at which a ray coming from one point is reflected to another point. By the 11th century, clear glass mirrors were being produced in Moorish Spain.

In China, people began making mirrors with the use of silver-mercury amalgams as early as 500 AD. Some time during the early Renaissance, European manufacturers perfected a superior method of coating glass with a tin-mercury amalgam. The exact date and location of the discovery is unknown, but in the 16th century, Venice, a city famed for its glass-making expertise, became a centre of mirror production using this new technique. Glass mirrors from this period were extremely expensive luxuries. The Saint-Gobain factory, founded by royal initiative in France, was an important manufacturer, and Bohemian and German glass, often rather cheaper, was also important.

The invention of the silvered-glass mirror is credited to German chemist Justus von Liebig in 1835. His process involved the deposition of a thin layer of metallic silver onto glass through the chemical reduction of silver nitrate. This silvering process was adapted for mass manufacturing and led to the greater availability of affordable mirrors. Nowadays, mirrors are often produced by the vacuum deposition of aluminium (or sometimes silver) directly onto the glass substrate.

8.2 Eye Glasses

Glasses, also known as eyeglasses (formal), spectacles or simply specs (informal), are frames bearing lenses worn in front of the eyes. They are normally used for vision correction or eye protection. Safety glasses are a kind of eye protection against flying debris or against visible and near-visible light or radiation. Sunglasses allow better vision in bright daylight, and may protect against damage from high levels of ultraviolet light. Other types of glasses may be used for viewing visual information (such as stereoscopy) or simply for aesthetic or fashion purposes.

The Glasses Apostle, painted by Conrad von Soest, 1403 (public domain)

The earliest depiction of spectacles [eyeglasses] in a painted work of art occurs in a series of frescoes dated 1352 by Tommaso da Modena in the Chapter House of the Seminario attached to the Basilica San Nicolo in Treviso, north of Venice. Cardinal Hugo of Provence [Hugh de St. Cher] is shown at his writing desk wearing a pair of rivet spectacles that appear to stay in place on the nose without additional support.

Detail of a portrait of Hugh de Provence, painted by Tomaso da Modena in 1352
Portrait of Cardinal Fernando Niño de Guevara by El Greco, circa 1600, showing spectacles with temples passing over the ears

The earliest historical reference to magnification dates back to ancient Egyptian hieroglyphs in the 5th century BC, which depict "simple glass meniscal lenses". The earliest written record of magnification dates back to the 1st century AD, when Seneca the Younger, a tutor of Emperor Nero of Rome, wrote: "Letters, however small and indistinct, are seen enlarged and more clearly through a globe or glass filled with water". Nero (reigned 54–68 AD) is also said to have watched the gladiatorial games using an emerald as a corrective lens.

The use of a convex lens to form a magnified image is discussed in Alhazen's Book of Optics (1021). Its translation from Arabic into Latin in the 12th century was instrumental to the invention of eyeglasses in 13th-century Italy. Englishman Robert Grosseteste's treatise De iride ("On the Rainbow"), written between 1220 and 1235, mentions using optics to "read the smallest letters at incredible distances". A few years later, Roger Bacon also wrote on the magnifying properties of lenses in 1262. The first eyeglasses were made in Italy in about 1286, according to a sermon delivered on February 23, 1306 by the Dominican friar Giordano da Pisa (ca. 1255 – 1311).

Sunglasses, in the form of flat panes of smoky quartz, were used in China in the 12th century. Similarly, the Inuit have used snow goggles for eye protection. While these did not offer any corrective benefits, they did improve visual acuity by narrowing the field of vision. The use by historians of the term "sunglasses" is anachronistic before the twentieth century.