Why Learn About Indirect Speech? While learning any language, we always strive to know more about it, in order to understand and be understood. For that purpose we study different types of sentences and utterances and various speech styles and techniques, and we learn words and idioms. We aspire to absorb the language and speak it the way we speak our mother tongue: fluently and easily, without even thinking about how we are doing it. There are certain techniques that may be called “milestones” in language study. They differ from one language to another, and on reaching and mastering them one can feel one’s command of the language grow considerably. Such are the correct word endings and cases in Russian, the tone system in Chinese, and compound word formation in German. One such technique for English is the correct use of Indirect Speech. While in some languages the transition of an utterance from direct to indirect speech is performed “automatically”, by simply removing the inverted commas, in English you will have to consider several factors to form an Indirect Speech sentence, such as the type of utterance, pronouns, adverbs, tense and even the situation in which the actual sentence is created. So this technique has its peculiarities and pitfalls, which is why it requires thorough study. In case you face problems with your essay in English, you can always use our professional editing services. What’s the difference? To use the techniques of direct and indirect speech properly, we must be able to distinguish between them and understand the way they are used. Let’s have a look at their definitions: Now let’s analyze their special features and what these two techniques have in common: Thus we can see that the two techniques serve the same purpose – to relay another person’s words – but do so by different means. These means are defined by the type of conversation, its style, and the punctuation changes. If you feel you need assistance with those rules, you can always consult our competent paper editor online. However, the most difficult part of the transition lies in another difference between Direct and Indirect Speech: the syntax and grammar. Let’s examine it more closely. The change of grammar and syntax when converting to Indirect Speech. 1. Scroll up a bit and look at the last utterance example. What else has changed in the second sentence that we haven’t mentioned? That’s right, the exclamation mark. The imperative utterance became a narrative one. Let’s see what happens if we have an interrogative sentence: Rosie smiled and asked, “Whom are you looking for?”. Let’s make it indirect speech: Rosie smiled and asked whom they were looking for. Here we have a distinctive feature: whatever type the sentence is in Direct Speech (narrative, imperative or interrogative), it will be turned into a narrative one in Indirect Speech, and the punctuation marks change accordingly. There are also some peculiarities depending on the actual type of the sentence: Stay alert to common grammar mistakes and get to know how to avoid them with the help of our professionals. 2. When you change an utterance from Direct to Indirect Speech, all the characteristics of the person and situation (i.e. pronouns and adverbs) must be changed accordingly so that the sentence does not lose its sense: Nigel said, “I’m going home”. 
Charlotte cried, “I don’t want to stay here alone until tomorrow!” – If we leave the pronouns and adverbs as they are, the sentences will acquire a new sense or won’t make sense at all: Nigel said that I’m going home. Charlotte cried that I don’t want to stay here alone until tomorrow. See? It is as if Nigel and Charlotte are talking about another person and not about themselves. The correct version is: Nigel said he was going home. Charlotte cried that she did not want to stay there alone until the next day. We see that the pronouns and adverbs are changed with respect to the person or subject they referred to in the Direct Speech sentence. 3. The rule of the Sequence of Tenses is applied when you change Direct Speech into Indirect Speech. The Present Tense becomes the Past Tense, the Past Simple goes to the Past Perfect, and the Future becomes the Future in the Past. The teacher is asking: “Will you attend tomorrow’s math class?” – The teacher was asking if we would attend tomorrow’s math class. Modal verbs (can, may, must, ought) also shift to their past forms: I confessed, “I can’t buy this book”. – I confessed I couldn’t buy this book. However, if the sentence refers to the present, to events that have just happened, or to well-known facts, the Present Tense may be kept: “The Earth has the form of an ellipsoid,” Isaac mused. – Isaac mused that the Earth has the form of an ellipsoid. I’m repeating, “I’ll be just passing by and not entering the building”. – He’s just repeated that he’ll be just passing by and not entering the building. Learn more about the correct use of the Tenses in the article on our blog http://essay-editor.net/blog/how-to-learn-present-simple-easy . Check the other articles that may be of assistance on this subject: Now that you know… Now you have learned the main rules of creating Indirect Speech utterances and got to know their pitfalls and peculiarities. But how do you implement them in your language practice? The same way as all the other rules: Check for other useful tips in our recent post “How to Learn English For Free And by Yourself?” Now that you know everything about Indirect Speech, you can tell yourself, “I am a Master of Indirect Speech! I have hit another milestone on my way to fluent English!”. And now convert that sentence into Indirect Speech. Got it right? We knew you could do it! Also in this section: Was the essay useful for you? We are always happy to receive your feedback; welcome to our website!
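To make the tense backshifting and pronoun changes above easier to see at a glance, here is a minimal illustrative Python sketch (not part of the original article); the word mappings and the naive word-by-word replacement are simplified assumptions for demonstration only, not a complete grammar engine.

```python
# Minimal, simplified sketch of the backshifting rules described above.
# The mappings are illustrative assumptions, not a full grammar converter.

BACKSHIFT = {
    "am": "was", "is": "was", "are": "were",
    "will": "would", "can": "could", "may": "might", "must": "had to",
    "today": "that day", "tomorrow": "the next day",
    "here": "there", "now": "then",
}

def report(speaker: str, quote: str) -> str:
    """Turn a direct quote into a rough indirect-speech sentence."""
    words = quote.strip('"!?. ').split()
    shifted = [BACKSHIFT.get(w.lower(), w) for w in words]
    # Pronoun shift: a real converter would track person and gender.
    shifted = ["he or she" if w.lower() == "i" else w for w in shifted]
    return f"{speaker} said that " + " ".join(shifted) + "."

print(report("Nigel", '"I am going home today."'))
# -> Nigel said that he or she was going home that day.
```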
Melanoma is the least common of the 3 main types of skin cancer. Other types are called non-melanoma skin cancers and usually include basal cell and squamous cell carcinomas. Melanoma occurs when pigment cells in the skin (called melanocytes) mutate and begin growing out of control. This is often related to excessive sun exposure, either from natural sunlight or artificial sources, such as indoor tanning beds. The abnormal cells form a spot or lesion that can be seen on top of the skin. The spot can be entirely new, or less commonly involve a prior mole on the skin. Many melanomas are diagnosed before they grow into surrounding areas or spread in the body. Others may grow into nearby tissues (mainly regional lymph nodes) or spread to other parts of the body (metastasize). Melanoma is less common than non-melanoma skin cancer. In general, these skin cancers are highly treatable if they are detected and treated early.
During simulation the user can see the current flow in the circuit, the level of the node voltages, the level of the branch currents and, most importantly, the current path in the circuit. The simulation tool can animate any power electronic circuit or electrical drive. The early simulation programs produced a long list of numerical results. Nowadays most simulation programs offer the presentation of simulation results in graphical windows. The user can examine these graphical results with the mouse to obtain the numerical value at each point in time. The next step is the visualization of the simulation. Most important in power electronics is the current path. For example, the freewheeling of diodes becomes clear when the user sees the current path changing from the switches to the freewheeling diodes. The following visualization guidelines are used: Animation of an electric circuit may look like a toy, but this impression is wrong. Animation is a viable tool for teaching, gaining insight, checking behavior, or searching for failure modes. The advantages for teaching are clear. Students, for example, can see the current paths in rectifiers or understand freewheeling and discontinuous mode in SMPS. For complex topologies, such as the Vienna-Rectifier [Kolar, 1994], animation can be very helpful for understanding and verifying the principle of the converter. During animation, it becomes clear to the user how the converter is behaving, and failure modes are detected, even those failure modes the user was not aware of. Failure analysis without animation requires a lot of time to check each component; with animation, each failure is displayed directly, for example a switch that is opened or closed at the wrong interval, or voltage levels that are too high. Animation costs time. A simulation should be as fast as possible, and animation slows the simulation down. Therefore the user should have the ability to turn the animation on or off. If a circuit is animated, the simulation in most cases has to be slowed down anyway in order to follow the behavior of the system, so the time consumption of animation is in many cases not a problem. To speed up the animation it is not always necessary to show each simulation step: if a small time step is required for the simulation but the animation varies slowly compared to that time step, not every time step has to be displayed in the animation. Figure 2 shows a typical dialog box for animation properties. To visualize the values of voltage and current, two different scales are required; for example, an SMPS operating at the AC mains can have current levels of only up to 1 ampere. The control signals can also differ in magnitude from the voltage and current levels. Typical control signals range between 0 and 1, where the on-state of a control signal is clearly signaled by a red color and the off-state by a black color. To prevent the schematic from blinking like a Christmas tree, the user should have the possibility to turn the various animation effects on or off. For example, the constant display of numerical values at each node can make the schematic very crowded, so the user should be able to turn it off. Example: Vienna-Rectifier. In figure 3 the Vienna-Rectifier [Kolar, 1994] is displayed. The current path, which is colored during the animation, is shown thick in this figure. 
One can see clearly that the complexity of the current path gives valuable information on the functioning of the converter. In figure 4 the animation of a buck converter is shown; in this figure the freewheeling of the diode is shown. The level of the output voltage and the level of the current through the inductance L1 are displayed by two analog meters, which show the actual values during the animation. In figure 5 the animation of a DC shunt machine with a crane is shown. The DC shunt machine is controlled by a controlled voltage source, which is regulated by a library block 'Crane Control'; a controlled rectifier could also have been used here. The library block modeling the crane includes an object block, which models the visualization of the crane. Depending on the angle of the axis of the DC shunt machine, the load is lifted by the crane. Animation gives valuable information about the simulation of power electronics and electrical drives. Displaying current paths gives insight into the behavior of the circuit and can reveal failure modes; it reveals more insight into the circuit operation than only displaying simulation results in graphs. Because the animation is based on simulation, even complex circuits can be animated. This makes animation a practical tool for designing power electronics and electrical drives.
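As a rough illustration of the time-step decimation idea discussed above (redrawing the animated schematic only every Nth solver step so that animation does not slow the simulation unnecessarily), here is a minimal Python sketch; simulate_step and redraw_schematic are hypothetical placeholders and do not correspond to any real simulation package.

```python
# Illustrative sketch of decimating animation frames during a simulation run.
# simulate_step() and redraw_schematic() are hypothetical placeholders.

def run_simulation(t_end: float, dt: float, animate: bool, steps_per_frame: int = 50):
    """Advance the solver with time step dt; redraw only every Nth step."""
    t, step = 0.0, 0
    state = {"iL1": 0.0, "vout": 0.0}           # example state variables
    while t < t_end:
        state = simulate_step(state, dt)         # one solver step (placeholder)
        if animate and step % steps_per_frame == 0:
            redraw_schematic(state)              # color current paths, update meters
        t += dt
        step += 1

def simulate_step(state, dt):
    return state                                 # stub: real solver goes here

def redraw_schematic(state):
    pass                                         # stub: real drawing goes here
```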
Loose parts are items and materials that we can move, adapt, change and manipulate. Play with loose parts is not only fun; it also develops literacy skills as children use creativity, problem solving and language while they negotiate what an item will represent. The materials come with no specific set of directions, and they can be used along with or combined with other materials. Children can turn them into whatever they desire. These objects invite conversations and interactions; they encourage collaboration and cooperation. Loose parts promote social competence because they support creativity and innovation, all of which are valued skills we need as adults. Examples of loose parts include stones, buttons, wooden cubes, twigs, leaves, pinecones, fabric, beads, balls, rope, sticks, shells and Q-tips; the list goes on. Children acquire their first math skills and numerical concepts when they manipulate small loose parts. Using blocks and bottle caps for sorting, classifying, combining and separating is math in action! Children learn one-to-one correspondence when they make connections among those loose parts. You will commonly hear them counting and arranging the parts in specific sequences, patterns and categories by color, type and number. Loose parts come with no instructions; rather, they invite children to use their imaginations to build, invent, choose, collaborate, consider and more. With loose parts play, the possibilities are only limited by our imaginations. Who knows where loose parts play adventures will take us? It is an exciting mystery tour of fun and learning! Community Literacy Coordinator, Columbia Basin Alliance for Literacy – Trail and Area
So you’ve returned from the holiday break and had a couple of weeks to adjust to the school routine. Your children have probably been assessed by their teachers, and you may have a good idea of how well they are retaining the knowledge they learned in the first part of the school year. Now the fun begins: getting them ready for the final countdown to the end-of-year assessment. If your children are like many, they have an area where they are lagging behind academically, and if that area is math, you are in luck. Math in Focus is an excellent way to teach any youngster how to master basic math skills, which will set a great foundation for algebra and beyond. The first step is to get the appropriate workbook to help your child learn. Don’t be afraid to get a workbook that is below or above their grade level; the key is to get what your child needs. Teaching is about adapting to the child and how they learn. The second step: try to do the workbook yourself. If you understand the concept, it will be far easier for you to explain it to your child, but be prepared to try a few different tactics. Kids will see the world differently than you, and the more tricks you have to teach a concept, the more likely you are to find one that works for them. Math isn’t something you can force. It has to be fun and challenging, but not beyond comprehension, so take it slow and keep at it. Here are a few tactics that I have used to teach basic addition and subtraction. This one I call the rainbow effect. It’s pretty simple. First, have your child write the numbers zero to five. This gives them practice writing their numbers and keeps them focused on the project. Next, have them draw a colorful rainbow connecting the numbers, like the picture above. Then, have your child write out each addition equation. Keep in mind that you may have to help them with this process. You could have the basic equation already filled in, ___ + ___ = ___ , or you could have parts of it filled in and let them complete the rest, ___ + 4 = 5 . Examples are good too. The idea is to generate interest, not frustration. Once they appear to get the hang of it, start using numbers from 2 to 10. You can then add subtraction problems to the mix. They can still use the rainbow effect for this, or if they aren’t a “rainbow” kind of kid, there are other activity toys that work well, like Unifix Cubes. Some kids learn best with something they can build (tactile learning), and Unifix Cubes work really well for these learners. Let them set up the equation using the cubes. This more interactive technique will keep them engaged while they learn addition and subtraction. Once you find your child has mastered a basic technique, you can move on to larger numbers up to twenty and design some Fact Family Triangles. These are great for practicing addition and subtraction skills and help reinforce the strong bond between the two operations. Once your child has the “basics” down, these concepts can easily be converted to algebraic equations. Using the example above, you can get them to fill in an equation like ___ + 5 = 3 + 2 or 1 + ___ = 2 + 3, and then transition to y + 2 = x + 4 . Take each transition and concept slowly, introducing it a few times until they start asking questions about it… questions equate to interest, and then you can start working on each concept in more depth. Still need more? Here are a few videos to help. Have fun!
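As a small illustrative add-on (not from the original post), the sketch below prints the four related facts of a fact family triangle and solves a missing-addend blank of the kind shown above; the function names are made up for this example.

```python
# Illustrative sketch: print a fact family and solve a missing addend.

def fact_family(a: int, b: int):
    """Print the four related addition/subtraction facts for a, b and a + b."""
    total = a + b
    print(f"{a} + {b} = {total}")
    print(f"{b} + {a} = {total}")
    print(f"{total} - {a} = {b}")
    print(f"{total} - {b} = {a}")

def missing_addend(known: int, total: int) -> int:
    """Solve ___ + known = total, the same step used in the blanks above."""
    return total - known

fact_family(2, 3)
print(missing_addend(4, 5))   # ___ + 4 = 5  ->  1
```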
This activity gives students a chance to listen to differences with respect. Sections of the Curriculum: Pre-1861: Disunion. Students will be able to identify the causes of the American Civil War. Here you can access games and activities for your ideal classroom experience. There are 3 versions of the same scenario that identify 3 different criminals, so you can use them for 3 classes; this avoids having the kids tell the next class who the suspect is. Here are some of the best lesson plans on the Internet. No Name-Calling Week is January 21-25, 2019. Students can guess where each fact goes in the diagram, then check their work with the answer key and write the answers into their worksheet. Older Version: Part 1 — Discussion and Categorizing: Students will work in collaborative groups to determine where each statement goes in the Venn diagram. A lesson plan is important because it supports the material from textbooks and other sources, reinforcing the concepts. Then, the school schedule is set up so that there is a structure and framework within which the home education is based. I also use the characters to assign partners and seats randomly. The Electoral College can be a complex concept for students to understand. Evaluate competing design solutions using a systematic process to determine how well they meet the criteria and constraints of the problem. Yet each child is unique, so it is up to the home educator to come up with the best lesson plan. Firstly, they have to know the learning style of the student. How do engineers design new things or improve existing ones? Lesson Plan Grade: 6th-8th. Explore how technology can save lives in this fun engineering lesson plan! As such, it can be hard to find good quality lesson plans. Creative technology projects done for a real purpose can engage them in the work necessary for high-level content learning at this age. Students design an infographic on the Electoral College. Once they give the correct answer, click on the Google Slide and the answer will pop up on the screen. The rest of the class will think about the fact and where it might go in the Venn diagram, but not share their answers. Using Lesson Plans: In a home school curriculum, the home educators set the school calendar year according to their schedule. We then do further explorations of density and practice using the formula. The next group chooses a statement, and so on, until all of the statements have been placed correctly in the Venn diagram. In most cases, lesson plans from schools, colleges, government agencies, and recognized educational companies are the best. They will then write in the facts as each answer is discussed. Students in middle school need to be challenged with creative approaches that get them thinking and moving beyond rote responses. This is because the subject matter is different and the ages of the students are different. Lesson Plan Grade: 6th-8th. How does light interact with matter? How does this affect which direction their roots will grow? Learn from fellow teachers, parents and experienced caregivers about the development that happens during the middle school years so you can approach potential problems and challenges with wisdom and a sense of humor. The week is rooted in the idea of KindnessInAction — not merely recognizing the importance of kindness, but actively adding kindness into our every action. 
This process helps us to refine the lessons until they become teacher-friendly, fun for students and effective. Evaluate competing design solutions using a systematic process to determine how well they meet the criteria and constraints of the problem. Depending on your student's abilities, you may also want to explore lesson plans. Lesson Plan Grade: 3rd-8th. Coming up with new ideas is hard! Develop a model to generate data for iterative testing and modification of a proposed object, tool, or process such that an optimal design can be achieved. Middle school students are around 11 to 14 years old, so their mental capacity and constitution are unique. Analyze and interpret data on natural hazards to forecast future catastrophic events and inform the development of technologies to mitigate their effects. For example, evaporation is placed where liquids and gases overlap. Compare to other schools or official public opinion. Below are the Curriculum Lessons along with additional selected lessons, all of which fit within our Civil War Goals for Middle School Students. From elementary lesson plans that are based heavily on activities to high school lesson plans that are much more academic, lesson plans in middle school can be viewed as an in-between where there should be some activities and some academic material. Engineers and inventors use different brainstorming techniques to help them think outside the box and come up with new ideas. In this activity, students will be able to both learn and apply their understanding of the role and function of this element of our electoral system by creating infographics. In this article, we will look at the best middle school lesson plans. If the statement is incorrectly placed into the diagram, the statement is returned to the list outside of the diagram. Many plans offer a series of lessons in a unit that you can teach during a several-day or week-long period, while others offer thematic tie-ins across several subjects, reinforcing the relevancy of the subject matter from different angles. Develop and use a model to describe that waves are reflected, absorbed, or transmitted through various materials. The students also created a graph showing how the school population voted and figured out the percentage of votes cast for each candidate. In this fun lesson plan, your students will measure the energy content of food by literally burning it using a device called a calorimeter that they will design and build themselves. Then, the home educator has to decide which lesson plan is suitable for the student. If the group is correct, the statement stays in the Venn diagram and each student writes it into their Venn diagram handout and crosses it off the list. I use this lab to tie their measuring skills together and introduce the concept of density. Plan an investigation to determine the relationships among the energy transferred, the type of matter, the mass, and the change in the average kinetic energy of the particles as measured by the temperature of the sample. Use our lessons for elementary, middle, and high school, including our new Identity lesson for grades K-2. 
Mission Critical from San Jose State University: Mission: Critical is an interactive tutorial for critical thinking, in which you will be introduced to basic concepts through sets of instructions and exercises. For middle school students, there are many lesson plans that can be used for various subjects. It is best that the lesson plans be introduced after a specific idea or concept has been introduced to the student, because a particular lesson is usually based on a certain idea or theory. For any other use, please contact Science Buddies. They were responsible for generating questions (roving reporters), talking to the primary grades, designing posters and ballots, setting up and manning the lemonade stand, distributing and collecting ballots, counting the ballots and announcing the winner at the end of the school day. Be the Experts: the class surveys public opinion on its own. With the help of free online tools, students can ask their own questions and survey public opinion.
Unlike fractions are fractions that have different denominators. Before you add unlike fractions, you need to find the common denominator. The common denominator of two or more fractions is a common multiple of the denominators. Watch each video to view examples of how to find least common multiples of two or more numbers. To add fractions with unlike denominators, you should:
- Find the common denominator.
- Rewrite each fraction using the common denominator.
- Add the numerators.
- Keep the common denominator.
- If possible, reduce the final fraction and give the answer in reduced form.
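To make those steps concrete, here is a short Python sketch (an illustration added here, not part of the original lesson) that follows them exactly: find a common denominator, rewrite the fractions, add the numerators, and reduce.

```python
from math import gcd

def add_fractions(n1, d1, n2, d2):
    """Add n1/d1 + n2/d2: find the common denominator, add numerators, reduce."""
    common = d1 * d2 // gcd(d1, d2)                      # least common multiple of the denominators
    total = n1 * (common // d1) + n2 * (common // d2)    # rewrite each fraction, then add numerators
    g = gcd(total, common)
    return total // g, common // g                       # reduced numerator and denominator

print(add_fractions(1, 4, 1, 6))   # 1/4 + 1/6 = 5/12
```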
Flashcards in 13.3 - Antibiotics Deck (21):
What is an antibiotic?
A substance produced by a living organism that kills microorganisms or slows down their growth.
Are antibiotics effective on viruses?
No.
Why aren't antibiotics effective on viruses?
Viruses have a protein coat instead of a peptidoglycan wall.
Who discovered how penicillin works?
What are the circles around the penicillin called?
Zone of inhibition.
What are the 2 types of antibiotics?
Bactericidal and bacteriostatic.
Which type of antibiotic kills bacteria?
Bactericidal antibiotics.
How do bactericidal antibiotics work?
They prevent cell wall synthesis, leading to bursting of the bacterial cells (lysis).
Give an example of a bactericidal antibiotic.
Penicillin.
What do bacteriostatic antibiotics do?
They prevent the reproduction of the bacteria by interfering with protein synthesis.
What is an example of a bacteriostatic antibiotic?
Erythromycin.
What are broad-spectrum antibiotics?
Antibiotics that work against a wide range of different types of bacteria.
How does antibiotic resistance develop?
1) Mutations occur in genes that regulate resistance. 2) This leads to variation within the bacterial population. 3) Some cells become resistant to the antibiotic. 4) These cells survive, reproduce, and pass on the gene. 5) Eventually all bacterial cells display resistance.
What effect do sulphonamides have on bacterial cells?
They act as competitive inhibitors of the cell's metabolism, meaning the cells cannot synthesise DNA.
What effect does erythromycin have?
It prevents protein synthesis by blocking one of the sites on the ribosome.
Why are MRSA infections dangerous?
Because MRSA is resistant to most antibiotics.
What is MDR-TB?
Multiple drug resistant strains of the TB bacterium.
Why is it important that MDR-TB patients complete their course?
Because otherwise the most resistant bacterial populations will survive.
What factors cause the spread of resistance?
Overuse of antibiotics and the use of antibiotics in animal feed.
How can we slow down the spread of resistant bacteria?
Reduce contamination in hospitals, improve hand hygiene, isolate infected patients, and use aseptic techniques rigorously.
This expository nonfiction text is about how living things have adapted traits of mimicry or camouflage to either fool or attract prey or to repel or hide from predators. The author includes multiple examples of adaptations, and there are interesting pictures on every page that depict how living things (animals, insects, plants) are disguised to mimic other living things or camouflage/hide from other living things as either predator or prey. (Houghton Mifflin Harcourt StoryTown, 2008) This lesson was created as part of the Basal Alignment Project, during which teachers created CCSS-aligned lessons for existing literary and information texts in basal readers. All page numbers and unit/week designations found in this lesson relate to the edition of the basal reader named above. If you are using a trade book or different edition of this title, the page/unit/week references in this lesson will not match. Consult the content referenced in the body of the lesson to determine appropriate page numbers for your text.
The International Association of Fire Fighters is urging households to change more than just smoke alarm batteries. The IAFF also recommends changing to a photoelectric smoke alarm; about 90 percent of homes are equipped with ionization smoke alarms. “More than 3,000 people die each year in the United States and Canada in structure fires, and we need to do everything we can to reduce that number,” IAFF General President Harold A. Schaitberger said. “Using better smoke alarms will drastically reduce the loss of life among citizens and fire fighters because it will mean earlier detection of fires and result in faster response by emergency crews.” It is the position of the IAFF that all federal, state and provincial officials should require that all relevant building standards and codes developed in the United States and Canada include a mandate for the use of photoelectric smoke alarms. Research has clearly demonstrated that photoelectric smoke alarms are more effective at warning of smoke from smoldering fires than ionization smoke alarms. With earlier warning, people have more time to escape a burning structure and can call 911 sooner. Photoelectric smoke alarms are also less susceptible to nuisance alarms. To stop nuisance alarms, citizens often disable smoke alarms, placing themselves, others in a home or building, and fire fighters at greater risk. Photoelectric smoke alarms contain a light source and a light-sensitive electric cell. Smoke entering the detector deflects light onto the light-sensitive electric cell, triggering the alarm. These alarms are more sensitive to the large particles given off during smoldering fires – the kind of fires that typically occur at night when people are sleeping. Ionization smoke alarms contain a small amount of radioactive material and establish a small electric current between two metal plates; the alarm sounds when smoke entering the chamber disrupts that current. But this technology leads to a delayed warning in smoldering fires, which can mean greater loss of life among people and fire fighters in a burning structure as the fire becomes more developed. A delayed warning during a smoldering fire, especially at night, can incapacitate people who are sleeping and lead to death as the fire spreads.
Zinc deficiency can be a major issue overlooked by vegans and strict vegetarians. You can essentially eat the right foods, but due to bioavailability, phytic acid, and medical conditions, your body may not properly absorb enough of this essential mineral. Deficiency presents in your body early because zinc interacts with your body on multiple levels and affects your body’s major systems. Understanding the balance required can assist you and your doctor in diagnosing and starting a zinc deficiency treatment that works with your diet. What is Zinc? It is an essential nutrient that your body can’t store or produce on its own. Zinc’s functions in your body range from infant and child growth and development to keeping your immune system working properly. However, those are only two of over 300 functions your body uses zinc to perform. Others include DNA and protein synthesis, aiding in healing your wounds, supporting enzymes, and cell development and division. (1) Zinc occurs naturally in animals and plants. Its synthetic form is also common in fortified foods, such as breads, crackers, and other grain-based goods. Who is at Risk for Deficiency? Your diet and any health conditions that cause malabsorption will determine your risk of a zinc deficiency. Carnivores and omnivores have the lowest deficiency risk, while strict vegetarians and vegans are at the greatest risk of insufficiency related to their diet. Age can sometimes be a factor; however, this mostly affects older infants and children rather than adults. Babies and children with a history of low zinc levels might find the problems continue into adulthood. (2) Increased Risk if You Have: • Crohn’s disease • Chronic renal disease • Chronic liver disease • Malabsorption syndrome • Sickle cell disease • Ulcerative colitis • Short bowel syndrome • A vegan or vegetarian diet • Had gastrointestinal surgery • Other digestive disorders, such as IBS • Alcohol addiction • Anorexia or bulimia • Medications, such as antibiotics, diuretics, and penicillamine. In addition to the list, pregnant and breastfeeding women require additional zinc, and they’re at a greater risk for developing a deficiency. The same applies to infants between seven months and a year of age who breastfeed exclusively, or if the breastfeeding mother is deficient. (3, 4) How Much Zinc Should You Receive? Your age and sex are the two main factors for determining your recommended dietary allowance (RDA). Men should aim for 11 mg per day. Women should receive 8 mg per day unless they’re pregnant or breastfeeding, in which case they should receive 11 to 12 mg per day. Risks Associated with Zinc: If your diet is your main source of zinc, it’s unlikely you will overdose or come anywhere close to high levels. Overdosing on zinc is more common with supplements or long-term use of higher doses under a doctor’s direction. The actual toxicity threshold or maximum dose is unknown. A Zinc Overdose or Toxicity can Cause: • Nausea and/or loss of appetite • Abdominal cramps • A decrease in good cholesterol • Decreased absorption of other key nutrients, like copper and iron. Because of these complications, you should take supplements under the care of a doctor, naturopath, or a dietician who is aware of your diet. Your doctor might recommend a higher dose and adjust it as needed until your zinc levels balance. If you choose to supplement without guidance, choose a supplement with a lower dose, more in line with the recommended daily milligrams, to avoid possible toxicity. 
Understand that without blood work and monitoring, you can’t know for certain how much zinc your body receives from food alone. How to Diagnose a Deficiency? Only a blood test can diagnose a true deficiency, but it’s not that simple. Your body can’t store zinc, so a blood test shows only what’s currently in your blood, and if you recently ate enough zinc-rich foods, your results can come back normal. Your doctor will likely base their diagnosis on your symptoms, and in severe or unclear cases, they might order a blood test to determine your zinc levels. Assist your diagnosis by keeping a food journal, especially if you’re already consuming a zinc-rich diet, because in that case malabsorption syndromes, medications, or other gastrointestinal issues could be causing your deficiency. You and your doctor can use this information to find the cause and a suitable course of treatment. 1. Unexplained Weight Loss: Dropping weight quickly without dieting, exercise, or illness as the cause can be an early warning sign that something in your body is wrong. Be sure to let your doctor know, as this can be a symptom of deficiency as well as an early sign of other diseases and conditions. 2. Mental Fog: Waking in a fog before your morning coffee or tea isn’t the same as a mental cloud that persists throughout the day. It can make concentrating on simple daily tasks difficult, and it can affect adults and children. 3. Persistent Diarrhea: Frequent, loose stools can be both a sign and a cause of a deficiency. You might also be more susceptible to other bacteria and parasites, such as E. coli. Unsurprisingly, taking zinc supplements or consuming more zinc-containing foods can aid in stopping deficiency-caused diarrhea. 4. Sense of Smell and Taste Drastically Change: Zinc is partially responsible for developing your sense of smell and taste. The enzymes that allow us to develop these senses require zinc to work, so when you’re deficient, you’ll notice changes to your smell and taste capabilities. Often you’ll have a metallic taste in your mouth; supplementing or increasing zinc-rich foods restores it. 5. Cuts and Scrapes That Take Too Long to Heal: Slow-healing wounds can be another symptom. Increasing zinc intake or using zinc-based creams can promote healing too, which makes it an effective zinc deficiency treatment. Zinc and skin health share a tight-knit link because zinc acts at the cellular level; without it, our bodies can’t heal properly. 6. Frequently Sick or Long-Lasting Colds and Viruses: Your immune system requires zinc to function properly. It directly supports T-cell development, assists in killing harmful bacteria, viruses, and cancerous cells, and supports the protection of cellular membranes. When your body is deficient, you become more susceptible to illness. 7. Hair Loss or Thinning Hair: More investigation is needed, but Indian researchers believe zinc deficiency and hypothyroidism share a close link. Zinc plays a vital role in hormone balance, which means that if you don’t receive enough, your thyroid can reduce its function, which can also lead to adrenal fatigue. Both cause hair thinning and loss in men and women. Returning zinc levels to a normal range reversed the symptoms. 8. Dry Skin: If you’re constantly hydrating and eating a nourishing diet, your skin shouldn’t be dry, itchy, and flaking, right? Wrong. Unfortunately, if you’re lacking zinc, your skin will show the earliest signs. Besides zinc’s role in cellular development, your body won’t receive the anti-inflammatory and anti-itch properties zinc delivers. 
Ever wake up and discover unexplained bruises? Do you develop bruises from light bumps? Some people bruise easily, and the bruises might linger a bit too long as well. Genetics often take the rap, but it could be that you’re not getting enough zinc for your body to heal at a normal rate. For a teenager, hormonal acne is an unpleasant fact, but in an adult it can be an early symptom of deficiency. It’s one of the more common symptoms you’ll notice; however, most people overlook the connection to a possible deficiency in zinc. Zinc Deficiency Treatment: In most cases, you should address your diet and seek out zinc sources naturally. Some people will need to supplement. Do speak with a doctor or nutrition specialist if you’re already eating a zinc-rich diet but symptoms persist; it might be a sign of another underlying condition or an absorption issue. Vegans and Vegetarians: If you’re a vegan or strict vegetarian, you do have food-based options, although you might find it easier to supplement. Receiving enough zinc can be difficult since non-meat sources can contain phytic acid, which hinders your body from using the zinc. On top of phytic acid, the bioavailability of zinc in plant-based foods isn’t comparable to that of meat, dairy, eggs, and shellfish. This means your body can’t use it as efficiently, and you might need to eat more. However, some of the better vegan and vegetarian sources are high in fat and calories. Soaking, heating, sprouting, and fermenting plant-based sources reduces the phytic acid and increases your body’s ability to use the zinc. The upside is that these prepared foods are lower in fat and calories, but the steps do take additional work and planning. Zinc Rich Foods: • Legumes: adzuki beans, lentils, chickpeas, soybeans, mung beans, and black beans • Seeds: pumpkin, sesame, and hemp • Nuts: pine nuts, peanuts, pecans, almonds, and hazelnuts • Whole grains: whole wheat flour, quinoa, amaranth, buckwheat, oats, and rye • Potatoes: all varieties • Green beans • Leafy greens: kale, Swiss chard, and spinach • Dark chocolate • Whole grain and multigrain cereals, breads, and crackers • Cashew milk • Almond milk. Supplements aren’t created equal. According to studies, you should opt for zinc citrate, zinc sulfate, or zinc gluconate, which are water-soluble and well absorbed by your body. Final Thoughts on Zinc Deficiency: Zinc deficiency can be a real problem for vegans and vegetarians, but most carnivores and omnivores won’t require supplements unless they have certain medical conditions or take medications that interfere with absorption. Vegans and vegetarians might need a supplement to treat their deficiency, or they can re-examine certain areas of their diet. Luckily, they don’t need to eat meat or animal by-products to receive enough zinc. Sprouting and fermenting legumes, seeds, and grains increases zinc’s bioavailability; it does the same for other essential vitamins and minerals, making your tasty, healthy choices more healthful. Talk with your doctor or naturopath if you have any of the symptoms discussed here to explore a zinc deficiency treatment that works for you.
Panspermia is the theory that asteroids, meteors or comets can carry microorganisms from one planetary system to another, and that such a process – perhaps microbes coming from Mars – may have helped life first develop on Earth. But could the reverse also be possible? Could microbial life be launched from Earth by asteroid impacts? Could that earthly life then end up leaving our solar system altogether? A new research paper by theoretical physicist Abraham (Avi) Loeb at Harvard University suggests that there could have been many such events over the lifetime of the Earth so far. He also just wrote a thought-provoking opinion article in Scientific American discussing this fascinating possibility. The new peer-reviewed study was submitted to arXiv on October 14, 2019. From the paper: Exporting terrestrial life out of the solar system requires a process that both embeds microbes in boulders and ejects those boulders out of the solar system. We explore the possibility that Earth-grazing long-period comets and interstellar objects could export life from Earth by collecting microbes from the atmosphere and receiving a gravitational slingshot effect from the Earth. We estimate the total number of exportation events over the lifetime of the Earth to be about 1-10 for long-period comets and about 1-50 for interstellar objects. If life existed above an altitude of 100 km [62 miles], then the number is dramatically increased up to about 100,000 exportation events over Earth’s lifetime. The idea that earthly life could be exported to other places in the solar system or even beyond is a fascinating one. But has it really happened? As Loeb noted, in most cases asteroid impacts wouldn’t be able to send rocks outside the solar system, but some of them could still make that journey with the help of other planets: Most asteroid impacts are not powerful enough to eject terrestrial rocks with enough speed to leave the solar system. But many solar system bodies spend most of their time in the Oort Cloud, a sort of comet nursery that hovers, loosely bound to the sun, at distances up to 100,000 times farther out than Earth. Some of these bodies appear episodically as long-period comets with eccentric orbits that bring them close to the sun, where they can get gravitationally kicked by planets all the way out of the solar system, like a ball running through a pinball machine. As well as microbes in rocks or soil, there are colonies of microbes in the atmosphere itself, at altitudes of about 30 to 48 miles (48 to 77 kilometers). They could be “scooped up” by asteroids passing very close to Earth, but not impacting. This could even happen with asteroids that originated from beyond the solar system. As Loeb also noted, microbes would be much better suited for surviving being violently ejected into space inside a chunk of rock: It is well known that fighter pilots can barely survive maneuvers with accelerations exceeding 10 gs, where g is the gravitational acceleration that binds us to Earth. But Earth-grazing objects would scoop microbes at accelerations of millions of gs. Could they survive the jolt? Possibly! Microbes and other tiny organisms such as Bacillus subtilis, Caenorhabditis elegans, Deinococcus radiodurans, Escherichia coli and Paracoccus denitrificans have been shown to live through accelerations just one order of magnitude smaller. As it turns out, these mini astronauts are far better suited for taking a space ride than our very best human pilots. So, could Earth have spread life to other worlds? 
If any microbes from Earth ever did make this journey billions of years ago, could they have survived anywhere else in the solar system if they landed on another planet or moon? Not too likely, apart from maybe Mars (depending on how habitable it was at the time) or ice/ocean moons like Europa or Enceladus. But even on those moons, any microbes would just get dumped on the airless surfaces covered in ice. It’s doubtful that they could make their way down to the oceans below through the ice crusts unless perhaps they fell into a deep crack connected to water vapor geysers, as on Enceladus. If any life is ever discovered in the oceans of Europa or Enceladus, it’s more likely that it evolved there on its own. Also, if any microbes did make it out of the solar system completely, they would be traveling for millions or billions of years before encountering any other exoplanets or exomoons. While it hasn’t been proven yet that life from Earth has previously traveled throughout – and perhaps even out of – the solar system, it is, according to Loeb, certainly a very interesting possibility. Bottom line: A new paper by theoretical physicist Abraham (Avi) Loeb makes the case that microbes could have been ejected into space by asteroid impacts billions of years ago, in a reverse kind of panspermia. Paul Scott Anderson has had a passion for space exploration that began in childhood when he watched Carl Sagan’s Cosmos. While in school he was known for his passion for space exploration and astronomy. He started his blog The Meridiani Journal in 2005, which was a chronicle of planetary exploration. In 2015, the blog was renamed Planetaria. While interested in all aspects of space exploration, his primary passion is planetary science. In 2011, he started writing about space on a freelance basis, and he currently writes for AmericaSpace and Futurism (part of Vocal). He has also written for Universe Today and SpaceFlight Insider, has been published in The Mars Quarterly and has done supplementary writing for the well-known iOS app Exoplanet for iPhone and iPad.
The slave trade spread Africans far from their homeland, including into the colonies that would become the United States of America. After slaves were freed in the United States in 1863, blacks continued to dress in styles similar to others living in the United States, but during the 1950s and 1960s many black people in the United States began to protest the prejudice and injustice they experienced in much of American society, especially in the southern states. They held protest marches and other demonstrations in order to force changes in laws that unfairly favored white citizens over black citizens. This civil rights movement did change many of those laws and brought about many other changes in the lives of African Americans. Among these changes was an increased pride in black identity, which was expressed in many ways, one of which was an appreciation of African heritage. By the mid-1960s a new style of dress and hairstyle, which emphasized African clothing and African physical characteristics, had become popular among American blacks. In the decades before the civil rights movement, white European standards of beauty had dominated the fashion world, and white European hair and facial characteristics were considered "normal" and desirable. African Americans had often tried to imitate those characteristics, by straightening their tightly curled hair and minimizing their African features. However, as American blacks began to speak out and demand their rights, they also began to look differently at their own bodies. "Black is Beautiful" became a popular slogan, and many blacks began to appreciate their African looks. Instead of using hair straighteners, which were often painful and damaging to the hair, many black people let their curly hair go naturally into large round afros or "naturals." African features such as flat noses and thick lips began to be viewed as beauty advantages rather than defects. Many black Americans changed their names to African names. In 1965 an African American woman named Flori Roberts started a company to make cosmetics designed especially for black skin, and in 1969 Essence magazine was founded as a fashion journal for professional black women. Along with this increased appreciation of African features went a growth in the popularity of traditional African clothing styles and fabrics. Both African American men and women began to wear loose, flowing shirts and robes called dashikis and caftans made of brightly colored African fabrics. Many wore turbans or brimless caps of the same bright materials. These traditional fabrics, woven and dyed in Africa, became prized symbols of the heritage of American blacks. The interest in African fashion soon spread into the mainstream, as French designer Yves St. Laurent (1936–), who was born in northern Africa, introduced fashion lines of African and Moroccan clothing.
Heavy metal poisoning led to mass extinction. FAU researchers have discovered that natural heavy metal poisoning led to an increase in deformities in plankton, causing a catastrophe. The dinosaurs were not the only creatures to become extinct – there have been other, more dramatic events in the earth’s history after which large numbers of species disappeared. The reasons for this remain unclear. However, researchers from FAU have now discovered a previously unknown cause for mass extinctions. Severe naturally occurring heavy metal poisoning before an extinction event led to an increased level of deformities in certain organisms, which could be seen as a precursor to the catastrophe. The ground-breaking results of their study have recently been published in the journal Nature Communications.* The debate as to whether the dinosaurs became extinct 65 million years ago as a result of an asteroid impact or as a consequence of climate change is still ongoing, as is the discussion of the role played by changes in sea level. Yet one thing is certain: mass extinction events have occurred again and again throughout the history of the earth. If you turn back far enough through the pages of the earth’s history, back to the Palaeozoic Era, for example, you will find an extremely dramatic event. Around 450 million years ago, the second largest mass extinction in the planet’s history took place, an event during which half of all species disappeared from the earth forever. In addition, there were also many smaller extinction events during the Palaeozoic Era. The pattern of these smaller events is very similar to that of the second largest event, meaning they can be used to draw conclusions about what caused it. Now research by palaeontologist Prof. Dr. Axel Munnecke and palaeobiologist Wolfgang Kießling from GeoZentrum Nordbayern at FAU is shedding light on the subject. In collaboration with researchers from France and the USA, the FAU researchers carried out a geochemical analysis of 420 million-year-old sediments from Libya that date from the Silurian Period, with interesting results. ‘The microfossils contained in the core samples that were deposited at the beginning of the extinction event display not only a very high rate of deformed organisms but also high concentrations of toxic elements such as arsenic, lead and manganese,’ explains Prof. Axel Munnecke. Prof. Munnecke also has an explanation for how this poisoning occurred. ‘We suspect that toxic compounds were released from deep-sea sediments by what are known as oceanic anoxic events and transported to the flat shelves due to the spread of the resulting oxygen-free water, where they led to an increased number of deformities in the organisms living there.’ Deformities as harbingers of a catastrophe: The heavy metal poisoning had severe consequences for organisms such as the minute zooplankton, measuring a tenth of a millimetre, that created the chitinozoa microfossils examined in the research – they developed numerous deformities. These findings allow the FAU researchers to join the dots. ‘One of the similarities between the mass extinctions in the Palaeozoic Era is that higher numbers of deformed organisms are often observed worldwide at the beginning of these events,’ says Prof. Axel Munnecke. The same applies to the second biggest mass extinction event mentioned previously. The researchers are therefore taking their hypothesis one step further. 
They believe that other mass extinctions could also have been triggered by naturally occurring heavy metal poisoning. This does not mean that the organisms were killed directly by heavy metal poisoning, but that increased levels of metals in the water of the oceans started a complex chain of events that is not yet fully understood and which eventually led to mass extinctions. ‘If this is the case, the deformities were the harbingers of the catastrophe or, to put it another way, the canaries in the coal mine.’ *Thijs Vandenbroucke, Poul Emsbo, Axel Munnecke, Nicolas Nuns, Ludovic Duponchel, Kevin Lepot, Melesio Quijada, Florentin Paris, Thomas Servais, and Wolfgang Kiessling: Metal-Induced Malformations in Early Palaeozoic Plankton are Harbingers of Mass Extinction. Nature Communications, DOI: http://dx.doi.org/10.1038/ncomms8966. FAU Press Office
Learning is the process of acquiring new knowledge. A learning style is an individual’s natural pattern of acquiring new knowledge. Different people have different learning styles. Some learn by reading, some by hearing, some by writing, while many others do so by watching. Learning may occur as a part of education, personal development, schooling or training. There are many styles of learning which can be adopted to make students comfortable while learning; if a student learns effectively through a specific learning style, we can call it their own natural learning style. In the past, we didn’t have many learning styles and were simply forced to learn with the traditional learning styles employed by our schools. But nowadays, technology is a boon to students as it helps them learn the way they like. It can help students develop their own learning styles even at a very young age. Let’s learn about such learning styles and how technology helps students in adopting them. There are many categories of learning styles. Here we’re focusing on a specific categorization which is very common and widely used. According to this, learning styles are of three types: visual learning, auditory learning and kinesthetic learning. Let’s go briefly through these learning styles one by one. Visual learners prefer to see the content to understand it. Picture-based learning, video-based learning and many more come under this category. It’s a learning style in which ideas, thoughts, concepts, processes and other information are represented and associated with images, graphs, charts and videos. Anyone can remember things more easily if they visualize them; this way it’s easy to focus on the meaning, reorganization and grouping of similar content. The visual learning style increases visual memory and helps in recalling information better. Below, we provide you with information on a few visual learning tools & resources. Pinterest is a pin-board style photo sharing web platform. This site allows users to create and manage theme-based image collections such as events, interests and hobbies. Pinterest has numerous categories like Art, Design, Humor, Education, Science and Nature, Technology, Photography, Products, Quotes, Travel, Fashion and many more themes. For people who focus more on images and graphics instead of words, Pinterest turns out to be a great visual learning tool. For more information on Pinterest, you may refer to our previous articles “Pinterest: A Great Visual Learning Tool” & “How Can Educators Use Pinterest?” VariQuest provides many visual learning tools such as the Poster Maker, Perfecta, Cutout Maker, Awards Maker, Cold Laminator, Design Center and VariQuest Software - featuring thousands of curriculum-aligned templates, eDies and graphics designed specifically for schools. These tools provide students and teachers with the ability to quickly and easily create visual supports that help differentiate instruction and personalize learning for all students. Dragonfly visual learning: Dragonfly visual learning offers visual-spatial learners an alternative to the regular learning materials available in schools. Often, the regular learning materials for visual-spatial learners are insufficient. They need a more visual explanation and a bigger picture so they can understand better where all the pieces of learning material belong within the big picture. It covers several school subjects and is compatible with the iPad. This app requires iOS 4.0 or later. 
History of this word: "pyelo" comes from "puelos" (basin or trough), spoken by the people of Greece starting about 1000 B.C. It is a prefix added to the start of a word, indicating that "kidney" or "pelvis" modifies the word. It was created to expand meanings and can be combined with many words to form new ones.
Examples of how the word is used:
- The pyelitis was always right-sided with tenderness.
- An intravenous pyelogram is an x-ray examination of the kidneys, ureters and urinary bladder.
It has become increasingly important for students to be aware of different cultures and societies. For students, international education means learning about the history, geography, literature and arts of other countries, and about the importance of learning a second language. “With the events surrounding the tragedies of September 11, I believe it is more important than ever to learn about other nations and their customs,” State Schools Superintendent Dr. David Stewart said. “During this week, students can gain a better understanding of the world in which they live and come away with a more tolerant outlook toward different cultures.” Some suggested activities for International Week include inviting an international guest speaker to address any number of topics, from differences in education systems to holiday celebrations. Another activity would be to facilitate a classroom-to-classroom connection with another country. Today’s technology allows computer users to communicate with friends from around the world almost instantaneously, with little more than a keystroke. The Teacher’s Guide to International Collaboration on the Internet is available at http://www.ed.gov/technology/guide/international/. Also included as a suggestion is an internationally themed essay contest: students can select an international topic and compose essays, and the schools could publish the contest winners in a school-wide newspaper. Students who are non-native English speakers could enter the National Association for Bilingual Education’s annual Nationwide Writing Contest for Bilingual Students. The WVDE encourages all schools to provide it with a brief summary of their activities. Anyone with questions or comments should direct them to Amelia Davis Courts at [email protected] or Debbie Harki at [email protected].
Novocain, also known as procaine, is a local anesthetic that prevents nerve cells from communicating with one another, thus producing a numbing sensation, according to the University of California Santa Barbara. By inhibiting nervous system communication, Novocain prevents the brain from receiving tactile sensory information. In order for a person to feel pain or any other tactile sensation, neurons within the nervous system must transmit chemical information to one another through biological chemicals known as neurotransmitters, which act within the synapses, the spaces between neurons. Novocain blocks nerve impulse activity by causing dysfunction within the ion channels and nerve cell membranes, according to Scientific American. Because Novocain is a local anesthetic, it only numbs areas in close proximity to where it is applied in the body. Depression of nervous system activity and prevention of nerve impulse activity are Novocain's primary effects on the body; however, animal studies show that Novocain also increases levels of the neurotransmitters serotonin and dopamine within the brain. Local anesthetics such as Novocain differ from general anesthetics in that they do not cause a complete lack of sensation within the body and do not cause a loss of awareness. Novocain was once commonly used in dentistry in the United States, but as of 2014 its manufacture has been discontinued because more modern, safer and more effective local anesthetics have been synthesized.
Big Blasts: History's 10 Most Destructive Volcanoes

Ready to Blow. Residents of volcanically active areas, whether prehistoric creatures or modern humans, haven't always had enough warning to escape before a nearby volcano blew its top, sometimes virtually destroying everything for many miles around. Here are some of the biggest, most destructive volcanic eruptions on Earth, from a series of colossal and sizzling outbursts that occurred about the same time the dinosaurs went extinct to more recent explosive events, such as when Mount St. Helens shot a column of dust 15 miles high in 1980. And the countdown wouldn't be complete without the Yellowstone supervolcano's enormous eruption some 640,000 years ago (#9 on this list).

Deccan Traps (60 million years ago). The Deccan Traps are a set of lava beds in the Deccan Plateau region of what is now India that cover an area of about 580,000 square miles (1.5 million square kilometers), or more than twice the area of Texas. The lava beds were laid down in a series of colossal volcanic eruptions that occurred between 63 million and 67 million years ago. The timing of the eruptions roughly coincides with the disappearance of the dinosaurs in the so-called K-T mass extinction, the shorthand given to the Cretaceous-Tertiary extinction. Evidence for a volcanic extinction of the dinosaurs has mounted, though many scientists still support the idea that an asteroid impact did in the dinosaurs. An idea put forth in the April 30, 2015 issue of the journal Geological Society of America Bulletin suggests that the meteor impact that created the Chicxulub crater may actually have kicked the Deccan Traps eruptions into high gear. (Pictured: an aerial photo of the Lonar Crater in India, which sits within the Deccan Plateau, the massive plain of volcanic basalt rock left over from the eruptions.)

Yellowstone Supervolcano (640,000 years ago). The history of what is now Yellowstone National Park is marked by many enormous eruptions, the most recent of which occurred about 640,000 years ago, according to the United States Geological Survey. When this gigantic supervolcano erupted, it sent about 250 cubic miles (1,000 cubic kilometers) of material into the air. The eruptions have left behind hardened lava fields and calderas, depressions that form in the ground when material below is erupted to the surface. The magma chambers thought to underlie the Yellowstone hotspot also provide the park with one of its enduring symbols, its geysers, as the water is heated by the hot magma that flows underneath the ground. Until 2016, geologists didn't know for certain the number of eruptions in Idaho and the surrounding states that predate Yellowstone's supervolcano. Research reported on Feb. 10, 2016, in the journal Geological Society of America Bulletin now suggests that up to 12 huge volcanic blasts occurred between 8 million and 12 million years ago in Idaho's Snake River Plain, leading up to today's supervolcano. Some researchers have predicted that the supervolcano will blow its top again, an event that one study predicts would cover up to half the country in ash up to 3 feet (1 meter) deep. The volcano only seems to go off about once every 600,000 years, though whether it will ever happen again isn't known for sure. In more recent years, tremors have been recorded in the Yellowstone area.

Santorini Island (1645 B.C. to 1500 B.C.). While the date of the eruption isn't known with certainty, geologists think that Thera exploded with the energy of several hundred atomic bombs in a fraction of a second, sometime between 1645 B.C. and 1500 B.C. Though there are no written records of the eruption, geologists think it could be the strongest explosion ever witnessed. The island that hosted the volcano, Santorini (part of an archipelago of volcanic islands) in the Aegean Sea, had been home to members of the seafaring Minoan civilization, though there are some indications that the inhabitants of the island suspected the volcano was going to blow its top and evacuated. But even though those residents might have escaped, there is cause to speculate that the volcano severely disrupted the culture, with the massive amounts of sulfur dioxide it spewed into the atmosphere altering the climate and leading to temperature declines. Geologists speculate that tsunamis also resulted from the eruption. In fact, some say the cataclysmic eruption may have inspired the legend of the lost city of Atlantis. In January 2011, the mostly underwater volcano awakened, as evidenced by small tremors of about magnitude 3.2, researchers reported. (Pictured: the volcanic island of Santorini today.)

Mount Vesuvius (A.D. 79). Mount Vesuvius is a so-called stratovolcano that lies to the east of what is now Naples, Italy. Stratovolcanoes are tall, steep, conical structures that periodically erupt explosively and are commonly found where one of Earth's plates is subducting below another, producing magma along a particular zone. Vesuvius' most famous eruption buried the Roman towns of Pompeii and Herculaneum in rock and dust in A.D. 79, killing thousands. The ashfall preserved some structures of the town, as well as skeletons and artifacts that have helped archaeologists better understand ancient Roman culture. Vesuvius is also considered by some to be the most dangerous volcano in the world today, as a massive eruption would threaten more than 3 million people who live in the area. The volcano last erupted in 1944. [Preserved Pompeii: Photos Reveal City of Ash]

Laki, Iceland (1783). Iceland's history is dotted with volcanic eruptions. One notable blast, the eruption of the Laki volcano in 1783, released trapped volcanic gases that were carried by the Gulf Stream over to Europe. In the British Isles, many died of gas poisoning from the release. The volcanic material sent into the air also created fiery sunsets recorded by 18th-century painters. Extensive crop damage and livestock losses created a famine in Iceland that resulted in the deaths of one-fifth of the population, according to the Smithsonian Institution's Global Volcanism Program. The eruption, like many others, also influenced the world's climate, as the particles it sent into the atmosphere blocked some of the sun's incoming rays. In fact, the Laki eruption was blamed for the cold, harsh weather during the following winter. But research published online on March 15, 2011, in the journal Geophysical Research Letters suggested another culprit: an unusual combination of climate phenomena, including the negative phase of the North Atlantic Oscillation, may be to blame. (Shown here: modern-day Laki.)

Mount Tambora (1815). The explosion of Mount Tambora is the largest ever recorded by humans, ranking a 7 (or "super-colossal") on the Volcanic Explosivity Index, the second-highest rating in the index. The volcano, which is still active, is located on Sumbawa Island and is one of the tallest peaks in the Indonesian archipelago. The eruption reached its peak in April 1815, when it exploded so loudly that it was heard on Sumatra Island, more than 1,200 miles (1,930 km) away. The death toll from the eruption was estimated at 71,000 people, and clouds of heavy ash descended on many faraway islands. The huge caldera formed by Tambora's eruption, photographed in 2009, is 3.7 miles (6 km) in diameter and 3,609 feet (1,100 meters) deep. [200 Years After Tambora, Indonesia Most at Risk of Deadly Volcanic Blast]

Krakatoa (1883). The rumblings that preceded the final eruption of Krakatoa (also spelled Krakatau) in the weeks and months of the summer of 1883 finally climaxed in a massive explosion on August 26–27. The explosive eruption of this stratovolcano, situated along a volcanic island arc at the subduction zone of the Indo-Australian plate, ejected huge amounts of rock, ash and pumice and was heard thousands of miles away. The explosion also created a tsunami whose maximum wave heights reached 140 feet (40 meters) and which killed about 34,000 people. Tidal gauges more than 7,000 miles (about 11,000 km) away on the Arabian Peninsula even registered the increase in wave heights. While the island that once hosted Krakatoa was completely destroyed in the eruption, new eruptions beginning in December 1927 built the Anak Krakatau ("Child of Krakatau") cone in the center of the caldera that had been produced by the 1883 eruption.

Novarupta (1912). The eruption of Novarupta, one of a chain of volcanoes on the Alaska Peninsula and part of the Pacific Ring of Fire, was the largest volcanic blast of the 20th century. The powerful eruption sent 3 cubic miles (12.5 cubic km) of magma and ash into the air, all of which fell to cover an area of 3,000 square miles (7,800 square km) more than a foot deep. The blast was so powerful that it drained magma from under another volcano, Mount Katmai, 6 miles to the east, causing the summit of Katmai to collapse and form a caldera half a mile deep. (Pictured: a glacier sitting on Novarupta.) To learn more about the source of the Novarupta eruption, scientists have since installed a network of seismometers around the Katmai volcanoes.

Mount St. Helens (1980). Mount St. Helens, located about 96 miles (154 km) from Seattle, is one of the most active volcanoes in the United States. Its best-known eruption was the May 18, 1980 blast that killed 57 people and caused damage for tens of miles around. Over the course of the day, prevailing winds blew 520 million tons of ash eastward across the United States and caused complete darkness in Spokane, Washington, 250 miles from the volcano. The stratovolcano blasted a column of ash and dust 15 miles (24 km) into the air in just 15 minutes; some of this ash was later deposited on the ground in 11 states. The eruption was preceded by a magma bulge on the north face of the volcano, and the eruption caused that entire face to slide away, the largest landslide on Earth in recorded history. In 2004, the peak came back to life and spewed out more than 26 billion gallons (100 million cubic meters) of lava, along with tons of rock and ash. Though not near an eruption, Mount St. Helens began to recharge in the spring of 2014, with the rise of new magma causing the volcano to heave upward and outward by a smidge, seismologists said. [Gallery: The Incredible Eruption of Mount St. Helens]

Mount Pinatubo (1991). Yet another stratovolcano located in a chain of volcanoes created at a subduction zone, Pinatubo produced a classic explosive eruption in its cataclysmic 1991 blast. The eruption ejected more than 1 cubic mile (5 cubic kilometers) of material into the air and created a column of ash that rose 22 miles (35 km). Ash fell across the countryside, piling up so much that some roofs collapsed under the weight. The blast also spewed millions of tons of sulfur dioxide and other particles into the air, which were spread around the world by air currents and caused global temperatures to drop by about 1 degree Fahrenheit (0.5 degree Celsius) over the course of the following year. [Photos: The Colossal Eruption of Mount Pinatubo]
Since its establishment in 1935, the United States’ Child Welfare Services system has worked to promote the wellbeing and safety of children. It is a complex system, which can vary quite a bit from state to state, but in general it works to fulfill a few specific goals: to investigate reports of possible child abuse and neglect; to provide services to families that need help protecting or caring for their children; to arrange for children to live with family members or with a foster family when they are not safe at home; and to arrange for reunification, adoption, or other permanent family placements for children leaving foster care. With such lofty goals, it comes as no surprise that many child welfare systems in the U.S. were falling short, much to the alarm of advocacy groups. With significant lobbying, these groups were able to help implement changes in recent years, and most states have worked diligently to improve their child welfare procedures to better serve the parents and children within their communities. Despite this progress, one area of the child welfare system remains largely unchanged: issues with racial bias against families of color.

Different Types of Bias, Contributing Factors

Many studies have been conducted on racial inequities within the child welfare system, and a significant portion of this research has documented the overrepresentation of Black children within the system, especially when compared with the share of the general U.S. population they represent: 12.3% of the entire population is Black, and yet Black children account for upwards of 40% of foster care cases. These disparities have been shown to occur across all decision points within the child welfare system, from opening a case to designating a child available for adoption. In comparison to White children, Black children are one and a half times more likely to be identified as victims by the child welfare system, and twice as likely to be put into foster care. Many factors are thought to contribute to the overrepresentation of Black children in the child welfare system. One of the most prevalent is referred to as exposure bias. This term refers to the increased exposure that families affected by poverty have to social service systems, including many avenues of state financial or housing assistance. Applying for these services requires a certain amount of scrutiny of a family, which can then lead to involvement with the child welfare system. As Black families historically deal with higher levels of poverty, exposure bias affects them significantly more than White families. In both the child welfare system and the world at large, racial bias can present itself in many different forms. According to The Center for Racial Justice Innovation, racial bias falls under four categories:
- Internalized: private beliefs and biases about race and racism held within individuals, influenced by our culture.
- Interpersonal: occurs when individuals interact with others and their private racial beliefs affect their public interactions.
- Institutional: the unfair policies and discriminatory practices of institutions, such as schools, workplaces, the criminal justice system, and the child welfare system, that routinely produce unequal outcomes for minorities.
- Structural: the cumulative effects of history, interactions, and policies that systematically privilege White people and disadvantage people of color.

All of these forms of racial bias can play a role in the kind of experience a Black family has with the child welfare system. In fact, two studies were conducted in Texas to ascertain the extent to which race played a role in child welfare system outcomes. All families in the system are assessed a risk score, which affects the level of care a family receives and whether or not a child is removed from the home. The studies found that while Black families on average tended to be assessed with lower risk scores than White families, they were still 15 percent more likely than White families to have substantiated cases of maltreatment, 20 percent more likely to have their case opened for services, and 77 percent more likely to have their children removed instead of being provided with family-based safety services. Not all forms of racial disparity are overt. Caseworkers may not be aware that they are letting biases affect their decision making when it comes to the Black families they are assigned to. But the persisting disparity between the experiences of White and Black families involved in child welfare indicates that problems are still very much rampant in the system. Jessica Pryce, the director of The Florida Institute for Child Welfare at Florida State University and an advocate for fair practices in the child welfare system, had this to say on the issue: “Continuing to address racial disparity and the subsequent disproportionality in the child welfare system is necessary, because it exists. Minority families have disparate outcomes in the child welfare system, which negatively affects the family and, ultimately, society.”
Cocoliztli. This is what the Aztecs called the outbreaks that severely struck Mexico from 1545 to 1550, and again in 1576. According to estimates, 7 million to 17 million people succumbed to these two epidemic waves, which likely contributed to the demise of the Aztec empire. “But the identification of the pathogen responsible for this carnage has been particularly challenging for scientists, because infectious diseases leave few traces in the archaeological record,” notes NPR. Analysis of DNA extracted from the teeth of ten people who died during the first wave has now allowed researchers to identify a culprit: a type of salmonella, Salmonella enterica serotype Paratyphi C, which causes a life-threatening fever.

An algorithm to the rescue. The study, conducted by an international team led by Johannes Krause, a geneticist at the Max Planck Institute in Germany, was published on 15 January in the online journal Nature Ecology & Evolution. According to the researchers, a new algorithm dubbed MALT was a great help, because it made it possible to analyse very small fragments of DNA and compare them to a database containing the genomes of all known pathogens. “It is possible, however, that some pathogens are undetectable or completely unknown,” The Guardian points out. “We can’t say with certainty that S. enterica was the cause of the cocoliztli epidemic,” Kirsten Bos, one of the authors of the study, told the British daily, adding: “We believe that it should be considered a serious candidate.” On the other hand, NPR notes, “the study does not identify the source of the bacteria,” a question that fascinates both biologists and archaeologists. Knowing its geographical origin would ultimately reveal whether the bacterium was imported by the Europeans or whether it is a Mexican strain that proliferated during severe droughts, as a study published in 2002 suggested.
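To make the idea of screening ancient DNA fragments against a database of pathogen genomes more concrete, here is a deliberately simplified Python sketch. The sequences and the second reference organism are invented placeholders, and real tools such as MALT use full genome databases and alignment scoring rather than exact substring matching, so treat this only as an illustration of the matching idea.

```python
# Toy illustration of screening short ancient-DNA reads against reference
# genomes. The sequences below are invented placeholders; real pipelines
# use full genome databases and alignment statistics, not exact matching.

references = {
    "Salmonella enterica Paratyphi C": "ATGGCTAGCTTACGGATCCGTTAGCA",
    "Unrelated soil bacterium":        "TTGACCGGTAACGTATCGGAACCTGA",
}

ancient_reads = ["GCTAGCTTAC", "CGGATCCGTT", "AACGTATCGG"]

def count_matches(reads, genome):
    """Count how many reads occur verbatim somewhere in the genome."""
    return sum(1 for read in reads if read in genome)

for name, genome in references.items():
    hits = count_matches(ancient_reads, genome)
    print(f"{name}: {hits} of {len(ancient_reads)} reads matched")
```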
- The definition of none is an archaic way of saying "not any." An example of none used as an adjective is in the phrase "Thou shalt go with none other men but your husband," which means no male friendships for wives except with their husbands.
- None is defined as not at all. An example of none used as an adverb is in the phrase "none happy," which means not happy at all.
- None means not one or any. An example of none used as a pronoun is in the sentence, "None of them were ready to eat," which means that no one was ready to eat.
- not one: none of the books is interesting
- no one; not anyone: none of us is ready
- no persons or things; not any: many letters were received but none were answered
- not any (of); no part; nothing: I want none of it, none of the money is left

Origin of none: Middle English, from Old English nan, from ne, not (see no) + an, one.

none other than

Origin of none: Old English non: see noon.

- No one; not one; nobody: None dared to do it.
- Not any: None of my classmates survived the war.
- No part; not any: none of your business.
- Not at all: He is none too ill.
- In no way: The jeans looked none the better for having been washed.

Origin of none: Middle English, from Old English nān: ne, no, not (see ne in Indo-European roots) + ān, one (see oi-no- in Indo-European roots).

Usage Note: It is widely asserted that none is equivalent to no one, and hence requires a singular verb and singular pronoun: None of the prisoners was given his soup. It is true that none is etymologically derived from the Old English word ān, "one," but the word has been used as both a singular and a plural since the ninth century. The plural usage appears in the King James Bible ("All the drinking vessels of king Solomon were of gold … none were of silver") as well as in the works of canonical writers like Shakespeare, John Dryden, and Edmund Burke. It is widespread in the works of respectable writers today. Of course, the singular usage is perfectly acceptable and often preferred by copyeditors. Choosing between singular and plural is thus more of a stylistic matter than a grammatical one. Both options are acceptable in this sentence: None of the conspirators has (or have) been brought to trial. When none is modified by almost, however, it is difficult to avoid treating the word as a plural: Almost none of the officials were (not was) interviewed by the committee. None must be treated as plural in its use in sentences such as None but his most loyal supporters believe (not believes) his story. See Usage Notes at every, neither, nothing.

Although uncountable nouns require none to be conjugated with a singular verb, e.g., None of this meat tastes right, the pronoun can be either singular or plural in most other cases, e.g., Fifty people applied for the position, but none were accepted, and None was qualified. However, where the given or implied context is clearly singular or plural, a matching verb makes better sense:
- None of these men is my father.
- None of those options is the best one.
- None of these people are my parents.

- (now archaic except Scotland) Not any; no.
- To no extent, in no way. [from 11th c.]: I felt none the worse for my recent illness. He was none too pleased with the delays in the program that was supposed to be his legacy.
- Not at all. [from 13th c.]: Now don't you worry none.
- A person without religious affiliation.

From Middle English none, noon, non ("not one"), from Old English nān ("not one, not any, none"), from ne ("not") + ān ("one"). Cognate with Scots nane ("none"), West Frisian neen and gjin ("no, none"), Dutch neen and geen ("no, none"), Low German nēn, neen ("none, no one"), German nein and kein ("no, none"), Latin nōn ("not").

- Obsolete form of no one.
Knowledge of shapes is one of the earliest educational processes that young kids are exposed to. Preschoolers use visual information about shapes to discriminate between objects and to learn about the world around them. This week the Early Childhood Education Team is offering playful learning suggestions for helping young kids learn about SHAPES. The playdough shape-building challenge below will not only help preschoolers observe and compare various shapes, but will also challenge them to integrate basic shape knowledge into a hands-on activity that encourages critical thinking skills.

Playdough Shape Building Challenge

Objectives: To encourage kids to use critical thinking skills and individual creativity to construct basic shapes with playdough.

Skills presented in the building challenge:
- Fine Motor
- Visual Measurement
- Number concepts for creation of the shapes
- Relationships between the shapes
- Planning and design that exhibits early attempts at engineering
- Sensory (Tactile and Visual)
- Connections with familiar and new shape knowledge
- Creative Expression
- Inquiry and Problem solving

BUILDING INQUIRY: What basic shapes can the children create using ONLY playdough? The shape-building challenge can be modified for kids of various ages through the addition of more complex shapes, polygons, cylinders/cones/spheres, or by creating towers from basic shapes. Rolling, squeezing, squishing, and molding playdough into various shapes is a FUN way to offer opportunities for the development of fine motor skills and knowledge of shapes through inquiry and solution-based learning!

Introduction for students: Review the basic shapes the children have been introduced to or are learning: circles, triangles, squares, and rectangles are good shapes to begin with. Challenge the children to use playdough only to create any shape(s) they know. The children will ask questions about how to make the shapes. Try to guide the children with open-ended questions that will allow them to think about the shape they wish to create and make a plan for the design with only the playdough. It is exciting to observe kids engaged in planning, building, making visual “measurements”, and formulating ideas about various shape designs! In the photo above, kids chose to create an 8-sided octagon and a 3-sided triangle from rolled pieces of playdough.

CREATIVE BUILDING STRATEGY EXAMPLES: To create circles, the kids tried various designs. Some of the children rolled a long playdough worm and then attached the ends. Other children chose to roll small balls and flatten them with their hands. One group “pieced together” a large circle from other small playdough balls. It’s amazing to observe young children thinking and creating. This challenge was FUN for the kids, but they also gained some important problem solving skills that they will carry with them as they grow and learn about shapes!
For even MORE ways to play with SHAPES in Early Childhood, please visit the activity suggestions from the #TeachECE team below: Shiny Showy Shapes Alliteration Fun by Growing Book by Book Roll and Cover The Shapes Alphabet Activity by Mom Inspired Life Shapes Preschool Theme Sand Writing Tray by Learning 2 Walk Mixing Shapes with Our Bodies – Group Activity by Capri + 3 Shape Sensory Squish Bag by Still Playing School Shape I Spy for Preschoolers: Free Printable by Life Over C’s Exploring Shapes with Yarn by Tiny Tots Adventures Playdough Shapes Building Challenge for Preschoolers by The Preschool Toolbox Blog DIY Shapes Puzzle by Munchkins and Moms Preschool Shape Hunt Activities by Fun-A-Day
The social cognition learning model asserts that culture is the prime determinant of individual development. Humans are the only species to have created culture, and every human child develops in the context of a culture. Therefore, a child’s learning development is affected in ways large and small by the culture, including the culture of the family environment, in which he or she is enmeshed.

1. Culture makes two sorts of contributions to a child’s intellectual development. First, through culture children acquire much of the content of their thinking, that is, their knowledge. Second, the surrounding culture provides a child with the processes or means of their thinking, what Vygotskians call the tools of intellectual adaptation. In short, according to the social cognition learning model, culture teaches children both what to think and how to think.
2. Cognitive development results from a dialectical process whereby a child learns through problem-solving experiences shared with someone else, usually a parent or teacher but sometimes a sibling or peer.
3. Initially, the person interacting with the child assumes most of the responsibility for guiding the problem solving, but gradually this responsibility transfers to the child.
4. Language is a primary form of interaction through which adults transmit to the child the rich body of knowledge that exists in the culture.
5. As learning progresses, the child’s own language comes to serve as her primary tool of intellectual adaptation. Eventually, children can use internal language to direct their own behavior.
6. Internalization refers to the process of learning, and thereby internalizing, a rich body of knowledge and tools of thought that first exist outside the child. This happens primarily through language.
7. A difference exists between what a child can do on her own and what the child can do with help. Vygotskians call this difference the zone of proximal development.
8. Since much of what a child learns comes from the culture around her and much of the child’s problem solving is mediated through an adult’s help, it is wrong to focus on a child in isolation. Such a focus does not reveal the processes by which children acquire new skills.
9. Interactions with the surrounding culture and social agents, such as parents and more competent peers, contribute significantly to a child’s intellectual development.

How Lev Vygotsky Impacts Learning:

Curriculum: Since children learn much through interaction, curricula should be designed to emphasize interaction between learners and learning tasks.

Instruction: With appropriate adult help, children can often perform tasks that they are incapable of completing on their own. With this in mind, scaffolding, where the adult continually adjusts the level of his or her help in response to the child’s level of performance, is an effective form of teaching. Scaffolding not only produces immediate results, but also instills the skills necessary for independent problem solving in the future.

Assessment: Assessment methods must take into account the zone of proximal development. What children can do on their own is their level of actual development, and what they can do with help is their level of potential development. Two children might have the same level of actual development, but given appropriate help from an adult, one might be able to solve many more problems than the other. Assessment methods must target both the level of actual development and the level of potential development.

Vygotsky, L. S. (1962). Thought and language. Cambridge, MA: MIT Press. (Original work published 1934)

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.

A paper by James Wertsch and Michael Cole titled “The role of culture in Vygotskyean-informed psychology” gives an accessible overview of the main thrust of Lev Vygotsky’s general developmental framework and offers a contrast to the Piagetian approach.

An introduction to some of the basic concepts of Vygotskian theory (culturally mediated identity) by Trish Nicholl.

A site for Cultural-Historical Psychology that provides a periodically updated listing of Vygotskian and related resources available on the Web.

A 1997 paper by P. E. Doolittle titled “Vygotsky’s zone of proximal development as a theoretical foundation for cooperative learning,” published in the Journal on Excellence in College Teaching, 8(1), 83-103.
It may not be Jurassic Park, but it’s certainly a step in that direction. Scientists from the University of Chile have successfully created chicken embryos with a unique feature: dinosaur legs. Don’t imagine chickens stomping around with giant Brontosaur legs, though; the changes are all internal. The feature in question is the fibula, one of two bones inside the leg. In normal birds, the fibula stops about halfway down the leg, connected at the top but not the bottom, leaving it just hanging in space. Bone Stops Growing: When chicken embryos form, they have a full fibula, but as they grow, the fibula stops growing along with them. Instead, it separates from the ankle where it was first attached and grows thinner and more spindle-like. Dinosaurs, like humans, had full fibulas to help support their massive weight. It seems that, at some point in their evolution, birds stopped needing the extra bone. By modifying one gene, however, the researchers were able to undo the change. When they inhibited the expression of the Indian Hedgehog (IHH) gene, all the chicken embryos kept their fibulas throughout their development. They believe it has something to do with a bone in the ankle called the calcaneum, which normally connects to the fibula. When the IHH gene was suppressed, the calcaneum began to express the parathyroid hormone-related protein (PTHrP) gene, which promotes bone growth. With the gene modification, the chickens’ fibulas kept growing throughout their development, as opposed to stopping halfway. The researchers discovered that, in normal birds, the cartilaginous growth plates at the end of their fibulas disappear shortly after they form, effectively halting bone growth. But with one small modification, millions of years of evolution can be undone. They published their work in the journal Evolution. Genetic experiments of this kind have goals far more scientific than the creation of a theme park filled with once-extinct reptiles. By selectively altering specific genes in birds, researchers can essentially de-evolve them to see how they attained their current forms over millions of years. The same team of researchers succeeded in creating chickens with dinosaur-like feet last year, and a separate team in the U.S. led by famed paleontologist Jack Horner grew dinosaur beaks on chickens. All three experiments have a similar goal: to find the genetic roots of avian evolution. The chickens grown in the lab did not hatch, but matching a key feature of the evolution of birds from dinosaurs to a single gene highlights the fact that echoes of the dinosaurs remain in the DNA of birds today. Whether we choose to use that knowledge for good or evil remains up to us.
Control Systems/Nervous System

The nervous system functions by the almost instantaneous transmission of electrochemical signals. Highly specialized cells called neurons carry out this transmission; they are also the functional units of the nervous system. The neuron is an elongated cell with three parts: dendrites, the cell body, and the axon. The typical neuron contains many dendrites, which look like thin branches extending from the cell body. The axon is a single long projection that extends from the cell body and usually ends in a few small branches called axon terminals. Neurons are usually connected in chains and networks. They lie physically close together, yet never actually come in contact with one another. The gap that separates the axon terminals of one neuron from the dendrites of another neuron is called the synapse. When an electrical impulse moves through the neuron, it starts at the dendrites. From there, it passes through the cell body and along the axon. Impulses always follow the same path: from the dendrites to the cell body, and then along the axon. When the electrical impulse reaches the synapse at the end of the axon, special chemicals called neurotransmitters are released. The neurotransmitters carry the signal across the synapse to the dendrites of the next neuron to restart the process in the next cell.

Resting potential: When there is no impulse traveling through a neuron, the cell is said to be at its resting potential. The inside of the cell carries a negative charge relative to the outside, and the cell requires energy to maintain this charge. The cell membrane of the neuron contains a protein called the Na+/K+ ATPase that uses the energy provided by one molecule of ATP to pump 3 positively charged sodium ions out of the cell while simultaneously bringing 2 positively charged potassium ions into the cell. Thus, there is a high concentration of sodium ions outside the cell and an excess of potassium ions inside the cell. A potassium leak channel allows some of the potassium ions to flow back out of the cell. The difference in concentrations creates a net potential difference across the cell membrane of about -70 mV, which is the value of the resting potential.

Action potential: The action potential is the electrochemical impulse that travels along the neuron. The neuron membrane also contains voltage-gated channel proteins, which respond to changes in the membrane potential by opening and allowing certain ions to cross that would normally not be allowed to do so. The neuron has both voltage-gated sodium channels and voltage-gated potassium channels, and each opens under different circumstances. The action potential begins when another neuron sends chemical signals that depolarize (make less negative) the membrane potential in one localized area of the cell membrane, usually in the dendrites. When the neuron is stimulated so that the membrane potential reaches -50 mV, the voltage-gated sodium channels in that region open. The voltage at which these channels open is called the threshold potential (in this case, -50 mV). When the voltage-gated channels open, sodium ions outside the cell follow the concentration gradient and rush into the cell. The flood of sodium ions depolarizes the cell, and the membrane potential eventually rises to +35 mV. At this point, the voltage-gated sodium channels close, and the voltage-gated potassium channels reach their threshold and open. The positive potassium ions concentrated in the cell now rush out of the neuron to repolarize the cell membrane toward its negative resting potential. The membrane potential drops to -90 mV, and the voltage-gated potassium channels close. After this occurs, the potassium leak channels restore the membrane to its original state, with a potential of -70 mV. The whole process takes about one millisecond.

VERTEBRATE NERVOUS SYSTEM

The vertebrate nervous system can be divided into two main parts: the central nervous system (CNS) and the peripheral nervous system (PNS). The central nervous system acts as a central command that receives sensory input from all regions of the body and integrates the information to create a response. It controls most of the basic functions needed for survival, such as breathing, digestion, and consciousness. The peripheral nervous system, on the other hand, refers to the pathways through which the central nervous system communicates with the rest of the organism. In highly evolved systems, such as the human nervous system, there are three types of neural building blocks: sensory neurons, motor neurons, and interneurons. Sensory neurons send information to the central nervous system after the organism's sense organs receive a stimulus from the environment; another name for these neurons is afferent neurons. Motor neurons carry information away from the central nervous system to an organ or muscle as a response to a stimulus or a voluntary action; another name for these neurons is efferent neurons. Interneurons provide the connections between sensory neurons and motor neurons.

CENTRAL NERVOUS SYSTEM

It consists of the brain and the spinal cord. The spinal cord is a long cylindrical cord that extends along the vertebral column (the backbone) from the head to the lower back. The brain is made up almost entirely of interneurons. The cerebrum is the largest portion of the brain and controls consciousness. It controls all voluntary movement, sensory perception, speech, memory, and creative thought. The cerebellum helps to fine-tune voluntary movement, but does not initiate it; it makes sure that movements are coordinated and balanced. The brainstem is responsible for the control of involuntary functions, such as breathing, cardiovascular regulation, and swallowing. It includes the medulla oblongata, which is essential for life and processes a great deal of information; the medulla also helps maintain alertness. The hypothalamus is responsible for the maintenance of homeostasis: it regulates temperature, controls hunger, manages water balance and helps to generate emotion. The spinal cord contains the axons of motor neurons, interneurons, and glial cells. Axons of motor neurons extend from the spinal column into the peripheral nervous system. Interneurons link motor and sensory neurons. Glial cells provide physical and metabolic support for neurons. The spinal cord serves as a link between the body and the brain and can regulate simple reflexes on its own.

THE PERIPHERAL NERVOUS SYSTEM

It consists of a sensory system that carries information from the senses of the body into the central nervous system, and a motor system that branches out from the CNS to organs or muscles. The motor system can be divided into two parts: the somatic system and the autonomic system. The somatic nervous system is responsible for voluntary or conscious movement; its neurons target only the skeletal muscles needed for bodily movement. All of the neurons in the somatic system release acetylcholine, an excitatory neurotransmitter that causes skeletal muscles to contract. The autonomic nervous system controls tissues other than skeletal muscle. It controls processes that an animal does not have voluntary control over, such as the heartbeat, movement of the digestive tract, and contraction of the bladder. It can be subdivided into the sympathetic division and the parasympathetic division. The sympathetic division works to prepare the body for emergency situations: it increases heart rate, dilates the pupils, and increases breathing rate, and it also stimulates the medulla of the adrenal glands to release epinephrine and norepinephrine into the bloodstream. Together, these changes create the "fight or flight" response. The parasympathetic division is most active when the body is at rest: it slows the heart rate, increases digestion, and slows breathing, creating the "rest and digest" response.
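As a way of tying the membrane-potential figures above together, the following Python sketch walks through the all-or-nothing behaviour just described. The voltages (-70, -50, +35 and -90 mV) come from the text; the `respond` function and its returned sequence are invented for illustration and are not a biophysical model.

```python
# Schematic walkthrough of the action potential phases described above.
# The voltages are taken from the text; the sequence returned is purely
# illustrative and is not a biophysical simulation.

RESTING = -70      # mV, maintained by the Na+/K+ pump and leak channels
THRESHOLD = -50    # mV, voltage-gated Na+ channels open at this value
PEAK = 35          # mV, Na+ channels close and K+ channels open
UNDERSHOOT = -90   # mV, K+ channels close, leak channels restore rest

def respond(stimulus_mv):
    """Return the membrane potentials visited after a stimulus that
    depolarizes the membrane by stimulus_mv from rest."""
    depolarized = RESTING + stimulus_mv
    if depolarized < THRESHOLD:
        # Sub-threshold stimulus: no spike, the membrane relaxes back.
        return [RESTING, depolarized, RESTING]
    # Supra-threshold stimulus: the full all-or-nothing spike.
    return [RESTING, THRESHOLD, PEAK, UNDERSHOOT, RESTING]

print(respond(10))  # [-70, -60, -70] -> no action potential
print(respond(25))  # [-70, -50, 35, -90, -70] -> full action potential
```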
The Watson-Crick Model of DNA (1953)

Deoxyribonucleic acid (DNA) is a double-stranded, helical molecule. It consists of two sugar-phosphate backbones on the outside, held together by hydrogen bonds between pairs of nitrogenous bases on the inside. The bases are of four types (A, C, G and T), and pairing always occurs between A and T, and between C and G. James Watson (1928- ) and Francis Crick (1916-2004) realized that these pairing rules meant that either strand contains all the information necessary to make a new copy of the entire molecule, and that the order of bases might provide a "genetic code". Watson and Crick shared the Nobel Prize in 1962 for their discovery, along with Maurice Wilkins (1916-2004), who had produced a large body of crystallographic data supporting the model. Working in the same lab, Rosalind Franklin (1920-1958) had earlier produced the first clear crystallographic evidence for a helical structure. Crick went on to do fundamental work in molecular biology and neurobiology. Watson became Director of the Cold Spring Harbor Laboratory and headed the Human Genome Project in the 1990s.
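Because pairing always matches A with T and C with G, either strand determines the other completely. The short Python sketch below illustrates that rule; the example sequence is arbitrary, and the function is only a toy that ignores strand direction and real-world chemistry.

```python
# Watson-Crick pairing rules: A pairs with T, C pairs with G, so one
# strand fully determines the other. The sequence is an arbitrary example.

PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(strand):
    """Return the base-paired partner of a DNA strand."""
    return "".join(PAIR[base] for base in strand.upper())

print(complementary_strand("ATCGGTA"))  # prints TAGCCAT
```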
rectifier, component of an electric circuit used to change alternating current to direct current. Rectifiers are made in various forms, all operating on the principle that current passes through them freely in one direction but only slightly or not at all in the opposite direction. One early type of rectifier was the diode electron tube. Semiconductor rectifiers are essentially diodes made large enough to safely dissipate the heat caused by current flow. For heavy currents, they are often equipped with cooling fins or heat sinks. Rectifiers are commonly used in power supplies for electronics. There are two kinds of mechanical rectifiers. One, for polyphase alternating current, is a rotating switch that is synchronized with the fluctuations of the alternating current. The other uses a synchronized vibrating reed to change single-phase alternating current into pulsating direct current. Both have been largely superseded by solid-state devices. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
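A rough way to picture what a rectifier does to a signal is the ideal-diode model: current passes freely in one direction and not at all in the other. The Python sketch below applies that rule to a sampled sine wave; it ignores the forward voltage drop and leakage of real diodes, so it is only a schematic illustration.

```python
# Ideal half-wave rectifier: current flows in one direction only, so the
# negative half-cycles of an AC input are blocked and a pulsating DC
# output remains. Real diodes also have a forward voltage drop and a
# small reverse leakage, which are ignored here.

import math

def half_wave_rectify(samples):
    """Pass positive samples unchanged and block negative ones."""
    return [max(0.0, v) for v in samples]

ac_input = [math.sin(2 * math.pi * t / 20) for t in range(40)]  # two cycles
dc_output = half_wave_rectify(ac_input)
print(all(v >= 0 for v in dc_output))  # True: only one polarity remains
```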
As can be deduced, the territorial distribution of the world population is quite unequal: there are areas with great population agglomerations and other areas that are practically depopulated. China is, at the national scale, the most populated country (nearly 1,350 million inhabitants), India is the second (about 150 million fewer) and the United States of America is the third (309 million inhabitants). In relative terms, population density is the indicator that relates the population to the area it occupies; the usual unit for measuring density is inhabitants/km2. However, population density by country often gives a false image of the population distribution on the Earth's surface, since some huge countries (such as China, Brazil, the United States and Canada) have extremely highly populated areas next to very sparsely populated ones. The most densely populated countries or regions are therefore very small, sometimes city-states: Macau, a special administrative region of China; Singapore; Hong Kong, another Chinese SAR; Monaco, a small principality; and some Lesser Antilles islands. On the other hand, Bangladesh, India and Japan, for instance, combine high density with very large absolute populations. Other notable countries can be found in the American continent: Puerto Rico, El Salvador, Guatemala and Cuba.
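Since density is simply inhabitants divided by area, the quoted national figures can be turned into densities with one line of arithmetic. In the Python sketch below, the populations are the approximate values given above, while the land areas are rough figures added purely for illustration.

```python
# Population density = inhabitants / area (inhabitants per km2).
# Populations are the approximate figures quoted above; land areas are
# rough values added only for this illustration.

def density(population, area_km2):
    return population / area_km2

countries = {
    "China":         (1_350_000_000, 9_600_000),
    "India":         (1_200_000_000, 3_287_000),
    "United States": (309_000_000,   9_834_000),
}

for name, (pop, area) in countries.items():
    print(f"{name}: about {density(pop, area):.0f} inhabitants/km2")
```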
DNA transcription takes place in the nucleus of a cell, which is where DNA is located. The process begins when the enzyme RNA polymerase attaches itself to a specific sequence on the DNA at a place called the promoter region. The purpose of DNA transcription is to copy genetic information from DNA to RNA so that the information in the RNA can be used to produce proteins. This process helps preserve the integrity of the DNA and prevents its information from becoming corrupted. One of the main differences between DNA and RNA is the composition of their nucleotide bases: DNA contains adenine, guanine, cytosine and thymine; RNA also contains adenine, guanine and cytosine, but its fourth base is uracil.
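The base difference mentioned above can be shown with a tiny sketch. The function below produces the RNA complement of a DNA template strand, with uracil pairing to adenine in place of thymine; it is a toy illustration only and ignores promoters, strand direction and the rest of the transcription machinery.

```python
# Toy transcription of a DNA template strand into RNA: the same pairing
# rules apply, except that uracil (U) takes the place of thymine (T).
# The template sequence is an arbitrary example.

RNA_PAIR = {"A": "U", "T": "A", "C": "G", "G": "C"}

def transcribe(template_strand):
    """Return the RNA complementary to a DNA template strand."""
    return "".join(RNA_PAIR[base] for base in template_strand.upper())

print(transcribe("TACGAA"))  # prints AUGCUU
```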
Soft drinks are the most significant factor in severity of dental erosion, according to a new study published in the Journal of Public Health Dentistry. Dental erosion is when enamel – the hard, protective coating of the tooth – is worn away by exposure to acid. The erosion of the enamel can result in pain – particularly when consuming hot or cold food – as it leaves the sensitive dentine area of the tooth exposed. The enamel on the tooth becomes softer and loses mineral content when we eat or drink anything acidic. However, this acidity is cancelled out by saliva, which slowly restores the natural balance within the mouth. But if the mouth is not given enough time to repair itself – because these acid attacks are happening too often – the surface of the teeth is worn away. Anything with a pH value (the measure of acidity) lower than 5.5 can damage the teeth. Diet and regular sodas, carbonated drinks, flavored fizzy waters, sports drinks, fruit and fruit juices are all known to be harmful to teeth if they are consumed too often. Study finds that a ‘substantial proportion’ of adults have dental erosion The study finds that a substantial proportion of adults show some evidence of dental erosion, with the most severe cases being among people who drink sugary soft drinks and fruit juices. Examining 3,773 participants, the researchers found 79% had evidence of dental erosion, 64% had mild tooth wear, 10% had moderate tooth wear and 5% displayed signs of severe tooth wear. The participants in the study with moderate and severe tooth wear consumed more soft drinks and fruit juices each day than the other groups. Among participants with lower levels of tooth wear, the researchers found that milk was a more popular drink than soda or fruit juice. Men were also found to be at twice the risk for dental erosion as women, and tooth wear became more severe with age among the participants. Commenting on the study, Dr. Nigel Carter OBE, chief executive of the British Dental Health Foundation, says that while fruit juice may be a nutritious drink, the high concentrations of sugar and acid can lead to severe dental damage if these drinks are consumed often each day. “Water and milk are the best choices by far, not only for the good of our oral health but our overall health too,” says Dr. Carter. “Remember, it is how often we have sugary foods and drinks that causes the problem so it is important that we try and reduce the frequency of consumption.” “Dental erosion does not always need to be treated. With regular check-ups and advice your dental team can prevent the problem getting any worse and the erosion going any further. The more severe cases of tooth wear can often result in invasive and costly treatment so it is important that we keep to a good oral hygiene routine to make sure these future problems do not arise.” Many sodas and fruit juices contain at least six teaspoons of sugar, and as they often come in portions that are larger than recommended, they can lead to tooth decay as well as dental erosion. Source: Medical News Today
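Because the damage threshold is simply pH 5.5, a drink's typical acidity tells you whether it falls in the erosive range. The Python sketch below makes that comparison; the pH values are rough, typical figures added for illustration and are not data from the study.

```python
# Comparing typical drink acidity against the pH 5.5 erosion threshold
# mentioned above. The pH values are rough, typical figures added for
# illustration, not data from the study.

EROSION_THRESHOLD_PH = 5.5

typical_ph = {
    "cola": 2.5,
    "orange juice": 3.8,
    "sports drink": 3.0,
    "milk": 6.7,
    "water": 7.0,
}

for drink, ph in typical_ph.items():
    if ph < EROSION_THRESHOLD_PH:
        print(f"{drink} (pH {ph}) is acidic enough to erode enamel")
    else:
        print(f"{drink} (pH {ph}) is not in the erosive range")
```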
Rust is a fungus problem that can affect nearly every green and leafing plant. Its descriptive, rusty name comes from the reddish, brownish or yellowish color it causes the leaves to turn, and also from its spores, which appear powdery, will rub off on anything that touches the affected leaves, and will spread on the slightest air current. Controlling rust and curing affected plants is simple.

Remove Infected Leaves and Stalks
Remove infected leaves immediately. If rust infects your turf grass, regular mowing with a bag-style mower will help to do this. Prune away infected plant stalks as soon as the plant has finished its blooming cycle. Never compost this infected organic material; composting it can infect the resulting mulch and loam. Instead, burn the infected plant leaves and stalks to control the spread of the fungus.

Use Sulfur and Fungicides
Use sulfur or other fungicides to kill the surface spores and clamp down on the spread of the infection to other plants. Plants should be treated every 10 days with fungicide. According to Drs. Buck and Williams-Woodward at the University of Georgia, you should change the fungicide that you use periodically to prevent rust from developing a tolerance to it. Sulfur used on a lawn will also help repel insects such as mosquitoes and ticks, but it must be reapplied after each rain.

Adopt Preventative Measures
You can stop the spread of rust and help bolster rust-infected plants by improving the overall "hygiene" of a plant. Rust is a problem in areas where plants are exposed to moisture for long periods of time, such as hot, humid areas where dew does not dry. Increasing air circulation around and through a plant helps it dry faster and creates conditions where rust cannot thrive. To do this, space plants such as day lilies, roses, and fruit trees and bushes farther apart when planting. Additionally, prune plants such as roses and fruit trees and bushes so that air can pass through their center mass more easily. Finally, adding organic fertilizer such as compost, or liquid fertilizer in the plant's water supply, will help to make a plant healthier. Healthier plants are able to resist infections of rust and other fungi more effectively than unhealthy plants, in the same way that people who take their vitamins are more resistant to the common cold.
You are here - 800 000 The term Paleolithic was created at the end of the nineteenth century. Its ancient Greek etymology refers to the « Old Stone Age », as opposed to the « New Stone Age », which refers to the succeeding Neolithic period. The Paleolithic period begins with the first evidence of human technology (stone tools) more than three million years ago, and ends with the major changes in human societies instigated by the invention of agriculture and animal domestication. In the Mesolithic period, the successors of Paleolithic humans adapted to the rapid global warming that constitutes the beginning of our present interglacial period, circa 9,600 BCE ago. Their lifestyle nonetheless continued to be based on hunting, fishing and gathering until with the arrival of the Neolithic farmers-breeders circa 6000 BCE in France. In France, the Neolithic period, which corresponds to the first farming societies, extended from 6000 to 2200 BCE. During this time, the nomadic way of life was replaced by a sedentary one. Ceramic technology was used make pottery and some stone tools, such as axes, were polished. After Prehistory, which includes the Paleolithic, Mesolithic and Neolithic, the Bronze Age is the first period of « Protohistory », also called the « Metal Ages ». Marked by significant technological and social advances, the Bronze Age was an important step in the evolution of European societies. It is characterized by the use of bronze metallurgy, to create this alloy mainly composed of copper and tin. The Iron Age, which corresponds to the second part of Protohistory, extends from 800 BC to the end of the first century AD. During this period, the regions corresponding to present-day France were gradually frequented by populations with a prolific written language (Greeks andRomans). The local populations (Celts, Gauls, Ligures, Iberians, etc) had little or no writing, on the other hand. Most of our knowledge of these human groups is therefore provided by archaeology, along with a few Greek and Latin texts. The Roman civilization, which existed for twelve centuries in Italy, from the 8th century BCE to the 5th century AD, was constantly nourished by outside influences and borrowings. It extended outside of Italy as early as the 3rd-2nd centuries BCE. In 120 BCE, the Roman province of Transalpine (roughly equivalent to the southern part of present-day France) was created, and in 52 BCE Caesar conquered Gaul. The Middle Ages spanned more than one thousand years. According to historians of texts, it began in 476 AD , at the end of the reign of Romulus Augustule, the last Roman emperor of the Occident, or in 496 AD, the date of the baptism of Clovis. It ended in either 1453, with the taking of Constantinople by the Turks and the end of the Eastern Roman Empire, or in 1492, the date of Christopher Columbus' landing on the American continent, or the death of Louis XI in 1481. The modern era covers the three centuries between the end of the Middle Ages and the French Revolution. In France, it can be subdivided into three periods that are marked by important political and artistic transformations : the Renaissance (from the end of the 15th century to the first decades of the 17th century), the advent of the nation-state during the reign of Louis XIV (17th and early 18th century), and the Enlightenment (the eighteenth century until the Revolution). The contemporary period extends from the beginning of the nineteenth century to the present day. 
Many historians place its beginning in 1789, or at the Congress of Vienna (1815), which marks the end of the Napoleonic period. In Europe, these two centuries are characterized by phenomena and events of an unprecedented magnitude: demographic growth, industrialization and productivism, political revolutions, globalization of crises, colonialist extensions and collapses, nationalisms, wars, etc., along with the extension of democracy, totalitarian episodes, mass education, the decline of Christianity, agriculture, progress in medicine, etc. The Amerindian period extends from the origins of Prehistory to contact with the Old World in 1555, with the arrival of the brothers Pinzon on the coast of Guyana. This definition applies to all of South America and the West Indies. It is characterized by Amerindian migrations between the West Indies and South America and the development of eponymous cultures. The colonial period began with the contact between the Old and New Worlds in 1555, and ended with the abolition of slavery in 1848. It was characterized by the slave trade between Africa and the Americas, and exchanges between the new arrivals and Native Americans. It was also marked by the bacteriological shock associated with the arrival of the settlers, which contributed to the disappearance of part of the Amerindian population. This section contains information on excavations carried out elsewhere in the world, whose chronology does not correspond to that of the European regions or the West Indies.
According to the Centers for Disease Control, 1 in 13 Americans have asthma. More than 50% of asthma is found in children. Asthma affects more than 7 million children in the United States. Coughing and wheezing are the most common symptoms of asthma, and they often occur at night. Asthma triggers include inhalant irritants like perfume or tobacco smoke, inhalant allergens like pets and pollen, viral infections like the common cold, exercise, and cold air or weather changes. Risk factors for asthma are tobacco smoke exposure, having other allergic diseases like eczema and pet and pollen allergies, a family history of asthma, living in an urban city with increased air pollution, obesity, reflux, being male and being African American. During the winter, the cold dry air and upper respiratory tract infections can worsen asthma symptoms. To minimize asthma attacks in this weather, it is important to get the flu vaccine, keep your mouth closed and breathe through your nose when outdoors, replace heating system filters, exercise indoors, wash your hands, and stay away from sick people. It is also very important to regularly take your prescribed asthma medication, which may consist of inhaled steroids, a combination steroid/long-acting beta-agonist, and/or montelukast, and to make sure these medications and your rescue albuterol inhaler are not expired. About 25 million Americans who have asthma also have allergies. For people with allergic asthma, winter is a critical time to avoid irritants and indoor allergens like dust and pet dander. It is also a time to optimize treatment for pollen, dust, and other environmental allergies, which may include receiving allergen immunotherapy or allergy shots. Other new asthma treatment options include Xolair (omalizumab) and Nucala (mepolizumab), which are FDA-approved to specifically treat allergic asthma. Children as young as 6 years old can now receive Xolair to help with asthma that has an allergic trigger. Xolair blocks Immunoglobulin E (IgE), which is an antibody in your body that plays a key role in the allergic response in allergic asthma. Nucala can be administered to children as young as 12 years old. Nucala reduces levels of a certain type of white blood cell called eosinophils that may contribute to the lung inflammation found in allergic asthma. If you think you or a family member may have asthma, or your asthma is not well-controlled with your current medications, be sure to consult with your physician or an asthma specialist as soon as possible.
In this Combination Factory lesson plan, fourth graders use cut-outs of clothing to illustrate the number of combinations of articles of clothing they can create. Students create a tree diagram and count the number of possibilities. They practice showing the information in other forms, such as an organized list and a table. Students practice combinations, using manipulatives as needed, in up to a 3x3 combination.
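As a companion to the activity, here is a minimal Python sketch (the clothing names are made up for illustration) of the same counting idea: a tree diagram over a 3x3 set of choices yields 3 × 3 = 9 combinations.

```python
from itertools import product

# Hypothetical clothing cut-outs; any 3x3 set of items behaves the same way.
shirts = ["red shirt", "blue shirt", "green shirt"]
pants = ["jeans", "shorts", "khakis"]

# Each branch of the tree diagram corresponds to one (shirt, pants) pair.
outfits = list(product(shirts, pants))

for shirt, pair in outfits:
    print(shirt, "+", pair)

# 3 choices x 3 choices = 9 combinations, matching the organized list or table.
print("Total combinations:", len(outfits))
```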
Year-round high temperatures characterize the biome, with a daily temperature range exceeding the seasonal range. Day lengths are essentially the same year-round. Precipitation is seasonal, but it is seldom dry enough for drought stress. There are one or more relatively dry months (with less than 100 mm rainfall) almost anywhere in the zone, and few areas are wet throughout the year. The wet/dry seasons are associated with the movement of the thermal equator back and forth over the geographic equator, and this movement produces two rainy seasons within that zone. Strong winds are associated with storms or the dry season. The microclimate is substantially different above and below the canopy, very significant to plants and insects. Soils of this region are typically latosols. Chemical weathering is pronounced with high rainfall, thus soil profiles are deep, and there is little development of horizons below the shallow organic layer. Silica and other cations are carried away by leaching, leaving an acidic soil with high proportions of aluminum and iron oxides. The soil color is often reddish or yellowish-red. Under certain rainfall regimes, iron compounds become concentrated in a particular horizon ("laterite"), which may become hard and almost impervious to plant roots. Decomposition is very rapid, with organic material in the soil concentrated right at the surface and most nutrients retained in the above-ground biomass. Tree growth is luxuriant, with emergent trees to 60 m and canopy trees to 30 m or more. The canopy is continuous except over water bodies and at windfalls ("gaps"). These are complex forests with as many as five moderately well-defined layers--emergents, upper canopy, lower canopy, understory and shrub/herb. Because of the dense leaf canopy, plant growth is suppressed and the undergrowth is relatively open in mature forest; the dense "jungle" of popular concept is associated with preclimax stages and gaps. Most plant species are evergreen, their leaves elliptic, often with an elongate ("drip") tip. Tree trunks are usually light-colored, straight and vertical, many with flaring buttresses; the bark is smooth and often patched with lichens. Lianas (large woody vines) are prominent. Epiphytes (growing on branches of other plants) reach their greatest development here, especially at slightly higher elevations, and epiphylls (growing on the leaves of other plants) are found only here. Decomposed plant material recycles almost instantly, so there is very little leaf litter. This zone has the highest plant diversity of any zone. There are thousands of species of trees, up to few hundred even in very restricted areas. Gymnosperms are rare, except cycads. Ferns and monocots are very diverse, many of them with tree form. Many of the tree species are in large families that are entirely or largely restricted to tropical forest areas, including Piperaceae, Moraceae, Annonaceae, Lauraceae, Capparidaceae, Leguminosae, Meliaceae, Anacardiaceae, Sapindaceae, Sterculiaceae, Guttiferae, Myrtaceae, Melastomaceae, Araliaceae, Myrsinaceae, Sapotaceae, Verbenaceae, Bignoniaceae, and Rubiaceae. The majority of large families are distributed throughout all the tropical continents. The Orchidaceae is one of the largest plant families, primarily epiphytic in this biome. Liana and vine families include Vitaceae, Leguminosae, Passifloraceae, Convolvulaceae and Cucurbitaceae. Oxalidaceae, Begoniaceae, Apocynaceae, Asclepiadaceae, Gesneriaceae, and Acanthaceae are important herbaceous families. 
Animal diversity is also highest in this zone, with an almost incomprehensible variety of insects possible in a few hectares of rain forest. As in plants, many species are rare (few per unit area) and specialized. Large mammals are not diverse in primary forest, as locomotion is hindered by dense vegetation, but a few major orders (Chiroptera, Primates) are especially well represented. Other characteristic mammalian groups include tree shrews, squirrels, cavies, sloths, pangolins, forest deer and antelope, civets, and cats. Birds reach their greatest diversity in this zone, with over 500 species of birds recorded at single tropical localities of restricted extent. Characteristic groups include pigeons, parrots, hummingbirds, hornbills, toucans, ovenbirds, antbirds, cotingas, pittas, birds-of-paradise, babblers, bulbuls, and tanagers. Lizards, snakes, and frogs also exhibit their greatest diversity in the rain forest, including many groups restricted to it. Caecilians are a major amphibian group restricted to the tropics, mostly in forested areas. With so much water available, there is also a tremendous diversity of aquatic animals in this zone, although the temperate-to-tropical diversity gradient is not so extreme as in most terrestrial groups. With intense competition for light, many trees have the ability to remain semidormant under the canopy until a light gap appears, then undertake very rapid growth. Most of the light-receiving leaves in understory species are arranged in a single layer to avoid self-cast shade (monolayering). Epiphytes, epiphylls, and lianas all represent strategies for small plants to grow up higher where there is more light. Canopy leaves are leathery and drought-resistant to withstand severe sun intensity in this layer. Some leaves alter their orientation during the day to avoid sun stress; this is controlled by turgor pressure. Elongate leaf tips may serve to draw water off wet leaves, permitting respiration. New leaves in many plants are without chlorophyll (they look red or white) until they have grown to full size and have survived potential herbivore browsing. Extensive buttresses furnish support necessary because root systems are shallow, extending laterally to tap the surface layer of nutrients. Mycorrhizae (symbiotic fungal associations) in roots allow direct connection with the litter layer for efficient nutrient absorption. Pollination and seed dispersal are largely by animals, and interactions between plants and animals are very highly developed in this zone. Much pollination is by vertebrates and social bees, which travel long distances between scattered individuals of many plant species. Animals show year-round activity and very high diversity, thus interactions among species are intense. With the high diversity of predators, antipredator adaptations are maximally developed here. Camouflage is virtually perfect in the majority of smaller animals. Not only do brown and green predominate, but in some species the color changes with the background color. Background matching extends to shape as well as color; many insects, lizards, snakes, and frogs look like leaves, twigs or vines, down to amazingly fine details. Animals as different as clouded leopard and python have similar markings, and the same leaf type is mimicked by animals as different as katydids and chameleons. 
A considerable portion of animal activity occurs in the canopy, where light is not limiting and plant productivity is maximized; in this complex landscape, adaptations for arboreality abound. Locomotory modes include climbing, jumping, brachiation, gliding, and flight. There are many specific adaptations such as sharp claws for climbing, opposed digits and prehensile tails for wrapping around branches and twigs, long hind legs for leaping, I-beam construction for rigidity, loose skin or skin fringes or expansible rib cages for gliding, and wings for flight in three major taxonomic groups. The number of animal/plant interactions are maximal in this bioclimatic zone, with many complex adaptations to facilitate these interactions, which include not only destructive interactions such as herbivory but mutually beneficial ones such as pollination and fruit dispersal. Many of the major groups of flower-feeding birds (hummingbirds, sunbirds, flowerpeckers, honeyeaters) and mammals (pteropodid and glossophagine bats) are tropical, as are most fruit-eating birds and mammals. Arboreal species travel through the forest in search of fruiting trees (with learning the location of traditional trees), where much social interaction within and between species takes place. Earthbound animals benefit from the rain of edible fruit. Complex, often coevolutionary, interactions are commonplace, with high levels of mutualism and commensalism, including many of the most fascinating textbook examples. Because of the high species diversity, some groups exhibit substantial "aspect diversity" (great differences in appearance), perhaps to counter predator search images and/or for quick species recognition. The original hunter/gatherer populations had relatively little effect on the environment, but with population increase, especially with development of real population centers, they have hunted out a substantial proportion of large animals, especially conspicuous ones such as macaws and monkeys or rare ones such as cats. Hunting for "bush meat" is still a primary factor in rarity. Habitat destruction is usually the most serious problem, both by small-scale slash-and-burn agriculture and large-scale land clearing for ranching and farming. After a few episodes of clearing, soil loses essentially all its nutrients, becomes infertile and hard (laterization), and neither supports much plant growth nor acts as a water sink. Instead, erosion becomes a major problem, with much runoff of clays into streams and accompanying pollution. Because of the tremendous diversity of tropical rain-forest species, the restricted range of many of them, and especially that so many of them are undescribed or scarcely known, habitat destruction is more serious in this biome than any other. Nowhere else is there such likelihood of widespread species extinction, much of which will happen unknown to us. Specific taxa still being depleted by hunting of individual animals include fur-bearing cats (most diverse in this zone), primates (much sought as food in some areas), and animals used as pets (e. g. parrots, fresh-water fish); stricter import regulations have alleviated but not prevented this. A tremendous variety of tropical rain forest plants have been cultivated by humans. The high levels of secondary compounds in tropical plants have made them valuable as spices, stimulants, and other drugs, and many others are cultivated for food, clothing, and shelter. 
Recent discoveries indicate the potential value of tropical plants to humans has been scarcely realized.
Browsing through DepEd's curriculum guide for science, one can pick from the grade level standards elements that are related to chemistry:
Grade 3: Students will learn that things may be solid, liquid or gas while others may give off light, heat and sound.
Grade 4: After investigating, learners will identify materials that do not decay and use this knowledge to help minimize waste at home, school, and in the community. They will also investigate changes in the properties of materials when these are subjected to different conditions.
Grade 5: After investigating, learners will decide whether materials are safe and useful based on their properties. They will also infer that new materials may form when there are changes in properties. Learners will recognize that different materials react differently with heat, light, and sound. They will relate these abilities of materials to their specific uses.
Grade 6: Learners will recognize that when mixed together, materials do not form new ones, thus these materials may be recovered using different separation techniques. Learners will also prepare useful mixtures such as food, drinks and herbal medicines.
Grade 7: Learners will recognize the system of classification of matter through semi-guided investigations but emphasizing fair testing.
Grade 8: Learners will explain the behavior of matter in terms of the particles it is made of. They will also recognize that ingredients in food and medical products are made up of these particles and are absorbed by the body in the form of ions.
Grade 9: Learners will explain how new materials are formed when atoms are rearranged. They will also recognize that a wide variety of useful compounds may arise from such rearrangements.
Grade 10: Learners will recognize the importance of controlling the conditions under which a phenomenon or reaction occurs. They will also recognize that cells and tissues of the human body are made up of water, a few kinds of ions, and biomolecules. These biomolecules may also be found in the food they eat.
One of these differences is very important. The word "stoichiometry" cannot be found in DepEd's K to 12 curriculum guide. Tai, Ward and Sadler, in a study published in the Journal of Chemical Education ("High school chemistry content background of introductory college chemistry students and its association with college chemistry grades." J. Chem. Ed., 2006, 83(11), 1703-1711.), found that of all the topics that high school chemistry covers, only "stoichiometry" is a good predictor of college chemistry performance. They arrived at this conclusion from a survey of more than 3000 students across the United States. The statistical analysis shows convincingly that performance in introductory college chemistry courses is strongly correlated with how well stoichiometry was covered in high school. Excerpts from individual student responses provide a glimpse of the underlying reason behind this strong correlation:
"I think stoichiometry gave a lot of kids trouble so I think my fairly strong background with that gave me a heads up."
"...stoichiometry—I learned that really well in high school and I remembered it all throughout chemistry."
"...knowledge about stoichiometry from high school chemistry helped me most."
"I'd have to say stoichiometry because quite a few people had problems with that."
"...stoichiometry and the ability to apply conversions helped the most."
"...most helpful was the depth [with which] we covered stoichiometry...."
N2 + 3H2 → 2NH3
[Figure: examples of Lewis acid-base equilibria, from http://www2.chemistry.msu.edu/faculty/reusch/virttxtjml/react1.htm]
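Because the college-readiness finding above hinges on stoichiometry, a small worked example may be useful. This is only a sketch; the rounded molar masses and the helper function are mine, not part of the curriculum guide or the cited study.

```python
# Stoichiometry sketch for N2 + 3 H2 -> 2 NH3 (molar masses rounded, in g/mol).
M_H2 = 2.02
M_NH3 = 17.03

def grams_nh3_from_h2(grams_h2):
    """Mass of NH3 obtainable from a given mass of H2,
    assuming excess N2 and complete reaction."""
    moles_h2 = grams_h2 / M_H2
    moles_nh3 = moles_h2 * (2 / 3)   # mole ratio from the balanced equation
    return moles_nh3 * M_NH3

print(round(grams_nh3_from_h2(10.0), 1))  # about 56.2 g of NH3 from 10 g of H2
```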
Sinks and Sources - Awesome Mangroves Carbon sequestration! If you have no idea what this is, I wouldn't blame you - it sounds complicated...and boring. But simply put, it means the ability to take carbon-dioxide - a colourless, odourless gas - out of the atmosphere and hold it in storage. Carbon can be found in some form or another in all living things - in the soil, plants and animals, water and in the air. It moves between these hosts in what is known as the carbon cycle. In this cycle, places which release carbon into the atmosphere are called carbon 'sources' and those which 'sequester' carbon are called carbon 'sinks'. There are a number of 'sources' and 'sinks' but to keep things simple, we will just concentrate on the role that plants play in the carbon story. Forests have sometimes been called the 'lungs of the planet'. If you'd paid any attention in Basic Science class, you would have learnt that plants 'breathe in' carbon dioxide (CO2 ) from the air and 'breathe out' oxygen. It's a little more complicated than that but the general idea is that plants use the CO2 to help produce the energy they need to survive. Some of this CO2 is stored in the plant itself but in some cases, a lot more of it is transferred to the soil beneath it. Over time, soil sediments build up; the mass becomes heavy and compacts. Fast forward a couple of millennia and the decomposed matter in the soil will eventually form deposits of fossil fuels like coal, petroleum and natural gas. Wetlands (mangroves, sea grasses and marshes) are recognised as some of the most effective plant carbon sinks - some studies quote they suck up to ten times more carbon dioxide from the atmosphere than the average rainforest. There's a reason for this. While most tropical forests store carbon above the ground, wetlands also move a lot of their carbon into the soil. Mangroves are a key wetland ecosystem in the tropics. They trap sediment and organic matter driven through their roots by water currents to form thick layers of mangrove mud. Falling leaves and other decomposing plant and animal material are also trapped within the suffocating mud substrate adding to the high amount of carbon in mangrove soil. So we've established that wetlands are great carbon sinks. That's cool but why is a little extra carbon in our atmosphere a problem? Because of global warming and climate change. Like a blanket, our atmosphere wraps around our planet, protecting us from the Sun's harmful ultra-violet (UV) rays. This layer of gasses also traps heat and provides a controlled temperature range for plant life to flourish on Earth and from it, every animal and human further on up the food chain. Most temperate countries compare this to a greenhouse - hence the popular phrase 'Green House Effect'. But the composition of our atmosphere is changing - largely due to the accumulation of certain Green House Gases (GHGs) like CO2 which thicken the atmospheric layer. More heat is trapped on the Earth's surface and in turn, increases global temperatures. It makes sense. Using the blanket example, the thicker the blanket, the warmer you'll be underneath. Heat is good but anyone who's been to the Sahara Desert will tell you that too much heat can be downright uncomfortable, if not fatal. Temperature is a major component of climate and is important for life on Earth. The plants and animals we have in today’s world, have all survived because they have adapted to climatic conditions. It dictates their location, seasonal fruiting, migration, mating etc. 
Those that didn't adapt have all died out. One of the main causes of this recent warming is the increasing amount of CO2 in the atmosphere over the last century. In the past, CO2 was added naturally through volcanic eruptions, but now a lot more carbon is being added directly through the burning of fossil fuels like coal and petroleum, and with the explosion of the human population, consumption of natural resources is greater than ever. For example, land use for urban development and agriculture has removed a lot of the forests which absorb CO2. Pollution is also affecting the health and ability of our natural resources to function properly. In short, we're putting more carbon in the atmosphere, getting rid of our carbon sinks and making our planet weaker. If forests are the lungs of the planet, then we're poking holes all over them! But perhaps our lack of alarm is because global warming is not a new phenomenon. Throughout recorded history, the Earth has undergone a natural cycle of warming and cooling periods. Some people will argue that we are just in the middle of another natural warming period. But natural or not, scientists do agree that our planet is warming - faster than in any previous period - and this is affecting the way in which our climate system works. We might not be able to fix climate change, but what we can do is fix the things we can, and this includes reducing our impact on our natural resources. A lot of wetlands (mangroves included) are being removed because they smell, they're unsightly, presumably a waste of flat land, block easy access to the ocean and in some cases, simply because we really need that sea view to drive up our property value. Development is essential but shouldn't be pursued blindly without weighing its consequences. Natural resources, once gone, are difficult to replace, and even if we can replace them, it may take decades to get them fully functioning again. Carbon sequestration is hardly a topic to get the heart racing - but there's a chance that the more we know about the benefits our natural environment offers, the more value we will attribute to it and the more measures we will take to ensure that our natural resources stick around for the long-term. Small steps can often lead to big changes. This is the second article in a series supporting the National Mangrove Awareness Campaign in Fiji. Stephanie Robinson is the Coordinator of the AusAID Building Resilience to Climate Change Programme at WWF South Pacific.
MIT engineers have developed a fuel cell that runs on the same sugar that powers human cells: glucose. This glucose fuel cell could be used to drive highly efficient brain implants of the future, which could help paralyzed patients move their arms and legs again. The fuel cell, described in the June 12 edition of the journal PLoS ONE, strips electrons from glucose molecules to create a small electric current. The researchers, led by Rahul Sarpeshkar, an associate professor of electrical engineering and computer science at MIT, fabricated the fuel cell on a silicon chip, allowing it to be integrated with other circuits that would be needed for a brain implant. The idea of a glucose fuel cell is not new: In the 1970s, scientists showed they could power a pacemaker with a glucose fuel cell, but the idea was abandoned in favor of lithium-ion batteries, which could provide significantly more power per unit area than glucose fuel cells. These glucose fuel cells also utilized enzymes that proved to be impractical for long-term implantation in the body, since they eventually ceased to function efficiently. The new twist to the MIT fuel cell described in PLoS ONE is that it is fabricated from silicon, using the same technology used to make semiconductor electronic chips. The fuel cell has no biological components: It consists of a platinum catalyst that strips electrons from glucose, mimicking the activity of cellular enzymes that break down glucose to generate ATP, the cell’s energy currency. (Platinum has a proven record of long-term biocompatibility within the body.) So far, the fuel cell can generate up to hundreds of microwatts — enough to power an ultra-low-power and clinically useful neural implant. “It will be a few more years into the future before you see people with spinal-cord injuries receive such implantable systems in the context of standard medical care, but those are the sorts of devices you could envision powering from a glucose-based fuel cell,” says Benjamin Rapoport, a former graduate student in the Sarpeshkar lab and the first author on the new MIT study. Rapoport calculated that in theory, the glucose fuel cell could get all the sugar it needs from the cerebrospinal fluid (CSF) that bathes the brain and protects it from banging into the skull. There are very few cells in the CSF, so it’s highly unlikely that an implant located there would provoke an immune response. There is also significant glucose in the CSF, which does not generally get used by the body. Since only a small fraction of the available power is utilized by the glucose fuel cell, the impact on the brain’s function would likely be small.
via Science Daily
From: Arizona State University. Posted: Wednesday, September 26, 2001.
Miraculous things happen to the desert when it rains - everything changes from brown to green and organisms that have not been seen for months make a brief emergence from underground lairs. In fact, even the desert's soil turns visibly green following the rare desert rain, as hidden filaments of photosynthesizing cyanobacteria suddenly hydrate. Lying a few millimeters deep, these primitive prokaryotes quickly glide upward, migrating en masse to the surface for an hour or so of light exposure until the dirt begins to dry. Then, just as suddenly, they return again to the subsurface, where they begin the long wait for the next rain. The existence of such "cryptic" communities of microbes has long been known, and it has long been assumed that the organisms' behavior can be explained by common light-responsive behavior. Now, a new finding by Arizona State University microbial ecologist Ferran Garcia-Pichel and Olivier Pringault of the Biological Oceanography Laboratory at the University of Bordeaux shows that the phenomenon is actually more complicated, with significant implications for the behavior and ecology of other underground microbes. The research is reported in the September 27 issue of the journal Nature. Observing several different species of soil crust-inhabiting cyanobacteria, the team found that the bacteria's movements were affected by the presence or absence of water, not just light - the first time such behavior has ever been observed in bacteria. According to Garcia-Pichel, the team was first intrigued by a "serendipitous" field observation. "What we discovered was that when one of these wetting events took place, the cyanobacteria came up to the surface of the soil. But once the soil started drying out, the cyanobacteria returned to the subsurface though the light didn't change. Essentially nothing changed except the availability of water," he said. Subsequently, the bacteria were moved to a laboratory setting and were tested under controlled lighting conditions, using microprobes to measure the relation of bacterial movement to water content in the soil surface. Test results showed clearly that the bacteria "tracked" the water. "These migrations are really population migrations that occur in millimeter scale -- close to 100 percent of the population will come up to the surface," Garcia-Pichel noted. "Their tendency to track the water overwhelms their tendency to track the light. We've never seen this before." Water, Garcia-Pichel hypothesizes, is critical to the bacteria not just for metabolism, but also for movement. "They go down because by tracking the water, they protect themselves. They will get dry eventually, and when they get dry they can't move. At the surface they would be more subject to hazardous conditions." Garcia-Pichel points out that the finding may have large implications for investigating the ecology of the still poorly understood bacterial species that live deep beneath the earth's surface. "Once traits like this are found, they're usually not restricted to one organism. We've seen this in a variety of cyanobacteria. If this is really a widespread ability of bacteria, it also has implications on how we understand the bacterial communities in the deep subsurface. Bacterial communities may be following water in the subsurface over large distances," he said. Similarly, there are implications for locating life in another extreme environment - Mars.
Though cyanobacteria are among the most primitive living things, they have developed sophisticated skills for dealing with an environment where water is both scarce and transitory. "Desert soils are one of the earthly ecosystems that may have some significance on Mars. If Mars had some water in the past, then these desiccation-resistant environments are probably going to be the last to have existed there. This is one of the most likely ecosystems to have left an imprint that we can find some evidence for," Garcia-Pichel said. "'Follow the water' has become a productive shorthand for expressing the scientific directions of our exploration of Mars, and beyond," said Rose Grymes, Associate Director of the NASA Astrobiology Institute, of which Arizona State University is a member. "This fascinating research contributes directly to our understanding of how living systems adapt to and impact the planetary environment, and how they leave their signature; even in places that appear highly inhospitable." The research was funded by a grant from the U.S. Department of Agriculture.
We examine 3-dimensional objects that are rectangular in shape, measuring their volume, surface area, and the length of a diagonal.
The Lesson: We ask in practical terms two questions about 3-dimensional objects. First, how much paint does it take to paint the object (we would have to know the surface area)? Second, how much water can the object hold (we would have to know the volume)? We show how to calculate surface area and volume.
Let's Practice: A rectangular solid is a 3-dimensional object with six sides, all of which are rectangles. We first examine a cube, in which all six sides are squares. In the diagram below, a square of side 2 inches is used to form a cube with six square sides. The square clearly has an area of 4 square inches. Therefore the cube has a surface area of 24 square inches, because its surface is composed of six of these squares. The cube is composed of 8 smaller cubes with a side of 1 inch. The volume of this cube is 8 cubic inches. We note that a square has its name because its area is the square of the length of a side. The area is 4 = 2². The cube has its name because the volume is the cube of the length of a side. The volume is 8 = 2³. We can generalize this result for cubes by saying that the volume of a cube of side a is a³. Since the area of one side is the length of the side squared, the entire surface area of the cube is 6 x 4 = 24. This can also be generalized for any cube of side a. The surface area is 6 x a² = 6a².
In the diagram below, we show a rectangular solid at right with dimensions 5 x 2 x 3 inches. These are the measures of the length l, the width w and the height h. The area of the Front/Back rectangles is 15 square inches. The area of the Sides is 6 square inches, and the area of the Top/Bottom rectangles is 10 square inches. Adding these we get the surface area, which is 62 square inches. The volume is found by multiplying the lengths of the sides as we did with the cube. The volume is 30 cubic inches. We generalize this result. The volume of a rectangular solid is lwh. The surface area is found by adding the areas of the sides of the solid. We can also calculate the length of a diagonal in this rectangular solid. We extend the Pythagorean Theorem and find the length of the diagonal (dotted line in the diagram below) to be √(5² + 2² + 3²) = √38. In general, for a rectangular solid we have the length of the diagonal as √(l² + w² + h²).
- A cube has a side of 3 meters. What are the measures of the surface area, the diagonal, and the volume? The volume of a cube is the length of the side cubed. The volume is 3³ = 27 cubic meters. There are six sides which are squares of side 3, each having an area of 9 square meters. The total surface area is 54 square meters. The diagonal is √(3² + 3² + 3²) = 3√3 ≈ 5.2 meters.
- A rectangular solid has dimensions 3 x 6 x 7. What are the measures of the surface area, the diagonal, and the volume? The volume is found by multiplying the measures of the sides. The volume is 126. There are two sides with dimensions 3 x 6 = 18, two sides with dimensions 3 x 7 = 21 and two sides with dimensions 6 x 7 = 42. The total surface area is 2(18) + 2(21) + 2(42) = 162. The diagonal is √(3² + 6² + 7²) = √94 ≈ 9.7.
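The formulas above generalize directly. Here is a minimal Python sketch (the function name and variable names are mine, not from the lesson) that reproduces both practice answers:

```python
from math import sqrt

def box_measures(l, w, h):
    """Volume, surface area, and space diagonal of an l x w x h rectangular solid."""
    volume = l * w * h
    surface_area = 2 * (l * w + l * h + w * h)
    diagonal = sqrt(l**2 + w**2 + h**2)
    return volume, surface_area, diagonal

print(box_measures(3, 3, 3))  # cube of side 3: (27, 54, 5.196...) with diagonal 3*sqrt(3)
print(box_measures(3, 6, 7))  # (126, 162, 9.695...) with diagonal sqrt(94)
```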
Inside your bones is a thick mass of cells called bone marrow. Every hour, a small number of stem cells in it create all other kinds of blood cells that exist in your body, including leukocytes, erythrocytes, and platelets. These cells are essential to your health - leukocytes fight infection, erythrocytes carry oxygen, and platelets help the blood clot. When a person has a blood disease, such as aplastic anemia or leukemia, doctors may perform bone marrow transplants to re-establish a healthy blood supply. Many transplants occur after a patient has received chemotherapy or radiation treatment to destroy cancerous or other disease-causing cells. Both abnormal and normal cells are killed by these treatments, including stem cells. A bone marrow transplant starts the blood production process from scratch with normal stem cells. An allogeneic transplant - where another person's bone marrow is given to a patient - doesn't always work because of rejection or because of graft-versus-host disease. Rejection of the donor's marrow occurs because our bodies fight off invading foreign cells. If a donor's marrow doesn't match perfectly, the recipient's immune system may identify the new cells as foreign and destroy them, leaving the patient unable to create new blood. Graft-versus-host disease occurs because the new immune system from the donor's marrow may identify the patient's body as foreign and try to destroy it. When the donor's immune cells in the marrow attack the patient, many symptoms may result and, in severe cases, the patient could die. Doctors decrease these risks by trying to select a patient/donor pair whose immune cells will identify each other as "self." An identical twin's cells will see the other twin's cells as self. But most patients do not have an identical twin. So doctors look at a person's human leukocyte antigens (HLA) to match donor and patient bone marrow. These are proteins present on the surface of our cells. They play a big role in telling immune cells that other cells are either foreign or "friendly" self cells. Doctors will look at HLA antigens on your siblings' cells, because you have a 25 percent chance of having an HLA match with a brother or sister. Among unrelated people, only one in 20,000 people will be an acceptable match.
Find the connection between rolling dice and a genetic match. What are the chances of getting a match at random from an unrelated donor? In this activity, you will learn about probability using a pair of dice.
Materials: a pair of dice
1. If you rolled a pair of dice, what chance would you have of getting matching numbers? Write down how many times you think you'd have to try before you got a match.
2. The first concept you need to know is that the probability of something happening is expressed in this simple equation: probability = number of favorable outcomes divided by number of possible outcomes. In this example, you're trying to get an outcome where the two dice match. A die is a cube with six possibilities: you can roll either a 1, 2, 3, 4, 5, or 6. So with one die, your probability of rolling a 5 is 1 (number of favorable outcomes) out of 6 (number of possible outcomes), or 1/6.
3. Next, you have to figure out how your probability changes when you roll a pair of dice. First, consider what the new number of possible outcomes is. Before, there were six. Now, there are many more combinations possible. Below is a chart listing all the possible rolls for your dice, naming them Die A and Die B. We started the chart to help you figure it out.
Fill in the missing numbers to complete all the possible die rolls.
Die A & Die B | Die A & Die B | Die A & Die B | Die A & Die B | Die A & Die B | Die A & Die B
1 & 1 | 1 & 2 | 1 & 3 | 1 & 4 | 1 & 5 | 1 & 6
2 & 1 | 2 & 2 | 2 & 3 | 2 & 4 | 2 & 5 | 2 & ( )
3 & 1 | 3 & 2 | 3 & ( ) | 3 & ( ) | 3 & ( ) | ( ) & ( )
4 & 1 | ( ) & 2 | 4 & ( ) | ( ) & ( ) | ( ) & ( ) | ( ) & ( )
5 & 1 | 5 & ( ) | ( ) & ( ) | ( ) & ( ) | ( ) & ( ) | ( ) & ( )
6 & 1 | ( ) & ( ) | ( ) & ( ) | ( ) & ( ) | ( ) & ( ) | ( ) & ( )
4. Count how many possible outcomes you can have when rolling two dice. If we wanted to calculate our chances of rolling a 5 on either or both dice, we would have to rewrite our probability equation: probability = ??? (number of times Die A = 5, Die B = 5, or both = 5 -- fill in your count from above) divided by ??? (number of all possible rolls -- fill in your count from above).
5. Now use what you've learned and the chart you've completed to calculate the chances of rolling matching dice.
Questions
1. The HLA proteins are determined by genes on chromosome 6. Each parent has two of these chromosomes, and these four HLA types are almost always different. You inherited one HLA type from each parent, as did your siblings. What is the probability that one of your siblings inherited the same HLA types that you did?
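To check your answers, here is a short Python sketch (my own, not part of the original activity) that enumerates all the rolls in the chart and counts the matches:

```python
from itertools import product

rolls = list(product(range(1, 7), repeat=2))              # every (Die A, Die B) outcome
matches = [roll for roll in rolls if roll[0] == roll[1]]  # (1,1), (2,2), ..., (6,6)

print("possible rolls:", len(rolls))                          # 36
print("matching rolls:", len(matches))                        # 6
print("probability of a match:", len(matches) / len(rolls))   # 6/36 = 1/6
```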
When an object rotates about its axis, the motion cannot simply be analyzed as a particle, since in circular motion it undergoes a changing velocity and acceleration at any time (t). When dealing with the rotation of an object, it becomes simpler to consider the body itself rigid. A body is generally considered rigid when the separations between all the particles remain constant throughout the object's motion, so that, for example, parts of its mass are not flying off. In a realistic sense, all things are deformable; however, this effect is minimal and negligible. Thus the rotation of a rigid body over a fixed axis is referred to as rotational motion. In the example illustrated to the right, a particle on object P lies at a fixed distance r from the origin, O, rotating counterclockwise. It then becomes important to represent the position of particle P in terms of its polar coordinates (r, θ). In this particular example, the value of θ is changing, while the value of the radius remains the same. (In rectangular coordinates (x, y) both x and y vary with time.) As the particle moves along the circle, it travels an arc length s, which is related to the angular position through the relationship s = rθ.
Measurements of angular displacement
Angular displacement may be measured in radians or degrees. Using radians provides a very simple relationship between the distance traveled around the circle and the distance r from the centre. For example, if an object rotates 360 degrees around a circle of radius r, the angular displacement is given by the distance traveled around the circumference, which is 2πr, divided by the radius: θ = 2πr / r, which easily simplifies to θ = 2π. Therefore 1 revolution is 2π radians. When the object travels from point P to point Q, as it does in the illustration to the left, the radius of the circle sweeps through a change in angle Δθ = θ2 − θ1, which equals the angular displacement.
In three dimensions, angular displacement is an entity with a direction and a magnitude. The direction specifies the axis of rotation, which always exists by virtue of Euler's rotation theorem; the magnitude specifies the rotation in radians about that axis (using the right-hand rule to determine direction). Given that any frame in space can be described by a rotation matrix, the displacement among frames can also be described by a rotation matrix. Given two rotation matrices A0 and Af describing the initial and final orientations, the angular displacement matrix between them can be obtained as ΔA = Af A0⁻¹.
Reference: Kleppner, Daniel; Kolenkow, Robert (1973). An Introduction to Mechanics. McGraw-Hill. pp. 288–89.
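To make the relations above concrete (s = rθ and the rotation-matrix form of an angular displacement), here is a small Python sketch; the numerical values are arbitrary examples, not taken from the article.

```python
import numpy as np

# Arc length s = r * theta: a point at r = 2 m swept through a quarter turn.
r = 2.0
theta = np.pi / 2
print("arc length:", r * theta)            # about 3.14 m

# Angular displacement between two orientations given as rotation matrices.
def rot_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

A0 = rot_z(0.3)                            # initial orientation
Af = rot_z(1.1)                            # final orientation
dA = Af @ np.linalg.inv(A0)                # displacement matrix between the frames
print("angular displacement (rad):", np.arctan2(dA[1, 0], dA[0, 0]))  # 0.8
```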
Play and Learn Set: Product Design
This multifunctional didactic set is a good support for teaching, physical activities and play. It inspires teachers and students to come up with new ways of using it. The Play and Learn Set is a response to the needs of contemporary pedagogics. It is intended for the lower primary school grades. It consists of boards that come in several shapes, thick and thin sticks, and spheres. The spheres have holes along the three main axes plus another four, and can be combined with the thin sticks to create almost all geometric shapes and structures. The thin sticks can be bent and "memorise" the bent position. The ends of thick sticks have tongues and grooves so they can be connected to one another, or to thin sticks and spheres. Although they can bend, they return to their original form. One side of the boards features holes where sticks can fit, while the other is painted with blackboard paint. They are strong and insulating so they can be used as floor mats for sitting. The combinations of these elements can be used for teaching (learning letters, syllables, addition and subtraction, geometric figures and solids...), as a medium for developing creativity (combinatorics, construction, creating games and new ways to use the set) and as an incentive for physical activity (motor skills, patience, team spirit...). The options provided by the set will grow with the creativity of its users.
Each year in the U.S., Sleep Awareness Week occurs in the first week of March, when Daylight Saving Time begins and most Americans lose an hour of sleep. The observance is a national public education and awareness campaign to promote the importance of sleep to one’s physical health, mental health and overall well-being. With lots to do throughout a busy day, it can be tempting to cut corners on sleep, but doing so can have damaging effects on many aspects of your life. With between 50 and 70 million Americans suffering from some sort of sleep disorder or occasional sleeping problem, it’s clear that a lack of quality sleep is a major public health issue. However, many people don’t realize just how important sleep is to their health. While we sleep, our bodies secrete hormones that positively affect our mood, energy, memory, concentration, and immune functions. So it’s important to get an adequate amount of sleep each night in order to maintain your health. Getting adequate sleep provides benefits such as:
Less stress – Without enough rest, the body functions on high alert. Increased blood pressure and the production of stress hormones can make it harder to fall asleep and recharge the next night. Good sleep enables you to manage your stress levels better.
Daytime alertness – With enough rest, you’ll have higher levels of energy and mental acuity for performing complex mental and physical tasks. Sleep-deprived people cannot focus well on tasks. Sleep helps repair cells damaged by stress, fatigue and muscle strain. It improves concentration and memory function.
Better mental health – Getting enough sleep helps regulate levels of serotonin, a neurotransmitter that affects our mood. Low levels of serotonin can lead to depression and other behavioral health disorders.
Weight control – Lack of sleep adversely affects levels of hormones that regulate our appetite. This can contribute to being overweight or obese.
A healthier heart – Blood pressure and cholesterol levels are higher when you’re sleep-deprived, and these are risk factors for heart disease and stroke.
A safer day – With enough sleep, you can better avoid auto and workplace accidents caused by drowsiness. Testing has shown that with a driving simulator or a hand-eye coordination task, sleep-deprived people perform just as badly as intoxicated people.
A stronger immune system – Adequate sleep helps your body respond to infection, which can enable you to avoid colds, flu, and other viral and bacterial infections.
An electroplater, sometimes referred to as a plater, is responsible for coating items with metal using the process of electroplating. Electroplating involves the application of a thin layer of metal solution to an object, using an electric current to affix the metal particles to the item’s surface. The process involves setting up and operating coating machines, which cover plastic and metal objects with metals, such as chromium, zinc, and copper, for the purposes of either protecting the items or decorating their surfaces. Industries that use electroplated parts include automotive parts, jewelry, and appliances, as well as electronic components and metal furniture. The work activities of an electroplater involve every part of the electroplating process from start to finish. They begin the task by checking work orders and noting specifications regarding the plating and amount of current needed. Items to be electroplated are then put through a cleansing bath and measured. The next step is to prepare and apply the solution to the objects, and later follow the cool down period with quality assessment. Electroplater duties involve other activities, aside from the actual electroplating process. Tasks include inspecting equipment and materials, in addition to monitoring the work process to identify and assess problems. These workers must maintain documents and records, as well as evaluate information to ensure compliance with laws and standards. An electroplater’s job description includes estimating characteristics of products, such as sizes, distances, and quantities, to anticipate the costs and resources necessary to perform a task. The work also involves educational activities, as electroplaters develop training programs along with coaching others to improve their skills. The requirements to become an electroplater involve education, as well as certain physical abilities. A high school diploma is needed, preferably one which has included courses in math, physics, and chemistry, along with blueprint reading and metal shop. Manipulative skills, such as manual dexterity, arm-hand steadiness, and multi-limb coordination, are needed for the safe and efficient electroplating process. Other physical abilities include excellent vision and good auditory attention to ensure accuracy and precision, along with general strength and stamina for coping with the rigors of the job. Visual color discrimination is also needed to notice differences in color shades and brightness, and depth perception is required to judge distances. Other electroplater requirements involve mental skills and personality traits. Written and oral speech comprehension and expression skills are needed for understanding instructions, maintaining records, and sharing information with others. Inductive and deductive reasoning skills are required for task performance, as well as for anticipating and solving problems. Adaptability is necessary to deal with the substantial variety in the workplace, and independence is required to work without supervision. Electroplater jobs also necessitate being detail oriented, as much care and thoroughness is required for accuracy.
In the Standard Model of particle physics, the Higgs mechanism is essential to explain the generation mechanism of the property "mass" for gauge bosons. Without the Higgs mechanism, all bosons (one of the two classes of particles, the other being fermions) would be considered massless, but measurements show that the W+, W−, and Z0 bosons actually have relatively large masses of around 80 GeV/c2. The Higgs field resolves this conundrum. The simplest description of the mechanism adds a quantum field (the Higgs field) which permeates all of space to the Standard Model. Below some extremely high temperature, the field causes spontaneous symmetry breaking during interactions. The breaking of symmetry triggers the Higgs mechanism, causing the bosons it interacts with to have mass. In the Standard Model, the phrase "Higgs mechanism" refers specifically to the generation of masses for the W±, and Z weak gauge bosons through electroweak symmetry breaking. The Large Hadron Collider at CERN announced results consistent with the Higgs particle on 14 March 2013, making it extremely likely that the field, or one like it, exists, and explaining how the Higgs mechanism takes place in nature. The view of the Higgs mechanism as involving spontaneous symmetry breaking of a gauge symmetry is technically incorrect since by Elitzur's theorem gauge symmetries can never be spontaneously broken. Rather, the Fröhlich–Morchio–Strocchi mechanism reformulates the Higgs mechanism in an entirely gauge invariant way, generally leading to the same results. The mechanism was proposed in 1962 by Philip Warren Anderson, following work in the late 1950s on symmetry breaking in superconductivity and a 1960 paper by Yoichiro Nambu that discussed its application within particle physics. A theory able to finally explain mass generation without "breaking" gauge theory was published almost simultaneously by three independent groups in 1964: by Robert Brout and François Englert; by Peter Higgs; and by Gerald Guralnik, C. R. Hagen, and Tom Kibble. The Higgs mechanism is therefore also called the Brout–Englert–Higgs mechanism, or Englert–Brout–Higgs–Guralnik–Hagen–Kibble mechanism, Anderson–Higgs mechanism, Anderson–Higgs–Kibble mechanism, Higgs–Kibble mechanism by Abdus Salam and ABEGHHK'tH mechanism (for Anderson, Brout, Englert, Guralnik, Hagen, Higgs, Kibble, and 't Hooft) by Peter Higgs. The Higgs mechanism in electrodynamics was also discovered independently by Eberly and Reiss in reverse as the "gauge" Dirac field mass gain due to the artificially displaced electromagnetic field as a Higgs field. On 8 October 2013, following the discovery at CERN's Large Hadron Collider of a new particle that appeared to be the long-sought Higgs boson predicted by the theory, it was announced that Peter Higgs and François Englert had been awarded the 2013 Nobel Prize in Physics.[a] In the Standard Model, at temperatures high enough that electroweak symmetry is unbroken, all elementary particles are massless. At a critical temperature, the Higgs field develops a vacuum expectation value; the symmetry is spontaneously broken by tachyon condensation, and the W and Z bosons acquire masses (also called "electroweak symmetry breaking", or EWSB). In the history of the universe, this is believed to have happened about a picosecond (10−12 s) after the hot big bang, when the universe was at a temperature 159.5 ± 1.5 GeV. In the standard model, the Higgs field is an SU(2) doublet (i.e. 
the standard representation with two complex components called isospin), which is a scalar under Lorentz transformations. Its electric charge is zero; its weak isospin is 1/2 and the third component of weak isospin is −1/2; and its weak hypercharge (the charge for the U(1) gauge group defined up to an arbitrary multiplicative constant) is 1. Under U(1) rotations, it is multiplied by a phase, which thus mixes the real and imaginary parts of the complex spinor into each other, combining to the standard two-component complex representation of the group U(2).
The Higgs field, through the interactions specified (summarized, represented, or even simulated) by its potential, induces spontaneous breaking of three out of the four generators ("directions") of the gauge group U(2). This is often written as SU(2)L × U(1)Y (which is strictly speaking only the same on the level of infinitesimal symmetries), because the diagonal phase factor also acts on other fields – quarks in particular. Three out of its four components would ordinarily resolve as Goldstone bosons, if they were not coupled to gauge fields. However, after symmetry breaking, these three of the four degrees of freedom in the Higgs field mix with the three W and Z bosons (W+, W− and Z0), and are only observable as components of these weak bosons, which are made massive by their inclusion; only the single remaining degree of freedom becomes a new scalar particle: the Higgs boson. The components that do not mix with Goldstone bosons form a massless photon.
The gauge group of the electroweak part of the standard model is SU(2)L × U(1)Y. The group SU(2) is the group of all 2-by-2 unitary matrices with unit determinant; all the orthonormal changes of coordinates in a complex two-dimensional vector space. Rotating the coordinates so that the second basis vector points in the direction of the Higgs boson makes the vacuum expectation value of H the spinor (0, v). The generators for rotations about the x, y, and z axes are given by half the Pauli matrices σx, σy, and σz, so that a rotation of angle θ about the z-axis takes the vacuum to (0, v e^(−iθ/2)). While the Tx and Ty generators mix up the top and bottom components of the spinor, the Tz rotations only multiply each by opposite phases. This phase can be undone by a U(1) rotation of angle 1/2θ. Consequently, under both an SU(2) Tz-rotation and a U(1) rotation by an amount 1/2θ, the vacuum is invariant. This combination of generators, Q = T3 + 1/2 YW, defines the unbroken part of the gauge group, where Q is the electric charge, T3 is the generator of rotations around the 3-axis in the SU(2) and YW is the weak hypercharge generator of the U(1). This combination of generators (a 3 rotation in the SU(2) and a simultaneous U(1) rotation by half the angle) preserves the vacuum, and defines the unbroken gauge group in the standard model, namely the electric charge group. The part of the gauge field in this direction stays massless, and amounts to the physical photon. By contrast, the broken trace-orthogonal charge couples to the massive Z0 boson.
In spite of the introduction of spontaneous symmetry breaking, the mass terms preclude chiral gauge invariance. For these fields, the mass terms should always be replaced by a gauge-invariant "Higgs" mechanism.
One possibility is some kind of Yukawa coupling (see below) between the fermion field ψ and the Higgs field Φ, with unknown couplings Gψ, which after symmetry breaking (more precisely: after expansion of the Lagrange density around a suitable ground state) again results in the original mass terms, which are now, however (i.e., by introduction of the Higgs field) written in a gauge-invariant way. The Lagrange density for the Yukawa interaction of a fermion field ψ and the Higgs field Φ is L(ψ, A, Φ) = ψ̄ γμ Dμ ψ + Gψ ψ̄ Φ ψ, where again the gauge field A only enters via the gauge covariant derivative operator Dμ (i.e., it is only indirectly visible). The quantities γμ are the Dirac matrices, and Gψ is the already-mentioned Yukawa coupling parameter for ψ. Now the mass-generation follows the same principle as above, namely from the existence of a finite vacuum expectation value ⟨Φ⟩ = (0, v) of the Higgs field. Again, this is crucial for the existence of the property mass.
Spontaneous symmetry breaking offered a framework to introduce bosons into relativistic quantum field theories. However, according to Goldstone's theorem, these bosons should be massless. The only observed particles which could be approximately interpreted as Goldstone bosons were the pions, which Yoichiro Nambu related to chiral symmetry breaking. A similar problem arises with Yang–Mills theory (also known as non-abelian gauge theory), which predicts massless spin-1 gauge bosons. Massless weakly-interacting gauge bosons lead to long-range forces, which are only observed for electromagnetism and the corresponding massless photon. Gauge theories of the weak force needed a way to describe massive gauge bosons in order to be consistent. That breaking gauge symmetries did not lead to massless particles was observed in 1961 by Julian Schwinger, but he did not demonstrate massive particles would eventuate. This was done in Philip Warren Anderson's 1962 paper, but only in non-relativistic field theory; it also discussed consequences for particle physics but did not work out an explicit relativistic model. The relativistic model was developed in 1964 by three independent groups: Robert Brout and François Englert; Peter Higgs; and Gerald Guralnik, C. R. Hagen, and Tom Kibble. Slightly later, in 1965, but independently from the other publications, the mechanism was also proposed by Alexander Migdal and Alexander Polyakov, at that time Soviet undergraduate students. However, their paper was delayed by the editorial office of JETP, and was published late, in 1966. The mechanism is closely analogous to phenomena previously discovered by Yoichiro Nambu involving the "vacuum structure" of quantum fields in superconductivity. A similar but distinct effect (involving an affine realization of what is now recognized as the Higgs field), known as the Stueckelberg mechanism, had previously been studied by Ernst Stueckelberg. These physicists discovered that when a gauge theory is combined with an additional field that spontaneously breaks the symmetry group, the gauge bosons can consistently acquire a nonzero mass. In spite of the large values involved (see below) this permits a gauge theory description of the weak force, which was independently developed by Steven Weinberg and Abdus Salam in 1967. Higgs's original article presenting the model was rejected by Physics Letters. When revising the article before resubmitting it to Physical Review Letters, he added a sentence at the end, mentioning that it implies the existence of one or more new, massive scalar bosons, which do not form complete representations of the symmetry group; these are the Higgs bosons.
The three papers by Brout and Englert; Higgs; and Guralnik, Hagen, and Kibble were each recognized as "milestone letters" by Physical Review Letters in 2008. While each of these seminal papers took similar approaches, the contributions and differences among the 1964 PRL symmetry-breaking papers are noteworthy. All six physicists were jointly awarded the 2010 J. J. Sakurai Prize for Theoretical Particle Physics for this work. Benjamin W. Lee is often credited with first naming the "Higgs-like" mechanism, although there is debate about when this first occurred. One of the first times the Higgs name appeared in print was in 1972, when Gerardus 't Hooft and Martinus J. G. Veltman referred to it as the "Higgs–Kibble mechanism" in the work for which they would later share the Nobel Prize. The Higgs mechanism was proposed as a result of theories developed to explain observations in superconductivity. A superconductor does not allow penetration by external magnetic fields (the Meissner effect). This strange observation implies that the electromagnetic field somehow becomes short-ranged during this phenomenon. Successful theories arose to explain this during the 1950s: first phenomenologically, in terms of a charged bosonic order parameter (Ginzburg–Landau theory, 1950), and then microscopically, in terms of paired fermions (BCS theory, 1957). In these theories, superconductivity is interpreted as arising from a charged condensate. Initially, the condensate value does not have any preferred direction, implying it is scalar, but its phase is capable of defining a gauge in gauge-based field theories. To do this, the field must be charged. A charged scalar field must also be complex (or, described another way, it contains at least two components and a symmetry capable of rotating each into the other(s)). In naïve gauge theory, a gauge transformation of a condensate simply rotates the phase. But in these circumstances, it instead fixes a preferred choice of phase. It turns out, however, that fixing the choice of gauge so that the condensate has the same phase everywhere also causes the electromagnetic field to gain an extra term. This extra term causes the electromagnetic field to become short-ranged. (Goldstone's theorem also plays a role in such theories. The connection is that, technically, when a condensate breaks a symmetry, the state reached by acting with a symmetry generator on the condensate has the same energy as before. This means that some kinds of oscillation do not involve a change of energy, and oscillations with unchanged energy imply that the excitations (particles) associated with the oscillation are massless.) Once attention was drawn to this theory within particle physics, the parallels were clear. A change of the usually long-range electromagnetic field into a short-ranged one, within a gauge-invariant theory, was exactly the effect needed for the weak-force bosons (because a long-range force has massless gauge bosons, and a short-ranged force implies massive gauge bosons, suggesting that a result of this interaction is that the field's gauge bosons acquire mass, or a similar and equivalent effect). The features of a field required to do this were also quite well defined: it would have to be a charged scalar field, with at least two components, and complex in order to support a symmetry able to rotate these into each other. The Higgs mechanism occurs whenever a charged field has a vacuum expectation value. In the non-relativistic context this is a superconductor, more formally known as the Landau model of a charged Bose–Einstein condensate.
In the relativistic case, the condensate is a scalar field that is relativistically invariant. The Higgs mechanism is a type of superconductivity which occurs in the vacuum. It occurs when all of space is filled with a sea of particles which are charged or, in field language, when a charged field has a nonzero vacuum expectation value. Interaction with the quantum fluid filling space prevents certain forces from propagating over long distances (as happens inside a superconductor; e.g., in the Ginzburg–Landau theory). A superconductor expels all magnetic fields from its interior, a phenomenon known as the Meissner effect. This was mysterious for a long time, because it implies that electromagnetic forces somehow become short-range inside the superconductor. Contrast this with the behavior of an ordinary metal. In a metal, the conductivity shields electric fields by rearranging charges on the surface until the total field cancels in the interior. But magnetic fields can penetrate to any distance, and if a magnetic monopole (an isolated magnetic pole) is surrounded by a metal, the field can escape without collimating into a string. In a superconductor, however, electric charges move with no dissipation, and this allows for permanent surface currents, not just surface charges. When magnetic fields are introduced at the boundary of a superconductor, they produce surface currents which exactly neutralize them. The Meissner effect arises due to currents in a thin surface layer, whose thickness can be calculated from the simple model of Ginzburg–Landau theory, which treats superconductivity as a charged Bose–Einstein condensate. Suppose that a superconductor contains bosons with charge q. The wavefunction of the bosons can be described by introducing a quantum field ψ(x), which obeys the Schrödinger equation as a field equation. In units where the reduced Planck constant ħ is set to 1,

\[
i\,\frac{\partial \psi}{\partial t} \;=\; -\,\frac{\left(\nabla - i q A\right)^{2}}{2m}\,\psi .
\]

The operator ψ(x) annihilates a boson at the point x, while its adjoint ψ†(x) creates a new boson at the same point. The wavefunction of the Bose–Einstein condensate is then the expectation value ψ(x) = ⟨ψ(x)⟩, which is a classical function that obeys the same equation. The interpretation of the expectation value is that it is the phase that one should give to a newly created boson so that it will coherently superpose with all the other bosons already in the condensate. When there is a charged condensate, the electromagnetic interactions are screened. To see this, consider the effect of a gauge transformation on the field. A gauge transformation rotates the phase of the condensate by an amount α(x) which changes from point to point, and shifts the vector potential by a gradient:

\[
\psi \;\to\; e^{\,iq\alpha(x)}\,\psi , \qquad A \;\to\; A + \nabla\alpha .
\]

When there is no condensate, this transformation only changes the definition of the phase of ψ at every point. But when there is a condensate, the phase of the condensate defines a preferred choice of phase. The condensate wavefunction can be written as ψ(x) = ρ(x) e^{iθ(x)}, where ρ is a real amplitude which determines the local density of the condensate. If the condensate were neutral, the flow would be along the gradients of θ, the direction in which the phase of the Schrödinger field changes. If the phase θ changes slowly, the flow is slow and has very little energy. But now θ can be made equal to zero just by making a gauge transformation to rotate the phase of the field.
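A quick check (a sketch; the sign of the charge in the exponent is a convention) of why the combination ∇ − iqA appears: under the gauge transformation above, the covariant gradient of the field picks up only the same overall phase as the field itself,

\[
\bigl(\nabla - iq(A+\nabla\alpha)\bigr)\bigl(e^{iq\alpha}\psi\bigr)
= e^{iq\alpha}\bigl(\nabla\psi + iq(\nabla\alpha)\psi - iqA\psi - iq(\nabla\alpha)\psi\bigr)
= e^{iq\alpha}\,(\nabla - iqA)\psi ,
\]

so the gauge-dependent pieces cancel, and any energy built from |(∇ − iqA)ψ|² is unchanged by the transformation.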
The energy of slow changes of phase can be calculated from the Schrödinger kinetic energy; taking the density of the condensate ρ to be constant, the energy density is

\[
E \;=\; \frac{\rho}{2m}\,(\nabla\theta)^{2}.
\]

Fixing the choice of gauge so that the condensate has the same phase everywhere, the electromagnetic field energy has an extra term,

\[
\frac{q^{2}\rho}{2m}\,A^{2}.
\]

When this term is present, electromagnetic interactions become short-ranged. Every field mode, no matter how long the wavelength, oscillates with a nonzero frequency. The lowest frequency can be read off from the energy of a long-wavelength A mode,

\[
E \;=\; \frac{1}{2}\,\dot{A}^{2} \;+\; \frac{q^{2}\rho}{2m}\,A^{2},
\]

which is a harmonic oscillator with frequency √(q²ρ/m). The quantity ρ = |ψ|² is the density of the condensate of superconducting particles. In an actual superconductor, the charged particles are electrons, which are fermions, not bosons. So in order to have superconductivity, the electrons need to somehow bind into Cooper pairs; the charge of the condensate q is therefore twice the electron charge, −e. The pairing in a normal superconductor is due to lattice vibrations and is in fact very weak; this means that the pairs are very loosely bound. The description of a Bose–Einstein condensate of loosely bound pairs is actually more difficult than the description of a condensate of elementary particles, and was only worked out in 1957 by John Bardeen, Leon Cooper, and John Robert Schrieffer in the famous BCS theory. Gauge invariance means that certain transformations of the gauge field do not change the energy at all. If an arbitrary gradient is added to A, the energy of the field is exactly the same. This makes it difficult to add a mass term, because a mass term tends to push the field toward the value zero. But the zero value of the vector potential is not a gauge-invariant idea: what is zero in one gauge is nonzero in another. So in order to give mass to a gauge theory, the gauge invariance must be broken by a condensate. The condensate will then define a preferred phase, and the phase of the condensate will define the zero value of the field in a gauge-invariant way. The gauge-invariant definition is that a gauge field is zero when the phase change along any path from parallel transport is equal to the phase difference in the condensate wavefunction. The condensate value is described by a quantum field with an expectation value, just as in the Ginzburg–Landau model. In order for the phase of the vacuum to define a gauge, the field must have a phase (this is also referred to as the field "being charged"). In order for a scalar field φ to have a phase, it must be complex or, equivalently, it should contain two fields with a symmetry which rotates them into each other. The vector potential changes the phase of the quanta produced by the field when they move from point to point. In terms of fields, it defines how much to rotate the real and imaginary parts of the fields into each other when comparing field values at nearby points. The only renormalizable model where a complex scalar field φ acquires a nonzero value is the Mexican-hat model, where the field energy has a minimum away from zero. The action for this model is

\[
S(\varphi) \;=\; \int \Bigl( \tfrac{1}{2}\,|\partial \varphi|^{2} \;-\; \lambda\,\bigl(|\varphi|^{2} - \Phi^{2}\bigr)^{2} \Bigr)\, d^{4}x ,
\]

which results in the Hamiltonian density

\[
H \;=\; \tfrac{1}{2}\,|\dot{\varphi}|^{2} \;+\; \tfrac{1}{2}\,|\nabla \varphi|^{2} \;+\; V(|\varphi|).
\]

The first term is the kinetic energy of the field. The second term is the extra potential energy when the field varies from point to point. The third term is the potential energy when the field has any given magnitude. This potential energy, the Higgs potential, has a graph which looks like a Mexican hat, which gives the model its name. In particular, the minimum energy value is not at φ = 0, but on the circle of points where the magnitude of φ is Φ.
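To connect this with the relativistic language used below (a sketch, with ħ = c = 1, volume factors suppressed, and conventions as above), the same extra term can be read as a photon mass term: each long-wavelength mode behaves as

\[
E \;=\; \tfrac12\,\dot{A}^{2} + \tfrac12\,\omega^{2} A^{2},
\qquad \omega^{2} = \frac{q^{2}\rho}{m},
\]

i.e. as a field whose quanta have mass mA = ω. For a real superconductor the charge carriers are Cooper pairs, so q = −2e and ρ is the pair density, and the inverse of this frequency sets the thickness of the surface layer into which magnetic fields can penetrate (the screening length behind the Meissner effect mentioned above).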
When the field φ(x) is not coupled to electromagnetism, the Mexican-hat potential has flat directions. Starting in any one of the circle of vacua and changing the phase of the field from point to point costs very little energy. Mathematically, if

\[
\varphi(x) \;=\; \Phi\, e^{\,i\theta(x)}
\]

with a constant prefactor Φ, then the action for the field θ(x), i.e., the "phase" of the Higgs field φ(x), has only derivative terms. This is not a surprise: adding a constant to θ(x) is a symmetry of the original theory, so different values of θ(x) cannot have different energies. This is an example of Goldstone's theorem: spontaneously broken continuous symmetries (normally) produce massless excitations. The Abelian Higgs model is the Mexican-hat model coupled to electromagnetism. Its action can be written as

\[
S(\varphi, A) \;=\; \int \Bigl( -\tfrac{1}{4} F^{\mu\nu} F_{\mu\nu} \;+\; \tfrac{1}{2}\,|D_{\mu}\varphi|^{2} \;-\; V(\varphi) \Bigr)\, d^{4}x ,
\]

where the potential is V(φ) = λ(|φ|² − Φ²)² and the covariant derivative is Dμφ = (∂μ − iqAμ)φ. For completeness, the tensor Fμν = ∂μAν − ∂νAμ is the Maxwell tensor, also known as the electromagnetic field strength, field strength or, more geometrically, the curvature of the connection; the four-vector gauge field Aμ is also known as the four-potential. This makes the gauge invariance of the action (and therefore of the Lagrangian and the resulting equations of motion) manifest, and the potential makes the nonzero vacuum expectation value evident. The classical vacuum is again at the minimum of the potential, where the magnitude of the complex field φ is equal to Φ. But now the phase of the field is arbitrary, because gauge transformations change it. This means that the phase field can be set to zero by a gauge transformation and does not represent any actual degrees of freedom at all. Furthermore, choosing a gauge where the phase of the vacuum is fixed, the potential energy for fluctuations of the vector field is nonzero. So in the Abelian Higgs model, the gauge field acquires a mass. To calculate the magnitude of the mass, consider a constant value of the vector potential A in the x-direction in the gauge where the condensate has constant phase. This is the same as a sinusoidally varying condensate in the gauge where the vector potential is zero. In the gauge where A is zero, the potential energy density in the condensate is the scalar gradient energy

\[
E \;=\; \tfrac{1}{2}\,\bigl|\nabla \bigl(\Phi\, e^{\,iqAx}\bigr)\bigr|^{2} \;=\; \tfrac{1}{2}\, q^{2} \Phi^{2} A^{2},
\]

and this energy is the same as a mass term 1/2 m²A² where m = qΦ. Lagrangian in explicit symmetry-broken form: start from the Lagrangian above. Guided by the minimum of the potential being at |φ| = Φ, we write the complex scalar field in terms of two real scalar fields h(x) and θ(x) as φ(x) = (Φ + h(x)) e^{iθ(x)} (normalisation conventions vary). The field θ is known as the Nambu–Goldstone field, and the field h is known as the Higgs boson. Upon rewriting the Lagrangian in terms of h and θ, one finds that the only term which contains θ is the one containing the combination (∂μθ − qAμ). But the dependence on θ can be gauged away by the gauge transformation which sends θ → 0 (shifting Aμ by the corresponding gradient of θ/q). This is known as the unitary, or unitarity, gauge. In differential-geometric language, as is spelled out below, the condensate has defined a canonical trivialization.
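As a worked step (a sketch; the numerical factor depends on whether the scalar kinetic term is normalised as ½|Dφ|², as above, or as |Dφ|²), one can read the vector mass directly from the covariant kinetic term evaluated on the constant vacuum φ = Φ:

\[
\tfrac{1}{2}\,\bigl| D_{\mu}\varphi \bigr|^{2}\Big|_{\varphi = \Phi}
= \tfrac{1}{2}\,\bigl| -iq A_{\mu}\,\Phi \bigr|^{2}
= \tfrac{1}{2}\, q^{2}\Phi^{2}\, A_{\mu}A^{\mu},
\]

which is exactly a Proca mass term ½m²AμAμ with m = qΦ, in agreement with the gauge-fixed computation above. (With the other normalisation the same steps give m = √2 qΦ.)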
In unitary gauge, the Lagrangian can be organised into parts which depend on the gauge field and on the Higgs field, or into quadratic and interaction pieces. Focusing on the quadratic piece, one sees that the gauge field has acquired a Proca mass m = qΦ, while the Higgs field h has acquired a mass set by the curvature of the potential at its minimum, of order √λ Φ (the precise factor depends on the chosen normalisation; a worked form is given below). This method largely carries over to the case where the gauge symmetry is promoted to a non-abelian gauge group G; the Nambu–Goldstone field θ is then promoted to a 𝔤-valued field, where 𝔤 is the Lie algebra of G. Spontaneous symmetry breaking and trivializations: a more mathematical, or specifically differential-geometric, viewpoint is that the Higgs field picks out a canonical trivialization which breaks the right-invariance of the principal bundle that the gauge theory lives on. This is realized most easily when the theory is based on flat spacetime, as then the base spacetime is contractible and hence any fibre bundle is trivial. In gauge theory one considers principal bundles with spacetime as the base manifold, where the fibre is a torsor of the gauge group G. Crucially, since the principal bundle here must be trivial, there exists a global trivialization. In physics, one generally works under an implicit global trivialization and rarely in the more abstract principal bundle. However, there are many choices of global trivialization, which differ from one another by a transition function, which can be written as a function g from spacetime into G. From the physical viewpoint, this is known as a gauge transformation. There is a corresponding (choice of) transition function or gauge transformation at the algebra level, a function θ from spacetime into the Lie algebra 𝔤 such that g = exp(θ), where exp is the exponential map for Lie algebras. We can then view the phase function θ(x) as a transition function at the algebra level. It picks out a canonical global trivialization which "differs from" the initial implicit global trivialization by exp(θ). This breaks the (right-)invariance of the principal bundle under the action of G, as this action does not preserve the canonical trivialization. Mathematically, this is the symmetry which is broken during spontaneous symmetry breaking. For the Abelian Higgs mechanism the relevant gauge group is U(1). The non-Abelian Higgs model has an action of the same form as the Abelian one, where now the non-Abelian field A is contained in the covariant derivative D and in the field-strength tensor components (the relation between A and those components is well known from Yang–Mills theory). It is exactly analogous to the Abelian Higgs model. Now the field φ is in a representation of the gauge group, and the gauge covariant derivative is defined by the rate of change of the field minus the rate of change from parallel transport using the gauge field A as a connection. Again, the expectation value of φ defines a preferred gauge where the vacuum is constant, and, fixing this gauge, fluctuations in the gauge field A come with a nonzero energy cost. Depending on the representation of the scalar field, not every gauge field acquires a mass. A simple example is the renormalizable version of an early electroweak model due to Julian Schwinger. In this model, the gauge group is SO(3) (or SU(2) – there are no spinor representations in the model), and the gauge invariance is broken down to U(1) or SO(2) at long distances. To make a consistent renormalizable version using the Higgs mechanism, introduce a scalar field which transforms as a vector (a triplet) of SO(3). If this field has a vacuum expectation value, it points in some direction in field space.
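For concreteness, here is the quadratic piece of the unitary-gauge Lagrangian (a sketch, using the conventions adopted above, namely a kinetic term ½|Dμφ|² and a potential λ(|φ|² − Φ²)²; other normalisations shift the numerical factors):

\[
\mathcal{L}_{\text{quad}}
= -\tfrac{1}{4} F_{\mu\nu}F^{\mu\nu}
+ \tfrac{1}{2}\, q^{2}\Phi^{2}\, A_{\mu}A^{\mu}
+ \tfrac{1}{2}\,(\partial_{\mu} h)(\partial^{\mu} h)
- \tfrac{1}{2}\,\bigl(8\lambda\Phi^{2}\bigr)\, h^{2},
\]

so the gauge field carries the Proca mass mA = qΦ and the Higgs boson the mass mh = 2√(2λ) Φ in this convention; the remaining terms, not shown, are cubic and quartic interactions between h and Aμ.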
Without loss of generality, one can choose the z-axis in field space to be the direction in which the field is pointing; the vacuum expectation value is then (0, 0, Ã), where Ã is a constant with dimensions of mass. Rotations around the z-axis form a U(1) subgroup of SO(3) which preserves this vacuum expectation value, and this is the unbroken gauge group. Rotations around the x- and y-axes do not preserve the vacuum, and the components of the SO(3) gauge field which generate these rotations become massive vector mesons. There are two massive W mesons in the Schwinger model, with a mass set by the mass scale Ã, and one massless U(1) gauge boson, similar to the photon. The Schwinger model predicts magnetic monopoles at the electroweak unification scale and does not predict the Z boson; it does not break the electroweak symmetry in the pattern found in nature. But historically, a model similar to this (though not using the Higgs mechanism) was the first in which the weak force and the electromagnetic force were unified. Ernst Stueckelberg discovered a version of the Higgs mechanism by analyzing the theory of quantum electrodynamics with a massive photon. Effectively, Stueckelberg's model is a limit of the regular Mexican-hat Abelian Higgs model, in which the vacuum expectation value H goes to infinity and the charge of the Higgs field goes to zero in such a way that their product stays fixed. The mass of the Higgs boson is proportional to H, so the Higgs boson becomes infinitely massive and decouples, and so drops out of the discussion. The vector meson mass, however, is equal to the product eH and stays finite. The interpretation is that when a U(1) gauge field does not require quantized charges, it is possible to keep only the angular part of the Higgs oscillations and discard the radial part. The angular part of the Higgs field, θ, has the following gauge transformation law:

\[
\theta \;\to\; \theta + e\alpha , \qquad A_{\mu} \;\to\; A_{\mu} + \partial_{\mu}\alpha .
\]

The gauge covariant derivative for the angle (which is actually gauge invariant) is Dμθ = ∂μθ − eAμ. In order to keep θ fluctuations finite and nonzero in this limit, θ should be rescaled by H, so that its kinetic term in the action stays normalized. The action for the theta field is read off from the Mexican-hat action by substituting φ = H e^{iθ}:

\[
S_{\theta} \;=\; \int \tfrac{1}{2}\, H^{2}\,\bigl(\partial_{\mu}\theta - e A_{\mu}\bigr)^{2}\, d^{4}x ,
\]

and eH is the gauge boson mass. By making a gauge transformation to set θ = 0, the gauge freedom in the action is eliminated, and the action becomes that of a massive vector field. To have arbitrarily small charges requires that the U(1) is not the circle of unit complex numbers under multiplication, but the real numbers R under addition, which differs only in the global topology. Such a U(1) group is non-compact. The field θ transforms as an affine representation of the gauge group. Among the allowed gauge groups, only non-compact U(1) admits affine representations, and the U(1) of electromagnetism is experimentally known to be compact, since charge quantization holds to extremely high accuracy. The Higgs condensate in this model has infinitesimal charge, so interactions with the Higgs boson do not violate charge conservation. The theory of quantum electrodynamics with a massive photon is still a renormalizable theory, one in which electric charge is still conserved, but magnetic monopoles are not allowed. For non-Abelian gauge theory, there is no affine limit, and the Higgs oscillations cannot be too much more massive than the vectors.
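To make the decoupling limit explicit (a sketch, keeping the conventions just used), rescale the angle so that its kinetic term stays canonical and hold the product eH fixed:

\[
\tilde{\theta} \equiv H\theta ,
\qquad
\tfrac{1}{2}\, H^{2}\bigl(\partial_{\mu}\theta - eA_{\mu}\bigr)^{2}
= \tfrac{1}{2}\bigl(\partial_{\mu}\tilde{\theta} - m A_{\mu}\bigr)^{2},
\qquad m \equiv eH .
\]

As H → ∞ with m fixed, the radial (Higgs-boson) fluctuation becomes infinitely heavy and drops out, while the remaining Lagrangian, −¼FμνFμν + ½(∂μθ̃ − mAμ)², is Stueckelberg's gauge-invariant description of a massive photon; the gauge choice θ̃ = 0 reduces it to the ordinary Proca form.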
Haptics is the science of applying the sensation of touch to human interaction with computers. Touch is one of the brain's most effective learning channels, often more effective than seeing or hearing alone, which is why this technology holds great promise as a teaching tool. With it we can now sit at a computer terminal and touch objects that exist only in the "mind" of the computer. Using special input/output devices (joysticks, data gloves, or other devices), users receive feedback from computer applications in the form of felt sensations in the hand or other parts of the body. In combination with a visual display, haptic technology can be used to train people for tasks requiring hand–eye coordination, such as surgery and spacecraft manoeuvres. In this paper we discuss the basic concepts behind haptics, the haptic devices themselves, and how these devices interact to produce the sensation of touch and force feedback. We then move on to some applications of haptic technology, and finally we conclude by mentioning a few future developments. Haptic technology, or haptics, is a tactile feedback technology that takes advantage of the sense of touch by applying forces, vibrations, or motions to the user. This physical stimulation can be used to assist in the creation of virtual objects in a computer simulation, to control such virtual objects, and to enhance the remote control of machines and devices (telerobotics). It has been described as "doing for the sense of touch what computer graphics does for vision". Haptic devices may incorporate tactile sensors that measure the forces exerted by the user on the interface. Haptic technology has also made it possible to investigate how the human sense of touch works by enabling the creation of carefully controlled haptic virtual objects. These objects are used to systematically probe human haptic capabilities, which would otherwise be difficult to study, and they contribute to our understanding of how touch and its underlying brain functions work. The word haptic, from the Greek ἅπτικός (haptikos), means pertaining to the sense of touch and comes from the Greek verb ἅπτεσθαι (haptesthai), meaning to contact or to touch.

WHAT IS HAPTICS?

Haptics is quite literally the science of touch; the origin of the word is the Greek haptikos, meaning able to grasp or perceive. Haptic sensations are created in consumer devices by actuators, or motors, which produce a vibration. Those vibrations are managed and controlled by embedded software and integrated into device user interfaces and applications via the embedded control software's APIs. You have probably experienced haptics in many of the consumer devices you use daily: the rumble effect in a console game controller and the reassuring touch vibration of a smartphone dial pad are both examples of haptic effects. In the world of mobile phones, computers, and other digital devices and controls, meaningful haptic information is still frequently limited or missing. For example, when dialling a number or entering text on a conventional touch screen without haptics, users have no sense of whether they have successfully completed the task.
With Immersion's haptic technology, users feel a vibrating force or resistance as they push a virtual button, scroll through a list, or reach the end of a menu. In a video or mobile game with haptics, users can feel the gun recoil, the engine rev, or the crack of the bat meeting the ball. When simulating the placement of cardiac pacing leads, a user can feel the forces that would be encountered when navigating the leads through a beating heart, providing a more realistic experience of performing the procedure. Haptics can enhance the user experience through:

* Improved usability: By restoring the sense of touch to otherwise flat, cold surfaces, haptics creates satisfying multi-modal experiences that improve usability by engaging touch, sight, and sound. From the confidence a user receives through touch confirmation when selecting a virtual button, to the contextual awareness received through haptics in a first-person shooter game, haptics improves usability by more fully engaging the user's senses.

* Enhanced realism: Haptics injects a feeling of realism into the user experience by stimulating the senses and allowing the user to feel the action and nuance of the application. This is particularly relevant in applications such as games or simulations that otherwise rely only on visual and audio inputs; the inclusion of tactile feedback provides additional context that translates into a sense of realism for the user.

* Restoration of mechanical feel: Today's touchscreen-driven devices lack the physical feedback that humans frequently need to fully understand the context of their interactions. By providing users with intuitive and unmistakable tactile confirmation, haptics creates a more confident user experience and can also improve safety by overcoming distractions. This is especially important when audio or visual confirmation is insufficient, as in industrial applications, or in applications that involve distractions, such as automotive navigation.

HISTORY OF HAPTICS

In the early 20th century, psychophysicists introduced the word haptics to label the subfield of their studies that dealt with human touch-based perception and manipulation. In the 1970s and 1980s, significant research efforts in a completely different field, robotics, also began to focus on manipulation and perception by touch. Initially concerned with building autonomous robots, researchers soon found that building a dexterous robotic hand was far more complex and subtle than their initial naive hopes had suggested. In time these two communities, one that sought to understand the human hand and one that aspired to create machines with dexterity inspired by human abilities, found fertile mutual interest in topics such as sensory design and processing, grasp control and manipulation, object representation and haptic information encoding, and grammars for describing physical tasks. In the early 1990s a new usage of the word haptics began to emerge. The confluence of several emerging technologies made virtualized haptics, or computer haptics, possible. Much like computer graphics, computer haptics enables the display of simulated objects to humans in an interactive manner; however, computer haptics uses a display technology through which objects can be physically palpated. Basic system configuration.
Basically, a haptic system consists of two parts, the human part and the machine part. In the figure shown above, the human part (left) senses and controls the position of the hand, while the machine part (right) applies forces to the hand to simulate contact with a virtual object. Both systems are equipped with the necessary sensors, processors, and actuators. In the human system, nerve receptors perform the sensing, the brain performs the processing, and the muscles actuate the motion of the hand; in the machine system, these functions are performed by encoders, the computer, and motors respectively. The haptic information provided by the system is a combination of (i) tactile information and (ii) kinesthetic information. Tactile information refers to the information acquired by sensors that are actually connected to the skin of the human body, with particular reference to the spatial distribution of pressure, or more generally tractions, across the contact area. For example, when we handle flexible materials like cloth and paper, we sense the pressure variation across the fingertip. Tactile sensing is also the basis of complex perceptual tasks like medical palpation, where physicians locate hidden anatomical structures and evaluate tissue properties using their hands. Kinesthetic information refers to the information acquired through the sensors in the joints. Interaction forces are normally perceived through a combination of these two kinds of information.

Creation of a virtual environment (virtual reality)

Virtual reality is the technology that allows a user to interact with a computer-simulated environment, whether that environment is a simulation of the real world or an imaginary world. Most current virtual reality environments are primarily visual experiences, displayed either on a computer screen or through special stereoscopic displays, but some simulations include additional sensory information, such as sound through speakers or headphones. Some advanced haptic systems now include tactile information, generally known as force feedback, in medical and gaming applications. Users can interact with a virtual environment or a virtual artefact (VA) either through the use of standard input devices such as a keyboard and mouse, or through multimodal devices such as a wired glove, the Polhemus boom arm, and an omnidirectional treadmill. The simulated environment can be similar to the real world, for example in simulations for pilot or combat training, or it can differ significantly from reality, as in VR games. In practice, it is currently very difficult to create a high-fidelity virtual reality experience, owing largely to technical limitations on processing power, image resolution, and communication bandwidth. However, those limitations are expected to be overcome eventually as processor, imaging, and data communication technologies become more powerful and cost-effective over time. Virtual reality is often used to describe a wide variety of applications commonly associated with immersive, highly visual 3D environments. The development of CAD software, graphics hardware acceleration, head-mounted displays, data gloves, and miniaturization have helped popularize the idea. The most successful use of virtual reality to date is in computer-generated 3-D simulators.
Pilots train on flight simulators, which are built to replicate the cockpit of an aeroplane or helicopter. The screen in front of the pilot creates the virtual environment, and the instructors outside the simulator command it to adopt different modes. Pilots are trained to control the aircraft in difficult situations and emergency landings; the simulator provides the environment, and such simulators cost millions of dollars. Virtual reality games are used in much the same fashion. The player wears special gloves, a headset, goggles, a full body suit, and special sensory input devices, so that he or she feels present in the real environment; the special goggles contain the displays, and the environment changes according to the movements of the player. These games are very expensive. Virtual reality (VR) applications attempt to simulate real or imaginary scenes with which users can interact and perceive the effects of their actions in real time. Ideally the user interacts with the simulation via all five senses; however, today's typical VR applications rely on a smaller subset, typically vision, hearing, and, more recently, touch. The figure below shows the structure of a VR application incorporating visual, auditory, and haptic feedback.

Haptic feedback block diagram

The application's main elements are: 1) the simulation engine, responsible for computing the virtual environment's behaviour over time; 2) visual, auditory, and haptic rendering algorithms, which compute the virtual environment's graphic, sound, and force responses toward the user; and 3) transducers, which convert visual, audio, and force signals from the computer into a form the operator can perceive. The human operator typically holds or wears the haptic interface device and perceives audio-visual feedback from audio displays (computer speakers, headphones, and so on) and visual displays (for example a computer screen or head-mounted display). Whereas the audio and visual channels feature unidirectional flow of information and energy (from the simulation engine toward the user), the haptic channel exchanges information and energy in both directions, from and toward the user. This bi-directionality is often referred to as the single most important feature of the haptic interaction modality. A haptic device is one that provides a physical interface between the user and the virtual environment by means of a computer. This can be done through an input/output device that senses the body's movement, such as a joystick or data glove. By using haptic devices, the user can not only feed information to the computer but can also receive information from the computer in the form of a felt sensation in some part of the body; this is referred to as a haptic interface. These devices can be broadly classified into:

a) Virtual reality / telerobotics based devices: exoskeletons and stationary devices, gloves and wearable devices, point-source and specific-task devices, and locomotion interfaces.

b) Feedback devices: force feedback devices and tactile displays.

Virtual reality / telerobotics based devices – exoskeletons and stationary devices: The word exoskeleton refers to the hard outer shell that exists on many animals. In a technical sense, the word refers to a system that covers the user or that the user has to wear.
Current haptic devices that are classified as exoskeletons are large and immobile systems to which the user must attach himself or herself.

Gloves and wearable devices: These are smaller, exoskeleton-like devices that are often, but not always, supported by a larger exoskeleton or other fixed equipment. Since the goal of building a haptic system is to immerse the user in a virtual or remote environment, it is important to remove as little of the wearer's real freedom of movement as possible. The drawback of wearable systems is that, because the weight and size of the devices are a concern, they may offer a more limited set of capabilities.

Point sources and specific-task devices: This is a class of devices designed to perform one particular task. Designing a device for a single kind of task restricts its use to a much smaller number of functions, but it allows the designer to make the device perform that task very well. These devices take two basic forms: single point of interface devices and specific-task devices. An interesting application of haptic feedback is full-body force feedback in the form of locomotion interfaces. Locomotion interfaces are force-restricting motion devices operating in a confined space, simulating unrestrained mobility such as walking and running in virtual reality. These interfaces overcome the limitations of using joysticks for manoeuvring, of whole-body motion platforms in which the user sits and does not expend energy, and of room environments in which only short distances can be traversed.

b) Feedback devices

Force feedback devices: Force feedback devices are usually, but not exclusively, connected to computers and are built to apply forces that imitate the sensation of weight and resistance in order to provide information to the user. As such, feedback devices represent a more sophisticated form of input/output device, complementing others such as keyboards, mice, or trackers. Input from the user takes the form of the position of the hand or another body segment, whereas feedback from the computer or other device takes the form of force or displacement; these devices translate digital information into physical sensations.

Tactile display devices: Simulation tasks involving active exploration or delicate manipulation of a virtual environment require the addition of feedback data that presents an object's surface geometry or texture. Such feedback is provided by tactile feedback systems, or tactile display devices. Tactile systems differ from haptic systems in the scale of the forces being generated: whereas haptic interfaces present an object's shape, weight, or compliance, tactile interfaces present the object's surface properties, such as its texture. Tactile feedback applies sensation to the skin.

c) COMMONLY USED HAPTIC INTERFACING DEVICES

Phantom: The Phantom is a haptic interfacing device developed by SensAble Technologies. It is primarily intended for providing a 3D touch to virtual objects. It is a high-resolution, six-degree-of-freedom device in which the user holds the end of a motor-controlled, jointed arm.
It provides a programmable sense of touch that allows the user to feel the texture and shape of virtual objects with a very high degree of realism, and it can model free-floating 3-dimensional objects.

Cyber glove: The principle of a cyber glove is simple. It consists in opposing the movement of the hand in the same way that an object squeezed between the fingers resists the movement of those fingers. In the absence of a real object, the glove must therefore be able to re-create the forces applied by the object on the human hand with (1) the same intensity and (2) the same direction. These two conditions can be simplified by requiring the glove to apply a torque to each interphalangeal joint. The solution chosen uses a mechanical structure with three passive joints which, together with the interphalangeal joint, make up a flat four-bar closed-link mechanism. This solution uses cables placed on the inside of the four-bar mechanism, following a trajectory similar to that of the extensor tendons which, in the body, oppose the movement of the flexor tendons in order to harmonize the movement of the fingers. Among the advantages of this structure one can cite:

• It allows 5 degrees of freedom for each finger
• It adapts to the size of the finger and sits on the back of the hand
• It can apply different forces on each phalanx (including the possibility of applying a lateral force on the fingertip by motorizing the abduction/adduction joint)
• It measures finger angular flexion (the measurements of the joint angles are independent and can achieve good resolution, given the long paths travelled by the cables when the finger closes)

Cyber glove mechanism

Mechanical structure of the cyber glove: The glove covers the five fingers and has 19 degrees of freedom, 5 of which are passive. Each finger is fitted with a passive abduction joint which links it to the base (palm) and 9 rotoid joints which, with the 3 interphalangeal joints, make up 3 closed-link mechanisms with four bars and 1 degree of freedom. The structure for the thumb is composed of only two closed links, for 3 degrees of freedom of which one is passive. The segments of the glove are made of aluminium and can withstand high loads; their total weight does not exceed 350 grams. The length of the segments is proportional to the length of the phalanxes. All of the joints are mounted on miniature ball bearings in order to reduce friction.

Fig 3.4 Mechanical structure of the cyber glove

The mechanical structure offers two essential advantages. The first is the ability to adjust to different sizes of the human hand; lateral adjustment is also provided in order to adapt the spacing between the fingers on the palm. The second advantage is the presence of mechanical stops in the structure, which offer complete safety for the operator. The force sensor is placed inside a fixed support on the upper part of the phalanx; the sensor consists of a steel strip on which a strain gauge is glued. The position sensors used to measure the cable displacement are incremental optical encoders giving an average theoretical resolution equal to 0.1 deg for the finger joints.

Control of the cyber glove: The glove is driven by 16 DC torque motors that can develop a maximum torque equal to 1.4 Nm and a continuous torque equal to 0.12 Nm.
On each motor we fix a pulley with an 8.5 mm radius onto which the cable is wound. The maximum force that the motor can apply to the cable is therefore about 13.0 N, a value sufficient to resist the movement of the finger. The electronic interface of the force-feedback data glove consists of a PC with several acquisition cards. The global structure of the control is given in the figure shown below. One can distinguish two control loops: an internal loop, which corresponds to a classic force control with constant gains, and an external loop, which integrates the model of deformation of the virtual object in contact with the fingers. In this scheme the action of the human operator on the position of the hand joints is taken into account by the two control loops: the operator is considered as a displacement generator while the glove is considered as a force generator.

Haptic rendering: Haptic rendering is the process of applying forces to the user through a force-feedback device. Using haptic rendering, we can allow a user to touch, feel, and manipulate virtual objects, and so enhance the user's experience in a virtual environment. Haptic rendering is the process of displaying synthetically generated 2D/3D haptic stimuli to the user. The haptic interface acts as a two-port system terminated on one side by the human operator and on the other side by the virtual environment. The addition of haptics to various applications of virtual reality and teleoperation opens exciting possibilities; the basic servo loop behind such rendering is sketched below.
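The following is a minimal, illustrative sketch in Python of a haptic rendering loop for a one-dimensional "virtual wall", the textbook example of rendering a stiff virtual object with a spring–damper force law. All names and numbers are assumptions made for the illustration, not part of any particular device's API: a real SDK would provide the equivalents of read_position() and command_force(), and the loop would run at roughly 1 kHz on a real-time thread.

# Minimal sketch of a haptic rendering servo loop (assumed, illustrative API).
WALL_POSITION = 0.0      # wall surface location along the probe axis (metres)
STIFFNESS     = 800.0    # virtual spring constant k (N/m), assumed value
DAMPING       = 2.0      # virtual damper b (N*s/m), assumed value
DT            = 0.001    # servo period (s), i.e. a 1 kHz haptic loop


def wall_force(position: float, velocity: float) -> float:
    """Spring-damper force law: push back only while the probe penetrates the wall."""
    penetration = WALL_POSITION - position   # > 0 means the probe is inside the wall
    if penetration <= 0.0:
        return 0.0                            # free space: no force
    return STIFFNESS * penetration - DAMPING * velocity


def haptic_loop(device, steps: int = 10_000) -> None:
    """One simulation / rendering / transducer cycle per servo tick."""
    previous = device.read_position()
    for _ in range(steps):
        position = device.read_position()                     # sense the operator's hand
        velocity = (position - previous) / DT                  # crude finite-difference velocity
        device.command_force(wall_force(position, velocity))   # actuate the force feedback
        previous = position

Higher stiffness values make the wall feel harder but, on a real device, also risk instability if the servo rate or sensor resolution is too low, which is why the damping term is normally included.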
Three example applications that have been pursued at the Touch Lab are summarized below.

• Medical simulators: Just as flight simulators are used to train pilots, the multimodal virtual environment system we have developed is being used to build virtual-reality-based needle procedures and surgical simulators that allow a medical trainee to see, touch, and manipulate realistic models of biological tissues and organs. The work involves the development of both instrumented hardware and software methods for real-time displays. An epidural injection simulator has already been tested by residents and experts in two hospitals. A minimally invasive surgery simulator is also being developed and includes (a) in vivo measurement of the mechanical properties of tissues and organs, (b) development of a variety of real-time algorithms for the computation of tool–tissue force interactions and organ deformations, and (c) verification of the training effectiveness of the simulator. This work is reviewed in …

• Collaborative haptics: In this project, the use of haptics to enhance human–computer interaction as well as human–human interactions mediated by computers is being explored. A multimodal shared virtual environment system has been developed, and experiments have been performed with human subjects to study the role of haptic feedback in collaborative tasks and whether haptic communication through force feedback can facilitate a sense of being with, and collaborating with, a remote partner. Two cases, one in which the partners are in close proximity and the other in which they are separated by several thousand miles (transatlantic touch, with collaborators at University College London), have been demonstrated.

• Brain–machine interfaces: In a collaborative project with Prof. Nicolelis of Duke University Medical School, we recently succeeded in controlling a robot in real time using signals from about 100 neurons in the motor cortex of a monkey. We demonstrated that this could be done not only with a robot at Duke, but also across the Internet with a robot in our lab. This work opens a whole new paradigm for studying the sensorimotor functions of the central nervous system. Additionally, a future application is the possibility of implanted brain–machine interfaces for paralyzed individuals to control external devices such as smart prostheses, similar to pacemakers or cochlear implants.

Given below are several more potential applications:

• Medicine: manipulating micro and macro robots for minimally invasive surgery; remote diagnosis for telemedicine; aids for the disabled, such as haptic interfaces for the blind.
• Entertainment: video games and simulators that enable the user to feel and manipulate virtual solids, fluids, tools, and avatars.
• Education: giving students the feel of phenomena at nano, macro, or astronomical scales; "what if" scenarios for non-terrestrial physics; experiencing complex data sets.
• Industry: integration of haptics into CAD systems so that a designer can freely manipulate the mechanical components of an assembly in an immersive environment.
• Visual arts: virtual art exhibits, concert halls, and museums in which the user can log in remotely to play the musical instruments and to touch and feel the haptic attributes of the exhibits; individual or co-operative virtual sculpting across the Internet.

APPLICATIONS, LIMITATIONS AND FUTURE VISION

Haptic interfaces for medical simulation may prove especially useful for training in minimally invasive procedures such as laparoscopy and interventional radiology, as well as for performing remote surgery. A particular advantage of this type of work is that the surgeon can perform many more procedures of a similar type with less fatigue; it is well documented that a surgeon who performs more procedures of a given kind will have statistically better outcomes for his patients. Haptic interfaces are also used in rehabilitation: using this technology, exercise can be simulated and used to rehabilitate a person with an injury. A Virtual Haptic Back (VHB) was successfully integrated into the curriculum at the Ohio University College of Osteopathic Medicine. Research indicates that the VHB is a significant teaching aid in palpatory diagnosis (detection of medical problems by means of touch). The VHB reproduces the contours and stiffness of human backs, which can be palpated with two haptic interfaces (SensAble Technologies, PHANToM 3.0). Haptics has also been applied in the field of prosthetics and orthotics, where research has been under way to provide essential feedback from a prosthetic limb to its wearer; several research projects funded by the US Department of Education and the National Institutes of Health have focused on this area. Recent work by Edward Colgate, Pravin Chaubey, and Allison Okamura et al. focused on investigating fundamental questions and determining effectiveness for rehabilitation. Haptic feedback is commonly employed in arcade games, especially racing video games.
In 1976, Sega's motorbike game Moto-Cross, also known as Fonz, was the first video game to use haptic feedback: it caused the handlebars to vibrate during a collision with another vehicle. Tatsumi's TX-1 introduced force feedback to car driving games in 1983. Simple haptic devices are common in the form of game controllers, joysticks, and steering wheels. Early implementations were provided through optional components, such as the Nintendo 64 controller's Rumble Pak. Many newer-generation console controllers and joysticks feature built-in feedback devices, including Sony's DualShock technology. Some automobile steering wheel controllers, for example, are programmed to provide a "feel" of the road: as the user makes a turn or accelerates, the steering wheel responds by resisting turns or slipping out of control. In 2007, Novint introduced the Falcon, the first consumer 3D touch device with high-resolution three-dimensional force feedback; this allowed the haptic simulation of objects, textures, recoil, momentum, and the physical presence of objects in games. In 2008, Apple's MacBook and MacBook Pro began incorporating a "Tactile Touchpad" design with button functionality and haptic feedback incorporated into the tracking surface; products such as the Synaptics ClickPad followed later. Windows and Mac operating environments are also likely to benefit significantly from haptic interactions: imagine being able to feel graphic buttons and receive force feedback as you depress a button. Tactile haptic feedback has become common in mobile devices. Handset manufacturers such as LG and Motorola include different types of haptic technologies in their devices; in most cases this takes the form of a vibration response to touch. The Nexus One features haptic feedback, according to its specifications. Nokia phone designers have perfected a tactile touchscreen that makes on-screen buttons behave as if they were real buttons: when a user presses a button, he or she feels movement in and movement out, and also hears an audible click. Nokia engineers accomplished this by placing two small piezoelectric sensor pads under the screen and designing the screen so that it could move slightly when pressed. Everything, movement and sound, is synchronized perfectly to simulate real button manipulation. The Shadow Hand uses the sense of touch, pressure, and position to reproduce the strength, delicacy, and complexity of the human grip. The SDRH was developed by Richard Greenhill and his team of engineers in London as part of The Shadow Project, now known as the Shadow Robot Company, an ongoing research and development programme whose goal is to complete the first convincing artificial humanoid. An early prototype can be seen in NASA's collection of humanoid robots, or robonauts. The Shadow Hand has haptic sensors embedded in every joint and finger pad, which relay information to a central computer for processing and analysis. Carnegie Mellon University in Pennsylvania and Bielefeld University in Germany found the Shadow Hand to be an invaluable tool in advancing the understanding of haptic awareness, and in 2006 they were involved in related research. The first PHANTOM, which allows one to interact with objects in virtual reality through touch, was developed by Thomas Massie while a student of Ken Salisbury at MIT.
Future applications of haptic technology cover a wide spectrum of human interaction with technology. Current research focuses on the mastery of tactile interaction with holograms and distant objects, which, if successful, may result in applications and advancements in gaming, movies, manufacturing, medicine, and other industries. The medical industry stands to gain from virtual and telepresence surgeries, which provide new options for medical care. The product retail industry could gain from haptic technology by allowing users to "feel" the texture of clothes for sale on the internet. Future advancements in haptic technology may create new industries that were previously not feasible or practical.

Future medical applications

One medical advancement currently in development is a central workstation used by surgeons to perform operations remotely. Local nursing staff set up the machine and prepare the patient, and rather than travelling to an operating room, the surgeon becomes a telepresence. This allows expert surgeons to operate from around the world, increasing the availability of expert medical care. Haptic technology provides tactile and resistance feedback to surgeons as they operate the robotic device: as the surgeon makes an incision, he or she feels ligaments as if working directly on the patient. Since 2003, researchers at Stanford University have been developing technology to simulate surgery for training purposes. Simulated operations allow surgeons and surgical students to practice and train more. Haptic technology aids the simulation by creating a realistic environment of touch. Much as in telepresence surgery, surgeons feel simulated tissue, or the pressure of a virtual incision, as if it were real. The researchers, led by J. Kenneth Salisbury Jr., professor of computer science and surgery, hope to be able to create realistic internal organs for the simulated surgeries, but Salisbury has stated that the task will be difficult. The idea behind the research is that "just as commercial pilots train in flight simulators before they're unleashed on real passengers, surgeons will be able to practice their first incisions without actually cutting anyone". According to a Boston University paper published in The Lancet, "Noise-based devices, such as randomly vibrating insoles, could also ameliorate age-related impairments in balance control." If effective, affordable haptic insoles were available, perhaps many injuries from falls in old age, or due to illness-related balance impairment, could be avoided.
The majority of the food you eat is converted by your body into sugar (glucose), which is then released into your bloodstream. For blood sugar to reach your body's cells and give them energy, insulin is necessary. Unfortunately, diabetes causes your body to produce too little insulin or to use it incorrectly. Because of low insulin levels, or because cells no longer respond to insulin, too much sugar remains in your bloodstream. This may eventually result in serious health problems like heart disease, kidney disease, and vision loss.

Blood Sugar Levels: A Quick Summary

Extremely high blood sugar levels (far more than 300 mg/dL) may, in extreme situations, result in coma. High blood sugar is defined as 180 to 250 mg/dL. Blood sugar levels of more than 250 mg/dL or less than 50 mg/dL are dangerous and require prompt medical care. To preserve their health and prevent complications, diabetics must understand their blood sugar levels. Knowing your blood sugar levels can also be beneficial if you do not yet have a diabetes diagnosis, because it may enable you to take appropriate action if diabetes has gone undetected. So what level of blood sugar is dangerous? The typical ranges given above provide a quick guide.

Is Having Low Blood Sugar Dangerous?

The body's main energy source is sugar, also referred to as glucose. The medical term for when your blood sugar levels fall below the usual range is hypoglycemia. A blood glucose test is the only way to diagnose low blood sugar, even though low blood sugar can cause a variety of symptoms. Diabetes medication is one of the most frequent causes of low blood sugar. When a person develops type 1 diabetes, their pancreas can no longer produce insulin. The use of high insulin doses or oral diabetes medications can also lead to hypoglycemia. Contrary to popular belief, low blood sugar can happen even when your body is producing more insulin than is considered healthy, and it is not just a symptom of diabetes. Alcohol consumption combined with anti-diabetic drugs can also result in hypoglycemia. When your blood sugar levels are too low, your cells run out of energy. Simple symptoms like hunger and headaches may be the first to appear, but if you do not quickly raise your blood sugar levels, you face serious risks. To avoid both hypoglycemia and high blood sugar, you need to take the right amount of insulin; if you use too much insulin, your blood sugar will drop quickly.

5 Quick Ways to Lower Blood Sugar Levels

1) Exercise

Exercise lowers blood sugar levels effectively and quickly, and your blood sugar may stay lower for up to 24 hours after you work out. This is partly because exercise makes your body more sensitive to insulin, and because the body requires glucose to function while exercising: blood sugar levels frequently drop as the working muscles absorb glucose from the blood. You should choose an exercise that raises your heart rate; for instance, cycling and brisk walking are both beneficial. Importantly, if your blood sugar is higher than 240 mg/dL, you should check your urine for ketones. If ketones are present, you should avoid activity and seek medical attention first. Home urine ketone testing kits are available. If you have type 1 diabetes, your doctor will probably urge you to check your blood sugar before working out. You can help control your blood sugar levels throughout the day by exercising.
However, certain types of exercise, particularly quick bursts of intense activity, may momentarily raise blood sugar, because physical exertion triggers the body's stress response, which releases hormones such as glucagon that raise blood glucose to fuel the muscles.

2) Intake of Carbs

The amount of carbs you eat has a big effect on your blood sugar levels, because your body turns carbohydrates into simple sugars, primarily glucose. When you consume too many carbohydrates, or your body produces too little insulin, this process falters and your blood glucose levels may rise. Keeping track of your carbohydrate intake helps you plan your meals and improves blood sugar regulation. A low-carb diet reportedly lowers blood sugar levels and helps prevent blood sugar spikes. You can still eat some carbohydrates while monitoring your blood sugar. The nutritional content of whole grains is superior to that of processed grains and refined carbohydrates, and they also lower blood sugar levels.

3) Intake of Fibre

Fibre delays the digestion of carbohydrates and the absorption of sugar, encouraging a more gradual rise in blood sugar levels. Both types of fibre are important, but soluble fibre plays the larger role in blood sugar regulation, whereas insoluble fibre does not. A diet high in fibre can therefore improve your body's ability to control blood sugar and lower blood sugar levels, and it may help you manage type 1 diabetes better. Vegetables, fruits, and legumes are foods high in fibre. The USDA recommends a daily fibre intake of 25 grammes for women and 38 grammes for men.

4) Water Intake

Getting enough water may help you keep your blood sugar levels in check. Drinking water regularly before meals may reduce fasting blood sugar levels, according to research. Keep in mind that water is one of the healthiest drinks. Drinks with added sugar should be avoided, as they can raise blood sugar levels, lead to weight gain, and increase the risk of developing diabetes.

5) Checking Blood Sugar Ranges

Routinely checking your blood sugar levels helps you control them. This can be done at home with a continuous glucose monitor (CGM). With regular tracking, you can determine whether you need to adjust your diet or medication, and you learn how your body reacts to particular foods. Regular blood sugar monitoring may also be more useful than a one-off check. If a meal causes your blood sugar to spike, a HealthifyPRO 2.0 CGM can help you decide whether to modify the meal rather than forgo it completely. Reducing portion sizes and adding more non-starchy vegetables are two examples of such adjustments.

Note by TheHealthkeet

Diabetes is a chronic condition that requires ongoing observation and prompt treatment. Diabetes patients must keep blood sugar levels within the normal range, because both hypoglycemia and hyperglycemia can be harmful. Maintaining healthy blood sugar levels requires a diet high in fibre, adequate water intake, and the right exercise programme. High blood sugar arises when your body can't use the insulin it does make or doesn't produce enough of it. Taking insulin lowers your blood sugar levels; ask your doctor how much rapid-acting insulin you should take when your blood sugar is high. Additionally, if you frequently have extreme highs or lows in your blood sugar levels, talk to a healthcare professional about adjusting your diabetes treatment plan.

Disclaimer: The sole goal of this article is to share information and raise awareness.
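For readers who like to see the ranges in one place, here is a minimal sketch that labels a reading using the rough thresholds quoted earlier in this article. It is illustrative only, not a medical tool, and the function name and labels are hypothetical.

```python
# Illustrative only: thresholds are the rough ranges quoted in the article above.
# This is not medical advice; the function name and labels are hypothetical.

def classify_reading(mg_dl: float) -> str:
    """Label a blood glucose reading (mg/dL) using the article's approximate ranges."""
    if mg_dl < 50:
        return "dangerously low - seek prompt medical care"
    if mg_dl > 300:
        return "extremely high - risk of serious complications"
    if mg_dl > 250:
        return "dangerously high - seek prompt medical care"
    if mg_dl >= 180:
        return "high"
    return "within the ranges discussed above"

for reading in (45, 110, 200, 260, 320):
    print(reading, "->", classify_reading(reading))
```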
Satellite data may soon allow scientists to predict regions that face the risk of wildfires. A study shows that the data can track wildfire risk before a fire strikes. Earlier, scientists used satellite observations to pinpoint the location of wildfires and to monitor their movement. The paper was published in the Journal of Geophysical Research: Biogeosciences (Vol 111).

By studying shrublands prone to wildfire in southern California, USA, scientists from the University of California, Santa Barbara, came up with two key indicators: plant moisture and fuel condition, a measure of the amount of living plant material in an area. The study revealed that NASA satellites could detect plant moisture and the ratio of dead to live material, providing data on potential areas at risk. Moisture levels and fuel condition, combined with weather conditions, play a major role in the development, rate of spread, and intensity of wildfires. "Wildfires tend to be extreme when live fuel moisture is low. When live fuel moisture is high and green live plant material abundant, they don't start as easily, do not spread as fast and are less hot," said lead author Dar Roberts.

The importance of moisture conditions in the development of wildfire was established long ago, but this research says that there are other significant factors as well. "Even if vegetation is extremely dry, there are other factors that influence whether a fire will develop and how quickly it spreads, including the ratio of live to dead foliage, plant type, seasonal precipitation, and weather conditions," said Roberts. "In southern California, if a strong Santa Ana (hot and dusty winds that blow westward toward coastal areas) occurs before the first major rainfall in the fall or winter, the risk for wildfire is heightened."

To verify the accuracy of the satellite data, the researchers compared it with ground-based data. "We found that the space-based data were closely linked to the field measurements, suggesting the instruments can be used to determine when conditions are favourable for wildfires," said Roberts. "Unlike in India, forests in North America have a high fuel load. A good example of a natural start common to some parts of the US is lightning. The satellite data can help restrict or defuse wildfire altogether," said Rajeev Semwal, ecologist with LEAD India.

But there are limitations as well. The satellite data worked best on landscapes with just one major plant type. "When you have mixed vegetation, it will be difficult to estimate live fuel moisture from remotely sensed data," said Roberts. But Roberts clarifies that improving the role of satellite data in wildfire prediction and monitoring is critical, since traditional field sampling is limited by high costs and by the number and frequency of sites that can be sampled.
A little competition between two rival entities always spices things up. This is true even in the level-measurement world, where the two most commonly used methods, ultrasonic level measurement and radar level measurement, compete for supremacy. The ultrasonic technique uses sound-based measurements, while the radar technique uses high-frequency electromagnetic waves. Below are some of the glaring differences between the two, both functionally and by design.

Ultrasonic Level Measurement

1. Uses sound waves: The sound waves must travel through a medium such as air, which means the transmitters are unsuitable for use in a vacuum. The sound signals require air molecules to be transmitted, and in their absence the sound will not propagate.

2. Surface factors: The sound waves are affected by surface conditions such as foam and other debris, which may distort the returning sound signals. This affects measurement accuracy.

3. Reflection and angles of incidence: The sound waves must be reflected in a straight line, which means the reflective surface must be flat. If it is a liquid, the liquid surface must be non-turbulent and undisturbed.

4. Operating temperature: The operating temperature for the sound waves must not exceed 60 degrees Celsius. In addition, the temperature of the environment should be constant to avoid inconsistencies in measurement.

5. Operating pressure: Devices that use ultrasonic technology should not be subjected to extreme pressure. Most of these devices have a maximum pressure rating of 30 psig.

6. Affected by environmental conditions: Sound waves are affected by environmental factors such as the amount of vapour, humidity, and other contaminants. These may affect the accuracy of the return signal, which in turn affects the accuracy of the measurement.

7. Non-contact measurement: Ultrasonic signals do not require a probe or other contact to travel to the surface interface and back; they are transmitted through a medium such as air.

8. Signal strength: The performance of this technique depends on the strength of the reflected signals, which are affected by the process conditions.

9. Cost: The ultrasonic technique is relatively cheap compared with radar because of its lower precision and accuracy.

10. Smaller measurement range: Ultrasonic signals have a smaller measurement range and hence a narrower scope of applications.

Guided Wave Radar

1. Uses electromagnetic waves: Guided Wave Radar (GWR) uses electromagnetic waves to determine the surface level of fluids. The EM waves travel down a probe to the surface of the fluid and are reflected back, while some of the waves continue to the base of the container.

2. Unstable process conditions: Changes in the density, acidity, or viscosity of the fluid do not affect the accuracy of the measurements.

3. Turbulent surfaces: GWR does not depend on the angle of the fluid surface to measure its level. In fact, this technique can be used to measure recirculating fluids or even fluids stirred by a propeller mixer.

4. Sticky fluids and fine powders: GWR works with all fluids and powders, including highly viscous fluids such as latex, fat, paint, titanium and others.

5. High temperature: GWR can perform in very high-temperature environments, unlike the ultrasonic technique, whose performance is limited by extreme temperatures; it works well at temperatures far above the 60 degrees Celsius limit of ultrasonic devices.

6. High pressure: The GWR technique is not affected by conditions with extremely high pressure.
It can perform well even at pressures of up to 580 psig.

7. Contact measurement: GWR signals require a contact, such as a probe, to travel to the surface interface and back to the sensor.

8. Signal strength: The performance of the GWR signals does not depend on the process conditions; the signal strength is maintained regardless of the conditions.

9. Cost: GWR has a higher cost in terms of acquisition and maintenance, owing to the excellent performance and precision of the measurements obtained.

10. Large measurement range: GWR has a wider measurement range and therefore a larger scope of application in various fields.
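To make the ultrasonic side of the comparison concrete, here is a minimal sketch of the arithmetic such a transmitter performs: it times the echo and converts the round trip into a distance, from which the level is derived. The speed-of-sound value and tank height are illustrative assumptions, not figures from any particular device, and real instruments compensate for temperature, which is one reason the 60 degrees Celsius limit matters.

```python
# Toy illustration of ultrasonic level-measurement arithmetic (not vendor-specific).
# Assumed values: speed of sound in air ~343 m/s at 20 degrees C, a 5 m tall tank.

SPEED_OF_SOUND_M_S = 343.0   # varies with temperature, one source of inaccuracy
TANK_HEIGHT_M = 5.0          # distance from the transmitter face to the tank bottom

def level_from_echo(round_trip_s: float) -> float:
    """Convert an echo's round-trip time into a liquid level (metres from the bottom)."""
    distance_to_surface = SPEED_OF_SOUND_M_S * round_trip_s / 2.0  # one-way distance
    return TANK_HEIGHT_M - distance_to_surface

print(round(level_from_echo(0.0146), 2))  # a 14.6 ms echo implies roughly 2.5 m of liquid
```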
E-cigarettes, or electronic cigarettes, are battery-operated smoking devices. They often look like cigarettes, but work differently. Using an e-cigarette is called vaping. The user puffs on the mouthpiece of a cartridge. This causes a vaporizer to heat the liquid inside the cartridge. The liquid contains nicotine, flavorings, and other chemicals. The heated liquid turns into the vapor that is inhaled. Some people think that e-cigarettes are safer than cigarettes, and that they can be used to help people quit smoking. But not much is known about the health risks of using them, or whether they do help people quit smoking. However we do know about some dangers of e-cigarettes: - They contain nicotine, which is addictive - They contain other potentially harmful chemicals - There is a link between e-cigarette use and tobacco cigarette use in teens - The liquid in e-cigarettes can cause nicotine poisoning if someone drinks, sniffs, or touches it NIH: National Institute on Drug Abuse - Beware of Vaping Products with Unproven Health Claims (Food and Drug Administration) Also in Spanish - E-cigarettes and E-hookahs (Medical Encyclopedia) Also in Spanish - E-Cigs, Menthol, and Dip (National Cancer Institute, Tobacco Control Research Branch) - Electronic Cigarettes (Centers for Disease Control and Prevention) - Quick Facts on the Risks of E-Cigarettes for Kids, Teens, and Young Adults (Centers for Disease Control and Prevention) - Risks of Vaping: A Look at Safety (National Institutes of Health) Also in Spanish - Secondhand Smoke and Electronic-Cigarette Aerosols (Environmental Protection Agency) - Vaping Devices (Electronic Cigarettes) DrugFacts (National Institute on Drug Abuse) Also in Spanish - Vaping: What Parents Should Know (Nemours Foundation) Also in Spanish - Vaping: What You Need to Know (Nemours Foundation) - ClinicalTrials.gov: Electronic Nicotine Delivery Systems (National Institutes of Health) Journal Articles References and abstracts from MEDLINE/PubMed (National Library of Medicine) - Article: Comparing the Effectiveness, Tolerability, and Acceptability of Heated Tobacco Products and... - Article: Association of Fully Branded and Standardized e-Cigarette Packaging With Interest in... - Article: Effects of an App-Based Intervention Program to Reduce Substance Use, Gambling,... - E-Cigarettes -- see more articles
Hearing aids are small, complex electronic devices that help those with hearing loss communicate better. These devices aid hearing and speech understanding, and have even been shown to improve quality of life. Hearing devices come in a wide range of types and styles, but they all share the same basic components.

Four Key Components

There are four key components in any hearing aid. Sound waves from the environment are picked up by the microphone and converted into electrical signals. The electrical signals are then amplified to increase their power and loudness. Filters and equalizers further adjust the signals to ensure that only relevant sounds (such as speech) are amplified. The processed electrical signals are sent to the receiver, also known as the speaker, which plays the sounds for the user. For the hearing aid to perform any of these complicated tasks, it must have power. Hearing aids use specialized batteries that can last anywhere from five to 14 days, depending on their size and the power requirements of your device. Newer devices have rechargeable batteries, which eliminates waste and streamlines the process.

Additional Hearing Aid Components

While the four components above appear in all hearing aids, the following are style-specific.

This plastic piece sits behind the ear and connects to the main part of the hearing aid through a wire or tube. It is found on behind-the-ear hearing aid models.

This piece is made from an impression of your ear. It sits within the ear canal and attaches to the hearing aid to keep the sound within the ear. It is found in many hearing aid styles.

This hole in the earmold lets air flow in and out of the ear to prevent a plugged-up feeling and infection.

This guard protects the hearing aid from getting blocked with earwax. The filter catches the earwax before it can damage the electronic components within the device.

These buttons on the side of the device allow the user to control the loudness of sounds. Many newer devices can instead be controlled via Bluetooth through a corresponding cellphone app.

These are just a few of the many parts of a traditional hearing aid. Understanding how the parts work together is especially helpful when you experience a problem. To learn more about hearing aids, contact the experts at Eastern Oklahoma ENT.
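As a rough illustration of the signal chain described above (microphone samples in, amplification, then output to the receiver), here is a toy sketch. The gain value and clipping limit are made-up numbers, and real hearing aids apply far more sophisticated, frequency-dependent processing than this.

```python
# Toy sketch of the amplify-and-limit step in a hearing aid's signal chain.
# Real devices use frequency-dependent gain, compression and noise filtering;
# the gain and limit below are arbitrary illustrative values.

def amplify(samples, gain=4.0, limit=1.0):
    """Boost microphone samples and clip them so the output stays within range."""
    return [max(-limit, min(limit, s * gain)) for s in samples]

mic_samples = [0.01, -0.02, 0.05, 0.30, -0.40]  # pretend microphone input
print(amplify(mic_samples))
```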
What are the terms associated with microscopy? Glossary of Terms
- Abbe Condenser – see condenser.
- Aperture – numerical; a measure of the resolving power of an objective.
- Aberration – see chromatic and spherical.
- Achromatic – term referring to the lens.
- Analyzer – see polarizer.
- Barrel Focus.
- Bertrand Lens.
- Brightfield Illumination.

What are the 4 principles of microscopy?
To use the microscope efficiently and with minimal frustration, you should understand the basic principles of microscopy: magnification, resolution, numerical aperture, illumination, and focusing.

What are the 3 rules of microscopy?
Do not touch the glass part of the lenses with your fingers. Use only special lens paper to clean the lenses. Always keep your microscope covered when not in use. Always carry a microscope with both hands.

What is the essence of microscopy?
The essence of a microscope is its ability to magnify a specimen. Total magnification of a microscope is determined by multiplying the magnification capability of the eyepiece lens by that of the objective lens. The upper part of a compound microscope holds the objective lens.

What is microscopy principle and application?
A general biological microscope mainly consists of an objective lens, ocular lens, lens tube, stage, and reflector. An object placed on the stage is magnified through the objective lens. When the target is focused, a magnified image can be observed through the ocular lens.

What are the principles of light microscopy?
Principles. The light microscope is an instrument for visualizing fine detail of an object. It does this by creating a magnified image through a series of glass lenses, which first focus a beam of light onto or through the object; convex objective lenses then enlarge the image formed.

What is the rule for touching lenses?
Do not touch lenses with your fingers, and do not leave liquids on objective lenses. After use, return the objective to low power, remove the slide, wipe the stage clean if necessary, and then put everything back.

What are the basic microscopy techniques and rules to follow?
Important general rules: Always carry the microscope with two hands: place one hand on the microscope arm and the other hand under the microscope base. Do not touch the objective lenses (i.e. the tips of the objectives). Keep the objectives in the scan position and keep the stage low when adding or removing slides.

What does microscopy mean in the medical dictionary?
Microscopy: The examination of minute objects by means of a microscope, an instrument which provides an enlarged image of an object not visible with the naked eye. Aside from the usual microscopy, there are various special types of microscopy including, for example:

What is microscope technique?
Microscopy Imaging Techniques. Microscopy imaging techniques are employed by scientists and researchers to improve their ability to view the microscopic world. Advances in microscopy enable visualization of a broad range of biological processes and features in cell structure.

What is the plural of microscope?
microscope (plural microscopes) An optical instrument used for observing small objects. Any instrument for imaging very small objects (such as an electron microscope).

What does microscopic angioscopy mean?
Microscopic Angioscopy: The noninvasive microscopic examination of the microcirculation, commonly done in the nailbed or conjunctiva.
In addition to the capillaries themselves, observations can be made of passing blood cells or intravenously injected substances.
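The magnification rule stated in the glossary above is simple enough to write out directly; the lens values in this sketch are illustrative examples, not values from any particular instrument.

```python
# Total magnification = eyepiece magnification x objective magnification,
# as stated in the glossary above. The lens values below are illustrative.

def total_magnification(eyepiece: float, objective: float) -> float:
    return eyepiece * objective

print(total_magnification(10, 40))   # a 10x ocular with a 40x objective gives 400x
```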
Any discussion about preventing suicide has to include identifying the symptoms of depression, so that parents, teachers, and other adults can recognize a depressed teenager. LGBTQ youth are two to three times more likely to attempt suicide than other young people.

Be an active listener when you speak with your child. Casually observe the moods and behavioral patterns that your child exhibits. If your child shares with you how they feel, don't lecture them or tell them their feelings are wrong. Be a parent, not a best friend, and maintain an open and loving heart and mind when you're talking about sensitive issues.

Facts About Teen Depression
- Depression begins in adolescence: the average age of onset is 14 years.
- Teen depression is common: by the end of their teen years, 20% will have had depression.
- Depression is treatable: more than 70% of teens improve with a combination of medication and therapy.
- 80% of teens with depression don't receive help.
- Untreated depression has serious consequences. It can lead to: substance abuse (24% to 50%); bullying (30% for those bullied, 19% for those doing the bullying); other disorders (e.g. eating disorders); suicide (the third leading cause of death among 10- to 24-year-olds).

Symptoms Of Depression
- Feelings of sadness for much of the time
- Indifference about the future
- Uncharacteristic pessimism
- Guilty feelings
- Low self-esteem
- Suicidal thoughts
- An irritable, sad, empty or cranky mood and a belief that life is meaningless.
- Loss of interest in sports or activities they used to enjoy, withdrawal from friends and family, pervasive trouble in relationships.
- Changes in appetite, significant weight gain or loss.
- Excessive late-night activity, too much or too little sleep, trouble getting up in the morning, often late for school.
- Physical agitation or slowness, pacing back and forth and/or excessive or repetitive behaviors.
- Loss of energy, social withdrawal, withdrawal from usual activities, or boredom.
- Making critical comments about themselves, behavior problems at school or at home, being overly sensitive to rejection.
- Poor performance in school, a drop in grades, or frequent absences.
- Frequent complaints of physical pain (headaches, stomach aches), frequent visits to the school nurse.
- Writing about death, giving away favorite belongings, comments like "You'd be better off without me."

If you recognise any of these symptoms in your teenager, please contact your family doctor. The Trevor Project Lifeline for teens is staffed 24/7, and if you or someone you know is depressed or contemplating suicide, call 866-488-7386. TeenLine is operated by teens for teens; teenagers can call 800-852-8336 or find them online. The National Suicide Prevention Lifeline number is 1-800-273-8255.

I consulted the "Families for Depression Awareness" website, the PsychCentral website, and the Teen Help website for research and information.

© 2011 - 2013 JIVEINTHE415.COM
The First Field Trip: An Introduction to Sedimentation and Stratigraphy

An example set of tasks and questions for the first field trip of a 300-level Sedimentology and Stratigraphy course. Upon completion of this trip the student will be able to:
- Recognize sedimentary rocks and describe major stratigraphic units.
- Determine the thicknesses, the nature of contacts, and the orientation of stratigraphic units.
- Recognize periods of geologic time not represented in an outcrop.
- Observe internal features (fossils, structures, etc.) of the rock units.
- Construct a complete stratigraphic section including all the units.
- Identify modern environments where these rocks are forming.
- Apply stratigraphic principles and observation of an outcrop to create some hypotheses about the environments of deposition and a depositional history of the rocks.

Context for Use
This field trip was designed for Pescadero Point beach in northern California but could be adapted to any sedimentary rock outcrop that includes at least two sedimentary units with well-exposed structures separated by an unconformity. The field trip is designed for introductory students as an introduction to sedimentary rocks and stratigraphic principles.

Field Trip Handout/Assignment (Microsoft Word 129kB Aug25 09)

Teaching Notes and Tips
This trip will likely take a full lab period (2-4 hours). When adapting it for other outcrops, photos of appropriate modern sedimentary environments can replace the modern beach, or the class can look at fluvial units and a modern river. Some other notes are mentioned on the last page of the exercise:
- Based on the Principle of Superposition, what can you infer about the units in the cliff face?
- Based on the Principle of Original Horizontality, what can you infer about the lower units on the cliff face?
- Based on the Principle of Cross-Cutting Relationships, what can you infer about the relationship between the two units?

Try to get the students to assemble their own list of things to look for and describe, based on their readings and activities, previous knowledge, what they see, and instructor guidance. Things to look for and describe include: thickness of beds; nature of cross-bedding; range of rock types; fossils or trace fossils (burrows, feeding tracks, etc.); other sedimentary structures; grain sizes, shapes, and sorting; color; and other distinguishing features. Also discuss reasons why fossils occur in lenses, why burrow holes are filled, etc.

Part V: Go over the sequence of events in the field (what happened first, second, etc.) and then ask the students to write about it.
- Outcrop sketch and stratigraphic column (individual products - can consult other students)
- Answers to handout questions (can be done as pairs/groups)
- Geohistory (if gone over in the field, the order should be correct and the description should be assessed)

References and Resources
- Press, F. and Siever, R., 1998, Understanding Earth, W.H. Freeman and Co.: New York, 682 p.
- Steno, N., 1669, De Solido Intra Solidum Naturaliter Contento Dissertationis Prodromus.
During the second week of May 1919, the recently arrived German delegation to the Versailles Peace Conference, convened in Paris after the end of the First World War, pored over their copies of the Treaty of Versailles, drawn up in the preceding months by representatives of their victorious enemies, and prepared to lodge their objections to what they considered unfairly harsh treatment.

Presented with the treaty on May 7, 1919, the German delegation was given two weeks to examine the terms and submit their official comments in writing. The Germans, who had put great faith in U.S. President Woodrow Wilson's notion of a so-called peace without victory and had pointed to his famous Fourteen Points as the basis upon which they sought peace in November 1918, were greatly angered and disillusioned by the treaty. As Ulrich von Brockdorff-Rantzau, Germany's foreign minister, put it: "This fat volume was quite unnecessary. They could have expressed the whole thing more simply in one clause—Germany renounces its existence."

Driven by French and British desires to make Germany pay for the role it had played in the most devastating conflict the world had yet seen, Wilson and the other Allied representatives at the peace conference had indeed moved away from a pure peace without victory. Germany was to lose 13 percent of its territory and 10 percent of its population. It was denied initial membership in the League of Nations, the international peace-keeping organization established by the treaty. The treaty also required Germany to pay reparations, though the actual amount ended up being less than what France had paid after the Franco-Prussian War of 1870-71.

The real German objection to the Treaty of Versailles, however, was to the infamous Article 231, which forced Germany to accept sole blame for the war in order to justify the reparations. Despite much debate among the Allies themselves and over strenuous German protests—including by Brockdorff-Rantzau, who wrote to the Allies on May 13 that "the German people did not will the war and would never have undertaken a war of aggression"—Article 231 remained in the treaty.

The Germans were given a deadline of June 16 to accept the terms; this was later extended to June 23. Pressured by the Allies and thrown into confusion by a crisis within the Weimar government at home, the Germans gave in and accepted the terms at 5:40 p.m. on June 23. The Versailles Treaty was signed on June 28, 1919.

Meanwhile, opposition to the treaty and its Article 231, seen as a symbol of the injustice and harshness of the whole document, festered within Germany. As the years passed, full-blown hatred slowly settled into a smoldering resentment of the treaty and its authors, a resentment that would, two decades later, be counted—to an arguable extent—among the causes of the Second World War.
Why is evidence important in science? Levinton: The outside world, to physical scientists, is the way you gather information. There may be controversy in the way you interpret this information, but evidence is what you collect from the outside world. It has two important roles: Artist’s impression of five-eyed Opabinia at the sea bottom. The animal genus was found in Cambrian fossil deposits. Author: Arthur Weasley. There are facts that command explanation. A simple example is Why does the sun rise daily? It allows us to test hypotheses, or ideas that explain the facts. An example of a hypothesis is that the sun seems to rise every day because of Earth’s rotation. Observation and hypothesis are both important. Accidental discovery is crucial. People finding fossils has gone on for hundreds of years. But using fossil evidence to test a hypothesis is what ensures that science will present accurate statements, research, and theories. Some people do not understand the difference between “theory” as used in science and “theory” as used in general conversation. So, would you clarify the concept? Levinton: In general conversation, people might say “I have a theory” when they mean they have an idea or are making an assumption. In science, a theory is not based on speculation. There are many steps to take before a theory is established. - A hypothesis is a testable statement explaining observations about phenomena occurring in the natural world. - A theory is a hypothesis or group of related hypotheses that have been repeatedly tested and which scientists generally agree conform to all known data/observations or a major set of observations about the world. The Cambrian explosion is an important event in Earth’s history. What have we learned about it so far? Levinton: The Cambrian explosion is a brief time in the Early Cambrian when most major groups of animals that have bilateral symmetry first appear in the fossil record. A bilateral animal is one whose body plan is such that it has two mirror-image halves. Modern examples are lobsters, people, dogs, and butterflies. The event is referred to as an “explosion” because a rich diversity of species appeared in a relatively short amount of time. The hypothesis is that all these animal groups arose from a common ancestor and diverged at or near the beginning of the Cambrian period, which spans 543 million to 490 million years ago. Evidence is growing to support this hypothesis, at least from evidence derived from fossil occurrences. After that period, very few additional animal phyla, or large animal categories, arose. A trilobite (Parkaspis decamera) from the Cambrian Period found in the Burgess Shale, Canada. Image © Oklahoma University, Photographer Albert Copley; Source: Earth Science World Image Bank How do we know all of this happened? Levinton: We know it from evidence. There are two things we need to know: You have to have a series of rocks from natural sites that are dated scientifically. Rocks are dated by their relative location and other methods but also by radiometric dating. Radiometric dating involves the use of radioactive isotope series that have half-lives up to many billions of years, such as uranium/lead. The occurrence of the fossils. What we know now is that many of the animal groups go back in time but not past the Cambrian period. Fossils are not always preserved perfectly. Sometimes you will come across a lack of good preservation factors for 200 million years, say, for an appropriate fossil to occur. 
Evidence shows that the rocks before the explosion were suitable for fossils to be formed but most of the Cambrian animals do not appear in these rocks. Other groups are found before the Cambrian, but not the bilaterian groups participating in the Cambrian explosion, except for a few still controversial specimens. So the date of the rock in which a fossil is found is the date of the fossil. However, it’s possible that a rock can be transported by natural events, for example, eroded out of a rock, transported downstream by a strong current, and deposited somewhere else. Scientists have to be careful about that possibility. Even the famous Burgess Shale in the Rocky Mountains of Canada, where Cambrian fossils were found, may consist of some animal fossils that were transported a few thousand yards. Scientists have to calibrate the data to make sure they are dated correctly. Can molecular clocks determine the lineage of a fossil from such distant times as the Cambrian? Levinton: You can never date rocks with molecular clocks, but you can ask certain questions. If you have two organisms and the DNA sequence of a certain type of molecule that evolved slowly enough so that you can see the difference in DNA sequence in the two organisms, you can go back in time to see when they diverged on the tree of life. However, you must have a way to calibrate the difference in DNA sequence against an absolute time scale. Molecular clocks are not that accurate going back to such distant periods as the Cambrian, for several reasons: There are different ways you can make an analysis, but the calibration points are not that abundant. Let’s say you have a 400-million-year-old fossil and another one that arose 430 million years ago. But which age do you use in your evolutionary calculations? It could be a source of error. There is also a lot of variation in rates of evolution and that has to be compensated for. There are statistical challenges here. When looking at shorter spans of time, say 5 to 10 million years before the present, scientists are a lot more confident. There’s a lot more to be learned about molecular clocks to use them accurately for older times such as the Cambrian explosion. Did the Cambrian explosion happen because it followed an extinction event? Levinton: Maybe. There are groups of organisms that seem to have some major overturns just before the Cambrian. There are also some physical changes on Earth that are well known, but no one can pinpoint the time. There’s an idea, bolstered by data, that the whole of the Earth was covered by ice, which suggests that the oceans were anoxic, that is, life in the oceans was nonexistent. That would have been an extinction event, which, as history shows, is often followed by a burst of new species. But it would be difficult to connect this possible extinction event to the Cambrian explosion. There are other changes that occurred just before the Cambrian, but these include everything from a lowering of ocean temperature to an increase in oxygen in the atmosphere. There are too many variables that are too poorly timed to help us very much at this time. Why is the Cambrian explosion so pivotal as an example of macroevolution? Levinton: Macroevolution is about natural processes on a grand scale of geological time, such as origins and extinctions. The Cambrian explosion is the mother of all animal radiations. 
All the major body plans—for example, arthropods, brachiopods, and so on—they all arose in a short window of time, if the current fossil record is to be taken at face value. Scientists are still searching for evidence to add to the wealth of knowledge about this period so we can all agree that this hypothesis is absolutely accurate. If it proves to be absolutely true, it means that most of life’s diversity pretty much started then. It’s the moment of animal evolution’s creativity. © 2007, American Institute of Biological Sciences. Educators have permission to reprint articles for classroom use; other users, please contact [email protected] for reprint permission. See reprint policy.
El Niño and Its Impact on the World

The activity takes a hands-on approach to understanding El Niño by physically showing and feeling the process. It consists of an El Niño demonstration to be performed by the teacher and observed by the class, as well as an experiment to be conducted by the students themselves, individually or in pairs, to illustrate the connection between water temperature and atmospheric temperature. Students are asked to draw conclusions based on their findings and then examine the chain of events stemming from El Niño.

Notes From Our Reviewers
The CLEAN collection is hand-picked and rigorously reviewed for scientific accuracy and classroom effectiveness.
What are cookies in computers?
Also known as browser cookies or tracking cookies, cookies are small, often encrypted text files located in browser directories. They are used by web developers to help users navigate their websites efficiently and to perform certain functions. Because of their core role in enabling usability and site processes, disabling cookies may prevent users from using certain websites.

What else should I know about cookies?
Cookies are NOT viruses. Cookies use a plain text format. They are not compiled pieces of code, so they cannot be executed, nor are they self-executing. Accordingly, they cannot make copies of themselves and spread to other networks to execute and replicate again. Since they cannot perform these functions, they fall outside the standard virus definition.

Cookies CAN be used for malicious purposes, though. Since they store information about a user's browsing preferences and history, both on a specific site and across several sites, cookies can act as a form of spyware. Many anti-spyware products are well aware of this problem and routinely flag cookies as candidates for deletion after standard virus and/or spyware scans.

Responsible and ethical web developers deal with the privacy issues caused by cookie tracking by including clear descriptions of how cookies are deployed on their sites. Most browsers have built-in privacy settings that provide differing levels of cookie acceptance, expiration time, and disposal after a user has visited a particular site.
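To see how plain-text cookies really are, here is a small sketch that uses Python's standard-library http.cookies module to build and then parse a cookie header. The cookie name and value are made up for illustration.

```python
# Cookies are plain text: a minimal example of building and parsing one
# with Python's standard library. The cookie name and value are hypothetical.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["max-age"] = 3600      # expire after one hour
cookie["session_id"]["httponly"] = True     # not readable from page scripts

print(cookie.output())          # the Set-Cookie header a server would send

# A browser later sends the value back; the server parses the plain text.
incoming = SimpleCookie("session_id=abc123")
print(incoming["session_id"].value)
```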
Fast & Furious

1.OA.6 – Subtract multiples of 10 in the range 10-90 from multiples of 10 in the range 10-90, using concrete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction.

2.OA.1 – Use addition and subtraction within 100 to solve one- and two-step word problems involving situations of adding to, taking from, putting together, taking apart, and comparing, with unknowns in all positions.

What do you notice? What do you wonder?

Focus questions – How far did the yellow car go? How far did the black car go? Which one went farther, and by how many cubes?
Embracing Early Independence As parents and educators, one of our primary goals is to nurture independence in our children, especially during their formative years. For 3 to 4-year-olds, kindergarten, or ‘kindy,’ is a pivotal stage where they first encounter a structured environment outside the comfort of their homes. It’s a space where they begin to learn, explore, and grow independently. Starting the Journey The journey towards independence at kindy starts with small yet significant steps. Encouraging children to manage their belongings, make choices about their activities, and take responsibility for their actions lays the groundwork for self-reliance. It’s fascinating to watch a child’s transition from relying on a parent or caregiver for every need to selecting their snack or putting away toys without prompt. Building Blocks of Independence To foster this growth, kindy environments are designed to be safe yet stimulating, allowing children to explore within boundaries. Activities are structured to encourage decision-making and problem-solving, crucial skills for independence. A typical day might include choosing between painting or building blocks, which support independent thinking and creativity. Cultivating Social Skills Social skills are another cornerstone of development for 3-4-year-olds, and kindy is where these skills are actively nurtured. It’s where children learn to interact, share, and cooperate with peers, laying the foundation for future social interactions. Learning to Share and Cooperate In kindy, children are often introduced to sharing and turn-taking. They learn the importance of cooperation and understanding others’ perspectives through games and group activities. These experiences are invaluable in developing empathy and communication skills. Friendships formed in kindy can profoundly impact a child’s social development significantly. These early friendships teach children how to relate to others, resolve conflicts, and express their feelings in a socially acceptable manner. Encouraging Growth and Development Growth in 3-4-year-olds is about physical milestones and cognitive and emotional development. Kindy plays a critical role in this holistic growth through various activities tailored to their developmental stage. Cognitive development in this age group is rapid, and kindy provides an ideal setting for this. Activities like story time, puzzles, and simple math games enhance cognitive skills like memory, concentration, and problem-solving. Kindy is also a safe space for children to express and understand their emotions. Educators are trained to help children identify and articulate their feelings, a crucial aspect of emotional intelligence. Activities like role-playing or art can be instrumental in assisting children to understand and express their emotions healthily. Physical development is equally important, and kindy offers various activities to enhance motor skills. Whether climbing, running, or fine motor activities like cutting and drawing, these activities improve physical abilities and overall confidence and self-esteem. The Role of Parents and Caregivers While kindy provides an ideal environment for growth, the role of parents and caregivers is equally vital. Encouraging independence at home, reinforcing social skills learned at kindy, and supporting the overall development process is crucial. Reinforcing Skills at Home Practising skills learned at kindy, like sharing and decision-making, in a home environment reinforces these lessons. 
Consistency between home and kindy is critical to a child’s successful development. Communication with Educators Regular communication with kindy educators can provide insights into a child’s progress and areas needing attention. This partnership between parents and educators is essential for a child’s holistic development. Finding the Right Kindy For parents looking for ‘kindy for 3 year olds near me’, choosing a place that aligns with their child’s needs and educational philosophy is essential. Factors like the curriculum, educator qualifications, and the overall environment should be considered to ensure they fit the child well. Kindergarten, or “kindy,” is more than just a preparatory stage for formal schooling; it represents a pivotal time in the life of 3-4-year-olds. This phase fosters independence, social skills, creativity, and overall cognitive and emotional growth. Children can explore, experiment, and discover in a nurturing and stimulating environment, setting a solid foundation for their future learning and development. This period is invaluable in their early years, as it helps shape their personalities, emotional resilience, and problem-solving skills. As parents and educators, it’s our role to support and guide these young minds through this journey, ensuring they emerge as confident, capable, and well-rounded individuals ready to take on the challenges of the following stages of their education and life.
Accommodations: "An accommodation is a change or alteration in the regular way a student is expected to learn, complete assignments or participate in classroom activities. Accommodations include special teaching or assessment strategies, equipment or other supports that remove, or at least lessen, the impact of a student's special education needs. The goal of accommodations is to give students with special education needs the same opportunity to succeed as other students" (Identifying Student Needs, 2006, p. 1).

Modifications: "A modified program has learning outcomes that are significantly different from the provincial programs of study and are specifically selected to meet the student's special education needs. Changes to the outcomes are designed to provide the student the opportunity to participate meaningfully and productively across a variety of learning experiences and environments. Modifications may include changes in instruction level, content and/or performance criteria" (Identifying Student Needs, 2006, p. 12).

A Guide to Inclusion & Teaching Students with Physical Disabilities. Written by: Stephanie Torreno • edited by: Elizabeth Stannard Gromisch • updated: 6/6/2012
To maintain inclusive classrooms, teachers should have knowledge of physical impairments, assistive technology, teaching strategies, and necessary accommodations and modifications. Use this guide as your source. Children with physical disabilities, once taught in separate classes and even separate schools, now learn beside their peers in regular classrooms. Inclusion has changed how these students are educated, with the continuing development of the Individuals with Disabilities Education Act (IDEA) ensuring rights to a quality education. As types of physical disabilities vary in degree of impairment, teachers will find a general knowledge of various conditions and how they affect children helpful. Assistive technology can lessen the effects of these impairments by allowing students to participate in classroom activities more easily and independently. References: Author's own experience.

What Are Classrooms Like for Students with Learning Disabilities? How do general education classroom environments respond to individual differences and needs? How readily do teachers alter their forms of classroom organization; how readily do they modify approaches? Common classroom conditions can and do affect many students adversely (to some degree, at one time or another, in one way or another), but some students are especially vulnerable to classrooms' hazards (e.g., children of poverty, non-native speakers, those with attention deficits). Students with learning disabilities are among the most vulnerable: at chronic risk for "not learning" under the aforementioned conditions, for long-term academic and social problems, and for lifelong debilitating side effects of their classroom experiences.

Accommodations: What They Are and How They Work. Accommodations are changes that make it easier for your child to learn. They don't change what your child is learning; they change how your child is learning. It's a way to make sure your child's learning and attention issues don't get in the way of showing what he knows.

What Accommodations Are For
Kids with learning and attention issues may need to learn material differently than other kids their age. For example, if your child has trouble with writing, the teacher might let him give answers to a test verbally. Accommodations don't lower the expectations for what kids learn.
IEP and Special Education Terms. If your child has an Individualized Education Program (IEP) or a 504 plan to address learning or attention issues, you may be considering modifications. Modifications change what or how much a child is taught. The goal is to gear the curriculum to the child's capability. Here's what you need to know about modifications.

IEP - Standards for Development of Program Planning and Implementation - Ontario. IEP Guide - Ontario. Individual Program Planning (includes planning resources) - Alberta. Identifying Student Needs, Selecting Accommodations and Strategies - Alberta. Making Goals Meaningful, Measurable and Manageable - Alberta.

Easy Accommodations in the General Education Classroom. This post is by special request from one of our readers! You are always welcome to click the "Contact Us" tab at the top to request topics you'd like to read about! I was an inclusion teacher for 3 years and I learned a lot about special education before I actually went back to grad school to become a special ed teacher myself. Years later, I now have the perspective of both a general education teacher (and what is reasonable in a class of 25+ kids) and a special education teacher who wants what's best for her kids.

Accommodations for children who are visually impaired. Read this article in Spanish. English Language Learners (ELL) Accommodation Tool Kit for Culture in the Classroom. Meeting Physical Needs. Accommodations and Modifications for Children with Autism.

Accommodations Which May Assist the ADHD Child. Open PDF Version. Accommodations come in three distinct categories: instructional, environmental, and assessment. The following lists are examples of interventions that may impact the success of the ADHD student. In planning a program, remember to try to catch the student doing well or behaving well. Ignore minor inappropriate behaviours. Remember, behaviour is the result of a need not being met.

Modifications for Students with Intellectual Disability. Written by: Sharon Dominica • edited by: Elizabeth Stannard Gromisch • updated: 9/11/2012.

Cystic Fibrosis: Physical Activity and Exercise. What is cystic fibrosis? Cystic fibrosis (CF) is a genetic disease. It affects mainly the lungs and digestive tract. CF causes a build-up of thick mucus in the lungs, which leads to breathing troubles. Mucus in the lungs also harbours bacteria that are responsible for infections. A child with CF may have cycles of infection. Thick mucus also blocks the ducts of the pancreas. Why exercise is important in CF treatment: exercise benefits us all, but people with CF benefit even more from being physically active, because exercise can slow the rate of decline in lung function, which means children with CF may keep good lung function longer. If you have CF, being physically active will not just make you feel better, it will improve your quality of life. How to get the most out of exercise.

Sensory Activities for Children with Autism. Strategies for Children with Neglect and Attachment Issues.
- Start from the rightmost digit.
- Going from right to left, the place values of the digits are ones, tens, hundreds, thousands, and ten thousands.
- To write the expanded form of a number, we multiply each digit of the number by the place value of the digit.

Example: Expand 97,535.
9 is in the ten thousands place, 7 is in the thousands place, 5 is in the hundreds place, 3 is in the tens place, and 5 is in the ones place.
Answer: 9 x 10,000 + 7 x 1,000 + 5 x 100 + 3 x 10 + 5 x 1

Directions: Expand the following. Also write at least 10 examples of your own.
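For anyone who wants to check answers quickly, here is a small sketch that applies the same place-value rule; the function name is just illustrative.

```python
# Expanded form by place value: each digit times 1, 10, 100, ... from the right,
# mirroring the rule described above. The function name is illustrative.

def expanded_form(n: int) -> str:
    digits = str(n)
    parts = []
    for i, d in enumerate(digits):
        place = 10 ** (len(digits) - i - 1)
        if d != "0":                      # zero digits contribute nothing
            parts.append(f"{d} x {place:,}")
    return " + ".join(parts)

print(expanded_form(97535))   # 9 x 10,000 + 7 x 1,000 + 5 x 100 + 3 x 10 + 5 x 1
```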
Silicon has been the heart of the world's technology boom for nearly half a century, but microprocessor manufacturers have all but squeezed the life out of it. The current technology used to make microprocessors will begin to reach its limit around 2005. At that time, chipmakers will have to look to other technologies to cram more transistors onto silicon to create more powerful chips. Many are already looking at extreme-ultraviolet lithography (EUVL) as a way to extend the life of silicon at least until the end of the decade. The current process used to pack more and more transistors onto a chip is called deep-ultraviolet lithography, which is a photography-like technique that focuses light through lenses to carve circuit patterns on silicon wafers. Manufacturers are concerned that this technique might soon be problematic as the laws of physics intervene. Using extreme-ultraviolet (EUV) light to carve transistors in silicon wafers will lead to microprocessors that are up to 100 times faster than today's most powerful chips, and to memory chips with similar increases in storage capacity. In this article, you will learn about the current lithography technique used to make chips, and how EUVL will squeeze even more transistors onto chips beginning around 2007.
The Aufbau Principle: the (n + l) Rule We’ve all seen and use the so-called Aufbau Diagram (Figure 1). It is a mnemonic used to remember the order of “filling” of atomic orbitals during the construction of the ground state electron configurations of the elements. The presentation of this diagram is largely disconnected from any physical meaning. Here’s what we tell our students: “Memorize the diagram, learn to use it, and you’re guaranteed to get the right answer.” Figure 1. The Aufbau Diagram: Atomic orbitals are filled starting at 1s and continuing, from the upper left, in the order indicated by the arrows. Is there a way to connect this diagram to its physical meaning? Yes! That is the goal of this article. How was this diagram constructed in the first place? It turns out that it is a representation of a method of predicting the “order of filling” called the Madelung rule, which is also called the (n + l) rule. The “n” and “l” in the (n + l) rule are the quantum numbers used to specify the state of a given electron orbital in an atom. n is the principal quantum number and is related to the size of the orbital. l is the angular momentum quantum number and is related to the shape of the orbital. Here’s how the (n + l) rule works. The (relative) energies of the orbitals can be predicted by the sum of n + l for each orbital, according to the following rules: a. Orbitals are filled in order of increasing (n + l), which represents the relative energy. b. If two orbitals have the same value of (n + l), they are filled in order of increasing n. The diagram in Figure 1 is the result of these rules. Figure 2 is a version of the diagram that displays the dependence on (n + l) for each orbital, where E represents the relative energy of the orbitals. The orbitals are filled according to the values of E for each orbital: E=1 for 1s, E=2 for 2s, E=3 for 2p and 3s, and so on. According to rule (b) above, when two orbitals have the same E, such as E=3 for 2p and 3s, the orbital with lower n (2p) is filled first. Figure 2. An Aufbau diagram that illustrates the (n+l) rule. The (n + l) rule is a remarkably clever and useful tool. It correctly predicts the order of orbital energies through element 20 (calcium). It also correctly predicts many electron configurations beyond that. And here we arrive at a very important point: predicting the relative energies of each orbital is not the same thing as predicting correct electron configurations. More on this point later. Why does the (n + l) rule work? It’s not magic and now we’ll discuss the connection between the rule and its physical meaning. To understand the connection, we need to start with how the quantum numbers n and l are related to the energy of an orbital. We’ll use 3D models (actually 2D images of the 3D models) of atomic orbitals to demonstrate. [Sorry to disappoint those looking for a deep dive into quantum mechanical calculations. These models are visual representations of the results of those calculations.] In Figure 3, we see a representation of the orbitals occupied by the electrons in the ground state of the element krypton (for clarity, the orbitals have been separated from one another). Notice that as the quantum number n increases (from 1 to 4 in krypton), so does the overall size of the orbital. Figure 3. the electron configuration of krypton. (Generated using the Electron Configuration Lab of Atomsmith Classroom1) How is the size of the orbital related to its energy? 
Recall that the potential energy of attraction between protons and electrons, which have opposite charges, depends on the distance between them: the closer an electron gets to the protons in the nucleus, the lower its energy will be. Compare the sizes of the 1s (n = 1) and 4s (n = 4) orbitals (Figure 3). Because the 1s orbital is smaller, the average distance of an electron to the nucleus will be smaller than that of the electrons in the 4s orbital. That’s the connection – the higher n is, the higher the energy of the orbital. What about the l in the (n + l) rule? As mentioned above, l, the angular momentum quantum number, determines the shape of an orbital. In all orbitals for which n > 1, there are areas, called nodes, in which it is extremely unlikely to find an electron. There are two types of nodes: radial and planar (or angular). Figure 4 illustrates the radial node in a 2s orbital (l = 0) and a planar node in a 2p orbital (l = 1). Note that radial node (Figure 4, center) does not cross the nucleus, whereas planar nodes (Figure 4, right) do. s orbitals (which all have l = 0) contain only radial nodes. All other orbitals (p, d, f, etc., for which l > 0) contain both radial and planar nodes. Figure 4. Left: 2s and 2p orbitals, overlapped. Center: radial node (l = 0) in a 2s orbital (green circle). Right: a planar node (l = 1) in a 2p orbital (green line). The 2s and 2p orbitals (center and right) have been “sliced” in Atomsmith’s Orbital Lab. The total number of nodes (radial + planar) in an orbital is equal to (n – 1). Of these, l nodes are planar. How does the number of planar nodes affect the energy of an orbital? Look again at the radial and planar nodes in Figure 4: the planar node crosses the nucleus – where the positively charged protons are. Radial nodes do not cross the nucleus. If a node is an area where an electron is not likely to be found, then electrons in orbitals with planar nodes are likely to be found farther from the nucleus (on average). As we discussed earlier, large distances from the nucleus means higher energy. Thus, the higher the value of l, the more planar nodes an orbital has, and the higher the orbital energy. So the (n + l) rule is a way to account for the two main factors that affect the relative energies of atomic orbitals: the size of the orbital (depends on n) and the number of planar nodes (= l). In cases where (n + l) is the same for two orbitals (e.g., 2p and 3s), the (n + l) rule says that the orbital with lower n has lower energy. In other words, the size of the orbital has a larger effect on orbital energy than the number of planar nodes. Like all Models, Push Aufbau (n + l) Far Enough and it Fails. The (n + l) rule is a model. And, as we tell our students, all models have limits. The (n + l) rule works quite well up to Z = 20, calcium (Z is the atomic number). What does “works well” mean? It successfully predicts two things: - the relative energies of the orbitals - the order in which the orbitals are occupied It may not be obvious that these two things are different. But they are, and the differences start to matter at Z = 21, scandium – the beginning of the transition metals. For Z = 20, calcium, the (n + l) rule says: - the 4s orbital is lower energy than the 3d orbital - the 4s orbital is occupied and the 3d orbitals are not (1s2 2s2 2p6 3s2 3p6 4s2). These are both correct! 
For Z = 21, scandium, the (n + l) rule says:
1. the 4s orbital is lower in energy than the 3d orbital
2. the 4s orbital is occupied and one 3d orbital is occupied (1s2 2s2 2p6 3s2 3p6 4s2 3d1).
Here’s where the (n + l) rule first fails. Prediction #2 (the occupation) is correct, but #1 is incorrect. For transition metals, 3d is lower in energy than 4s! Figure 5 shows the relationship between orbital energy and atomic number (Z). Notice that the curves of the 4s and 3d orbital energies cross at Z = 21.
Figure 5. The relationship between orbital electronic energy and atomic number (Z). Vanquickenborne, L. G., Pierloot, K., and Devoghel, D. Transition Metals and the Aufbau Principle. J. Chem. Educ., 1994, 71(6), 469–471.
However, and this is an important point: even though the (n + l) rule gets the orbital energies wrong, it still gets the electron configuration (orbital occupancies) right! How is it possible that, for transition metals, the 3d orbitals are lower in energy, but they are not preferentially occupied? The short answer is that the orbital energies are not the only important factor in determining how the orbitals are occupied. The long answer? It’s complicated – very complicated. Prof. Dr. W. H. Eugen Schwarz, a theoretical chemist at the University of Siegen in Germany, has published a number of papers on this very subject. His results are clearly beyond the scope of any introductory chemistry course, but we hope to give you a flavor of how other factors besides orbital energies may influence the occupancy of atomic orbitals in an electron configuration. Schwarz elucidates five factors that influence the electron configuration of transition metals:2
- d-orbital collapse
- d versus s electron repulsions
- s Rydberg destabilization
- configurations and states in free and bound atoms
- relativistic spin-orbit coupling
We’ll look exclusively at the second factor: d versus s electron repulsions. Let’s consider titanium (Z = 22). Its electron configuration is 1s2 2s2 2p6 3s2 3p6 4s2 3d2, which the (n + l) rule correctly predicts. If the electron configuration depended solely on the orbital energies, we would expect 1s2 2s2 2p6 3s2 3p6 3d4 – with no electrons in the 4s orbital. Why don’t the last four electrons preferentially occupy the 3d orbitals, which are lower in energy than the 4s orbital? Consider Figure 6a, where we see models of the 4s and 3d orbitals, separated in Atomsmith Classroom’s Electron Configuration Lab. If the electron configuration of titanium were 1s2 2s2 2p6 3s2 3p6 3d4, four of the five 3d orbitals would contain one electron each, and the 4s orbital would be unoccupied. In Figure 6b, the 3d and 4s orbitals have been superimposed on one another around the nucleus. Just as protons and electrons attract each other due to their opposite charges, electrons repel each other because they have the same charge. This repulsion results in a higher energy – things are simply getting crowded. What can the electrons do to minimize these repulsions? Notice that the 4s orbital is larger than the 3d orbitals. If two of these electrons find their way into the 4s orbital instead of the 3d orbitals, they have more space to spread out and minimize the repulsion. This is the basis of Schwarz’s “d versus s electron repulsions.”
Figure 6a. The 4s and 3d orbitals separated. Figure 6b. 4s and 3d orbitals overlapped.
How Can You Use This?
Let’s summarize what we’ve discussed so far. Up to Z = 20 (calcium), the (n + l) rule (and the Aufbau diagram) correctly predicts:
- the orbital energy levels
- the order of occupancy of the orbitals
The physical meaning of the (n + l) rule (and its ability to make these predictions) is related to the size (n) and shape (l) of a given orbital.
For Z > 20 (starting at the transition metals):
- The (n + l) rule is not able to correctly predict orbital energy levels.
- Even when we know the orbital energies, this knowledge is not sufficient to predict the order of filling. Other factors, such as “d vs. s electron repulsions” (crowding), must be considered. (Schwarz discusses four more of these.)
- Although its physical meaning is no longer sufficient, the (n + l) rule still correctly predicts the order of filling. Except where it doesn’t and we invoke “exceptions.”
The first point to be taken from this is that the (n + l) rule is a model and that it works… until it doesn’t. If you choose to teach it as a model and connect it to some of the physical meaning discussed above, it’s a great example of how models can be both useful and also fail. The “story” outlined above has the potential to be much more fulfilling for your students than “Memorize the diagram, learn to use it, and you’re guaranteed to get the right answer.” But it’s a tough story to tell by just waving your hands. You need a model to tell it, and the model needs the following features:
1. Students must understand the basics of Coulomb interactions: opposite charges are lower in energy when they are close together; the repulsions of like charges result in increased energy.
2. Representations of the atomic orbitals that are physically accurate in both size (n) and shape (l)
3. 3D is better than 2D
4. The orbitals should be separable and superimposable
5. Interactivity is desirable
You can’t tell this story without the ability to show your students the relative sizes and shapes (i.e., the nodes) of the orbitals (#2). Pictures of the orbitals in a textbook can work for many students; but all students will benefit from the ability to interactively observe (#5) the sizes and shapes in 3D (#3), and to separate and superimpose them (#4; Figures 6a and 6b) so that they can gain an appreciation for how crowded an atom really is. The Aufbau principle, first envisioned by Niels Bohr in 1920, and its implementation as the (n + l) rule is a very useful abstraction. Connected to its physical meaning, it can become part of a powerful mental model that students can draw on to build (and explain) their understanding of the structure of the atom. This kind of connection demonstrates the real promise of 3D particulate representations of atomic and molecular structure and phenomena. Many more of these kinds of stories will be told.
Author's note: The idea for this article arose from a discussion between the author and Tom Kuntzleman. Tom describes this interaction in a blog post, Conversations, Confessions, Confusions (and hopefully some Clarity) on Electronic Configurations.
1. Atomsmith Classroom, Bitwixt Software Systems, www.bitwixt.com. Available for Mac and Windows computers, and as an online HTML5 app for browsers on all platforms.
2. Schwarz, W. H. E. The Full Story of the Electron Configurations of the Transition Elements. J. Chem. Educ., 2010, 87, 444–448.
Lactose is the simple sugar found in milk and milk products. It can also be found in a variety of other foods and even as a filler in some pills and capsules. The enzyme lactase, present in the lining of the small intestine, splits lactose into two simple sugars. These simple sugars can then be absorbed by the body and used as nourishment. In infants, milk is the main part of the diet, so it is natural and normal for lactase production to gradually decrease as the diet becomes more varied. This tends to occur in childhood and adolescence in African Americans, Native American Indians, Hispanics, Arabs, Jews, and Asians. Northern European white races seem to keep lactase production the longest. When lactase is absent, lactose passes through the intestine to the colon (large bowel), carrying extra fluid with it. In the colon, bacteria break down lactose into lactic acid and certain gases. Lactic acid is an irritant and laxative. It can cause symptoms such as bloating, diarrhea, abdominal cramps, and gas or flatus. Lactase activity is reduced in people with certain intestinal conditions such as Crohn's disease and celiac disease (gluten enteropathy). Patients taking certain drugs and alcoholic patients may also be lactose intolerant. Finally, patients with surgical removal of part of the stomach or a large portion of the small intestine may need to reduce lactose in the diet. It is important to remember that while lactose intolerance can cause quite uncomfortable symptoms, it does not cause damage to the intestine. The purpose of this diet is to eliminate lactose or reduce it to tolerable levels. Dairy products are important sources of calcium, riboflavin, and vitamin D. Some lactose-intolerant people are able to tolerate certain dairy products in small amounts, and their diets may provide enough of these nutrients. However, the physician or registered dietitian may recommend certain vitamin supplements and/or a calcium supplement for some patients. Tolerance of lactose is variable. Some people can eat small amounts of lactose without having symptoms while others need to avoid it completely. Low-lactose diet: generally eliminates only milk and milk products. However, some can tolerate milk in small amounts (2 oz) throughout the day or as part of a meal. Some can tolerate small amounts of yogurt. These patients can experiment to find a level of lactose they can tolerate. Some people can build up their level of tolerance by gradually introducing the lactose-containing foods. Lactose-free diet: all lactose producst must be eliminated, including foods that are prepared with milk, both at home and in commercially packaged foods. These people may be able to use 100% lactose free milk or soy milk. Labels should always be read carefully Lactase Digestive Aids and Products: Many people can drink milk in which the lactase has been partially or completely broken down. The following products may be available at a pharmacy or grocery LACTAID and Dairy Ease enzyme products - check with a pharmacist, registered dietitian, or a physician for individual guidance on the use of these products. Drops: These are added to milk. 
Five, 10, or 15 drops per quart of milk will generally reduce the lactose content by 70%, 90%, or 99% respectively over a 24-hour period.
Caplets/Capsules: A person chews or swallows 1 to 6 of these when starting to eat foods containing lactose.
Non-fat or 1% low-fat milk is 70% lactose reduced.
Non-fat calcium-fortified milk is 70% lactose reduced and has 500 mg of calcium per cup added.
Non-fat LACTAID 100 is completely lactose free.
DAIRY Ease Milk
For more information about these products, call the consumer information number listed on the food label. The physician, pharmacist, or registered dietitian may also have information about these products or any newer products now available.
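To make the drop-dosing figures above concrete, here is a small Python sketch that estimates how much lactose would remain after treatment. The 70%, 90%, and 99% reductions come from the text; the starting amount of lactose in the example is a made-up illustrative value, not a nutritional fact, and none of this replaces the individual guidance mentioned above.

REDUCTION_BY_DROPS = {5: 0.70, 10: 0.90, 15: 0.99}  # reduction per quart over 24 hours

def residual_lactose(starting_grams, drops):
    """Estimated lactose (grams) left after treating one quart of milk."""
    return starting_grams * (1 - REDUCTION_BY_DROPS[drops])

# Example with a hypothetical quart of milk containing 45 g of lactose:
for drops in (5, 10, 15):
    print(f"{drops} drops -> about {residual_lactose(45, drops):.1f} g of lactose remaining")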
River nourishes unexpected plant life, trapping greenhouse gas Nutrients washed out of the Amazon River are powering huge amounts of previously unexpected plant life far out to sea, thus trapping atmospheric carbon dioxide, according to a new study. Until now, the areas around the Amazon and other great rivers had been thought to be emitting CO2, so the study may affect climate scientists’ calculations of how the greenhouse gas acts. The study appears in this week’s Proceedings of the National Academy of Sciences. “This new understanding allows us to better think about how carbon dioxide is cycled between the atmosphere and the oceans, and how this might change in the future,” said lead author Ajit Subramaniam, a biological oceanographer at Columbia University’s Lamont-Doherty Earth Observatory. Lamont is part of The Earth Institute. The Amazon River is the world’s largest, accounting for nearly a fifth of earth’s river input to the oceans. Using satellite imagery and samples taken at sea over three years, Subramaniam and colleagues outlined a rich plume of microscopic phytoplankton nourished by its outflow, covering 1.3 million square kilometers (500,000 square miles), an area about twice the size of Texas. Tropical waters are generally considered poor because they lack nitrogen, a nutrient essential for plants, so the scientists were surprised to find so much life. The secret: a predominance of diazotrophs, photosynthetic microorganisms that fix nitrogen directly from the air. They were able to bloom far from shore because the Amazon also washes other needed nutrients eroded from land: phosphorus, iron and silicon. Plants use large amounts of carbon dioxide for photosynthesis, so the region, previously estimated to emit a yearly 30 million tons of CO2, is probably soaking that amount back up, said Subramaniam. Furthermore, once the diazotrophs bloom, they tend to sink quickly to the bottom, more or less permanently removing the CO2 from the air. Because excess human-generated CO2 is warming the atmosphere, artificial trapping of CO2, or “sequestration,” has become a subject of rising interest. Here, the scientists showed, it is happening naturally on an unexpected scale. Each year the world’s oceans are thought to absorb about 2 billion tons of CO2 from the atmosphere, but the amount permanently sequestered by sinkage is unclear. The new discovery may tilt the estimated balance between air and oceans by only about 2%, said Subramaniam. However, he said, “it alters our view about the processes that we think are going on. There may be other surprises out there.” Studies of other large rivers including the Congo, Orinoco and Mekong suggest that similar processes may be taking place there. Subramaniam said that climate change, booming human populations and intensified land uses could alter the workings of the great river basins. For instance, ongoing conversion of Amazonian rainforest into farmland may increase outwash of nutrients; in some regions, future warming could greatly increase rainfall, and thus river flows. On the other hand, proposed large dam projects on rivers like the Orinoco could cut flow by as much as half. “The whole process could change,” said Subramaniam. “But we can’t predict how it will change, or what the outcome will be.” Subramaniam and coauthor Doug Capone of the University of Southern California discuss their findings in a video presented by the National Science Foundation, which funded much of the project. Other images are also on the NSF site.
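As a rough back-of-the-envelope check on the numbers quoted above (the 30 million tons of CO2 the plume may now absorb versus the roughly 2 billion tons the world’s oceans take up each year), the ratio works out to about 1.5%, consistent with the “only about 2%” shift Subramaniam describes. A short Python sketch of that arithmetic:

amazon_plume_co2_tons = 30e6   # yearly CO2 previously thought emitted, now likely absorbed (estimate quoted above)
ocean_uptake_co2_tons = 2e9    # yearly CO2 absorbed by the world's oceans (estimate quoted above)
print(f"plume uptake is about {amazon_plume_co2_tons / ocean_uptake_co2_tons:.1%} of total ocean uptake")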
Conventional farming requires large amounts of pesticides and insecticides to keep crops healthy. A conventionally grown apple may be sprayed up to 16 times with over 30 different chemicals [source: OTA]. In a nine-year study, the United States Food and Drug Administration (FDA) reported that between 33 and 39 percent of our food contains detectable amounts of pesticides, including 54 percent of our fruits and 36 percent of our vegetables [source: FDA]. Medical professionals are concerned about the long-term effect of these chemicals on our health. The British Medical Association has found that our bodies store some pesticides [source: Soil Association]. Exposure to pesticides has also been linked to headaches, fatigue, nausea and neurological disorders. Organic farming methods may help reduce the amount of pesticides we ingest. Organic foods are grown and processed under standards created by the United States Department of Agriculture (USDA) and overseen by the USDA's National Organic Program. In order to produce certified organic crops, seeds and organisms cannot be genetically modified and produce cannot be treated with conventional synthetic pesticides or fertilizers. Organic farmers must also use sustainable agricultural methods like crop rotation and composting to build and support healthy soil filled with nutrients -- a stipulation that could lead to higher levels of vitamins and minerals in organic food. Farms are inspected every year by a USDA-approved agency to ensure that standards are maintained. It's still unclear whether nutrient-rich soil actually produces vitamin-rich produce. Although the USDA makes no claim that organic food is more nutritious than conventional produce, the absence of synthetic, artificial or genetically modified ingredients in organic food means it's probably healthier for what it lacks. But while people buy organic food with the best intentions -- thinking it's better for their health and for the environment -- organics are not necessarily environmentally friendly. Organic farming began as a community-based initiative. Small organic farms catered to the local demand for organic food. However, the growing popularity of organics has led to the creation of what the agricultural industry calls Big Organic. Big Organic farms are industrial-sized operations designed for high output. Produce is refrigerated and transported to local grocery stores. Produce labeled organic does not guarantee that it was grown locally. On average, produce in the United States travels anywhere from 1,300 to 2,000 miles from the farmer to the consumer -- a process that creates enormous amounts of greenhouse gasses [source: ATTRA]. These food miles partially negate the benefits of organic farming. A study at the University of Alberta in Edmonton, Canada, showed that the impact of the greenhouse gasses from food transportation diminished the benefits of environmentally friendly organic growing methods and was comparable to transporting the same amount of conventionally grown produce [source: University of Alberta]. So is it better to avoid food miles and just buy locally? In the next section, we'll learn about new food movements.
Key Stage One Testing
What are the Key Stage One Tests?
At the end of Year Two all of the students will be expected to participate in a series of tests allowing the school to monitor the progress that they have made since the end of the EYFS. Your child’s teacher is responsible for judging the standards your child is working at in English reading, English writing, mathematics and science, by the end of Key Stage One. To help inform those judgements, pupils sit national curriculum tests in English and mathematics, commonly called SATs. They may also sit an optional test in English grammar, punctuation and spelling. The tests are a tool for teachers to help them measure your child’s performance and identify their needs as they move into Key Stage Two. They also allow teachers to see how your child is performing against national expected standards.
The grading system involves children's raw score – the actual number of marks they get – being translated into a scaled score, where a score of 100 means the child is working at the expected standard. A score below 100 indicates that the child needs more support, whereas a score above 100 suggests the child is working at a higher level than expected for their age. The maximum score possible is 115, and the minimum is 85.
The tests can be taken any time during May and they are not strictly timed. Pupils may not even know they are taking them as many teachers will incorporate them into everyday classroom activities.
To view the most recent and historical outcomes of the Key Stage One tests at Grimsdyke School please use the link below. Please find attached our latest figures for academic year 2018-2019.
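As a small illustration of how those score bands fit together, the Python sketch below maps a scaled score onto the descriptions given above. The band wording is a paraphrase of this page; the official raw-score-to-scaled-score conversion tables are published separately each year and are not reproduced here.

def interpret_scaled_score(score):
    # Scaled scores run from 85 to 115, with 100 as the expected standard.
    if not 85 <= score <= 115:
        raise ValueError("Key Stage One scaled scores range from 85 to 115")
    if score < 100:
        return "working towards the expected standard (may need more support)"
    if score == 100:
        return "working at the expected standard"
    return "working above the expected standard"

print(interpret_scaled_score(103))  # working above the expected standard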
Understanding Electromagnetic Radiation
What is Electromagnetic Radiation?
In physics, electromagnetic radiation (EM radiation or EMR) refers to the waves (or their quanta, photons) of the electromagnetic field, propagating through space, carrying electromagnetic radiant energy. It includes radio waves, microwaves, infrared, (visible) light, ultraviolet, X-rays, and gamma rays. All of these waves form part of the electromagnetic spectrum. Classically, electromagnetic radiation consists of electromagnetic waves, which are synchronized oscillations of electric and magnetic fields. Electromagnetic radiation, or electromagnetic waves, are created by periodic changes of the electric or magnetic field. Depending on how this periodic change occurs and the power generated, different wavelengths of the electromagnetic spectrum are produced. In a vacuum, electromagnetic waves travel at the speed of light, commonly denoted c. In homogeneous, isotropic media, the oscillations of the two fields are perpendicular to each other and perpendicular to the direction of energy and wave propagation, forming a transverse wave. The wavefront of electromagnetic waves emitted from a point source (such as a light bulb) is a sphere. The position of an electromagnetic wave within the electromagnetic spectrum can be characterized by either its frequency of oscillation or its wavelength. Electromagnetic waves of different frequency are called by different names since they have different sources and effects on matter. In order of increasing frequency and decreasing wavelength these are: radio waves, microwaves, infrared radiation, visible light, ultraviolet radiation, X-rays and gamma rays. Electromagnetic waves are emitted by electrically charged particles undergoing acceleration, and these waves can subsequently interact with other charged particles, exerting force on them. EM waves carry energy, momentum and angular momentum away from their source particle and can impart those quantities to matter with which they interact. Electromagnetic radiation is associated with those EM waves that are free to propagate themselves (“radiate”) without the continuing influence of the moving charges that produced them, because they have achieved sufficient distance from those charges. Thus, EMR is sometimes referred to as the far field. In this language, the near field refers to EM fields near the charges and current that directly produced them, specifically electromagnetic induction and electrostatic induction phenomena. In quantum mechanics, an alternate way of viewing EMR is that it consists of photons, uncharged elementary particles with zero rest mass which are the quanta of the electromagnetic field, responsible for all electromagnetic interactions. Quantum electrodynamics is the theory of how EMR interacts with matter on an atomic level. Quantum effects provide additional sources of EMR, such as the transition of electrons to lower energy levels in an atom and black-body radiation. The energy of an individual photon is quantized and is greater for photons of higher frequency. This relationship is given by Planck’s equation E = hf, where E is the energy per photon, f is the frequency of the photon, and h is Planck’s constant. A single gamma ray photon, for example, might carry ~100,000 times the energy of a single photon of visible light. The effects of EMR upon chemical compounds and biological organisms depend both upon the radiation’s power and its frequency.
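To make Planck’s relation concrete, the short Python sketch below evaluates E = hf for two illustrative frequencies: a visible-light photon near 5 × 10^14 Hz and a gamma-ray photon near 5 × 10^19 Hz. The two values are round numbers chosen so that the resulting energies differ by the factor of roughly 100,000 mentioned above.

PLANCK_H = 6.626e-34  # Planck's constant, in joule-seconds

def photon_energy(frequency_hz):
    # Planck's equation: E = h * f
    return PLANCK_H * frequency_hz

visible = photon_energy(5e14)  # a visible-light photon (illustrative frequency)
gamma = photon_energy(5e19)    # a gamma-ray photon (illustrative frequency)
print(f"visible photon: {visible:.2e} J")
print(f"gamma photon:   {gamma:.2e} J  (about {gamma / visible:,.0f} times the energy)")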
EMR of visible or lower frequencies (i.e., visible light, infrared, microwaves, and radio waves) is called non-ionizing radiation, because its photons do not individually have enough energy to ionize atoms or molecules or break chemical bonds. The effects of these radiations on chemical systems and living tissue are caused primarily by heating effects from the combined energy transfer of many photons. In contrast, high frequency ultraviolet, X-rays and gamma rays are called ionizing radiation, since individual photons of such high frequency have enough energy to ionize molecules or break chemical bonds. These radiations have the ability to cause chemical reactions and damage living cells beyond that resulting from simple heating, and can be a health hazard. In the modern world, we humans are completely surrounded by electromagnetic radiation. Have you ever thought of the physics behind these travelling electromagnetic waves? Let’s explore the physics behind the radiation in this video. Do not forget to share your opinion with us to provide you with the best posts !
1997 Woburn Computer Programming Challenge
2. Power of Cryptography
Current work in cryptography involves (among other things) computing large prime numbers and computing powers of numbers modulo these large primes. Work in this area has resulted in the practical use of results from number theory and other branches of mathematics once considered to be only of theoretical interest.
This problem involves the efficient calculation of integer roots of numbers. Given an integer n ≥ 1 and an integer p ≥ 1 you are to write a program that determines the nth root of p — it is guaranteed that p is the nth power of some integer k, i.e. p = k^n for some integer k; this is the integer you are to find.
Input
The first line of the input is M, the number of test cases to consider. The input consists of M pairs of numbers n and p with each number on a line by itself. For all of these pairs, 1 ≤ n ≤ 200, 1 ≤ p ≤ 10^101, and there exists an integer k, 1 ≤ k ≤ 10^101, such that k^n = p.
Output
For each set of values for n and p output the value of k.
Sample Input
3
2
16
3
27
7
4357186184021382204544
Sample Output
4
3
1234
Point Value: 20
Time Limit: 2.00s
Memory Limit: 16M
Added: Sep 28, 2008
Allowed languages: C++03, PAS, C, ASM, C#, C++11
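One way to attack this problem is with exact integer arithmetic rather than floating point, since p can be far too large for a double to represent precisely. The sketch below uses Python only to illustrate the idea (the judge accepts the languages listed above); it binary-searches for k, relying on the guarantee that p is a perfect nth power.

import sys

def integer_nth_root(n, p):
    lo, hi = 1, 1
    while hi ** n < p:      # grow an upper bound for k
        hi *= 2
    while lo < hi:          # binary search for the exact root
        mid = (lo + hi) // 2
        if mid ** n < p:
            lo = mid + 1
        else:
            hi = mid
    return lo

def main():
    data = sys.stdin.read().split()
    m = int(data[0])
    values = list(map(int, data[1:]))
    for i in range(m):
        n, p = values[2 * i], values[2 * i + 1]
        print(integer_nth_root(n, p))

if __name__ == "__main__":
    main()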
Auditory Processing Disorders Children with auditory processing disorder (APD), alternately referred to as central auditory processing disorder (CAPD), struggle to take in information verbally. They do not recognize the subtle differences between sounds in words, even when those sounds are clear and loud enough to be heard. Even though children with auditory processing disorder do not physically exhibit hearing problems, they have trouble registering, or rather correctly registering, what others are saying to them. Additionally, they have difficulties remembering what they hear. Despite being bright, intelligent, and eager to learn, children with APD often struggle with reading and self-expression, because they confuse the sounds of various words. They may also find it difficult to tell where specific sounds are coming from, to make sense of the order of sounds they hear, or to block out opposing background noises. Symptoms of Auditory Processing Disorders - Difficulty processing and remembering language-related tasks - Processes ideas and thoughts slowly - Difficulty explaining thoughts and ideas verbally - Mispronounces and/or misspells similar-sounding words, confuses similar-sounding words, or omits syllables (i.e. belt/built, three/free, celery/salary, etc.) - Difficulty remaining focused on or remembering verbal lessons - May misunderstand or have difficulty remembering oral directions - Difficulty following a series of directions - Difficulty comprehending complex sentence structures or rapid speech patterns - Appears to “ignore” people, often thought to be “in their own world” - Frequently says “what,” despite having heard most of what was said Visual Processing Disorders Visual processing disorder (VPD) can cause issues with the way the brain processes visual information. There are many different types of processing visual disorders and many different symptoms, which can include the inability to detect differences in letters or shapes, trouble copying or drawing, and letter reversals. Visual processing disorder can impact individuals of all ages, and to varying degrees. There are eight recognized types of visual processing difficulties, each with specific symptoms. An individual may have difficulty with one or more than one kind of visual processing disorder. Types & Symptoms of Visual Processing Disorders - Visual Discrimination: Difficulty recognizing the differences between similar shapes, objects, or letters. - Visual Figure/Ground Discrimination: Difficulty distinguishing a letter or shape from its background. - Visual Sequencing: Difficulty recognizing letters, shapes, or words in the correct order. They may read the same line over and over, or skip lines completely. - Visual Motor Processing: Trouble practicing what they see to coordinate with the way they move. For example, they may bump into objects while walking or struggle to write within the lines. - Long/Short Term Visual Memory – Difficulty remembering shapes, symbols, or objects they’ve seen, which can cause difficulty with reading, writing, and spelling. - Visual Spatial Awareness – May struggle to understand how close objects are to one another or to understand where objects are in space. - Visual Closure – Trouble recognizing an object when only specific sections of the object are visible. - Letter and Symbol Reversal – Switches letters or numbers when writing, or mistakes similar letters (i.e. “b” for “d” or “w” for “m”). 
Language Processing Disorders A specific type of auditory processing disorder (APD), children and adults with language processing disorder (LPD) have difficulty attaching meaning to sound groups that form words, sentences, and stories. While auditory processing disorder impacts the interpretation of all sounds coming into the brain, such as processing the sequence of sounds or where they come from, a language processing disorder relates only to the processing of language. LPD can influence expressive language (what you say) and/or receptive language (how you interpret what others say). Symptoms of Language Processing Disorders - Difficulty gaining meaning from spoken language - Difficulty producing written output - Struggles with reading comprehension - Exhibits difficulty expressing thoughts verbally - Feelings of frustration when there is a great deal to say and difficulty expressing it verbally - May feel that their words are right on the tip of their tongue, but experiences difficulties verbalizing their thoughts - Can draw and describe an object, but can’t think of the word for it - Difficulty understanding jokes
Presentation on theme: "Issues:Articles of Confederation Why weakness?Constitution of the United States Levying Taxes Federal Courts Regulation of trade Executive Amending document."— Presentation transcript: Issues:Articles of Confederation Why weakness?Constitution of the United States Levying Taxes Federal Courts Regulation of trade Executive Amending document Representation of states Raising an army Interstate commerce Disputes between states Sovereignty Passing laws Basics/Issues:Articles of Confederation Levying Taxes Congress could REQUEST states to pay taxes Federal Courts No system of federal courts no means by which to enforce US laws each state could interpret federal laws as it pleased and chose when—or even if—to administer them States relied on own courts rulings and ignored others Regulation of trade Congress endowed with the sole authority to negotiate foreign treaties Did not have the power to control trade between individual states and foreign countries Executive (President) No executive with power. President of U.S. merely presided over Congress Issues:Articles of Confederation Amending document (ratifying) 13/13 needed to amend Articles Representation of states (# of Representatives) Each State received 1 vote and each State provide at least two delegates, regardless of the amount of land, population size, or wealth of that state Raising an army Congress could not draft troops, dependent on states to contribute forces Some states had created their own armies, others their own navies. Issues:Articles of Confederation Interstate commerce -No provision to regulate trade -Each state had a tariff on the other states - Def: A tariff is a tax on goods coming in from another state or country Disputes between States -Congress had the power to settle border and interstate disputes Had to select a panel of judges to hear the case Issues:Articles of Confederation Sovereignty Sovereignty resides in states supported its own cause at the expense of the common good. unwillingness of States to focus on their role as part of a bigger nation Passing laws 9/13 needed to approve legislation When the Congress could agree to enact legislation, there was no judicial system to enforce the laws. Issues:Articles of Confederation *What did the document say about the issues? (Don’t skimp on answers) Levying Taxes Federal Courts Regulation of trade Executive (President) Issues:Articles of Confederation Amending document (ratifying) Representation of states (# of Representatives) Raising an army Issues:Articles of Confederation Interstate commerce Disputes between States – how settled? Issues:Articles of Confederation Sovereignty rested more where? Passing laws Why aren’t we still under the Articles of Confederation? What happened between 1777 (ratified 1781) and 1788 that would cause our Founding Fathers to start over again? Each Case Study What is the problem(s)/issue(s)? Highlight What caused it? or Underline Why couldn’t the National Government (under Articles of Confederation) help solve the problem(s)/issue(s)? – List ALL
The history of bread and bread making The history of bread and bread making starts way back in ancient times. The earliest breads were unleavened. Variations in grain, thickness, shape, and texture varied from culture to culture. Archaeological evidence confirms yeast, both as leavening agent and for brewing ale, was used in Egypt as early as 4000 B.C. Food historians generally cite this date for the discovery of leavened bread and birth of the brewing industry. Baking techniques in Ancient Egypt Judging by the evidence of tomb paintings from the 25th century BC onward, the Egyptians began to evolve baking techniques with results that were both creative and predictable. The dough, made from sifted flour – wheat flour was kneaded in large earthenware tubs. Its consistency was liquid enough for it to be poured into moulds preheated by being stacked in a kind of oven. Once the dough had been poured into the hot mould it was covered with a slightly larger mould placed upside down on it and returned to the oven. When baked, the bread was the shape of a twin-truncated cone. The Assyrians made dough of mixed wheat and barley flour and placed it in a large earthenware vessel heated to a high temperature with embers or hot stones. The vessels were then hermetically sealed with a lid and buried in the ground: the bread inside them was baked on the hay box principle. The First bread ovens The first breads in Greece were also cooked in the embers or under a dome-shaped bell. It is interesting to learn that Greeks actually invented the bread oven, which could be pre-heated and opened at the front. In ancient times barley maza was the staple food. Solon (c. 640 – c. 560 BC), the Athenian lawmaker and poet, drew laws to regulate everything including eating bread, artos, only on feast days. Artos is Greek for leavened loaf, but in Modern Greek, it is now more commonly used in the context of communion bread used in church. The significance of the artos is that it serves to remind all Christians of the events connected with the Resurrection of Jesus Christ. However, in the 5th century BC, at the time of Pericles, artos could be bought from a baker’s shop. So could maza, which was cheaper and long remained the staple food of the poor. Meals consisted of bread or maza, and accompaniments to bread, opson. Opson is an important category in Ancient Greek foodways, similar to Okazu in Japanese cuisine, or Banchan in Korean cuisine. Opson meant any food but bread: vegetables, cheese, onions, olives, meat, fish, and fruit. Later the term was referred to fish. From time of Pericles, the art of the Greek bakers was not only seen in mixing various kinds of bread dough but also in designing different shapes, often made for a particular occasion. There was Cappadocian, milk bread baked in a mould, boletus, a mushroom-shaped with poppy seed sprinkled on top. Daraton, an unleavened bread in shape of the flat cake, Streptice – a plaited loaf, Almogaeus, a coarse rustic bread, Syncomiste, a dark bread made of unbolted rye flour, etc. Furthermore, there were more than 80 different kinds of cakes. Plakon, usually translated as “cake”, was a plain cake made of oat flour, cream cheese, and honey. All other varieties other than plakon, had their own name. The term artos covers any specified type of loaf. Regardless of their close links with the Greeks, the Romans had almost no interest in baking until the 7th century BC. 
Bread was often free, since emperors and careerists made large-scale distributions to ease their consciences, or prevent riots. Roman bread was originally made at home. Throughout the centuries purists forbade the offering of bread as a sacrifice in the practice of Roman religion. When bread replaced maza, the wealthier classes kept slave bakers, some of whom had to wear gloves to knead the dough and masks to protect it from undesirable drops of perspiration and the breath of a common person. The baking of the raised dough evolved through the usual stages: in the embers, on a griddle, under a bell, and finally in the brick oven. The Greeks had established colonies on the Mediterranean shores of Gaul before the Romans did. In 168 BC a considerable number of Greek craftsmen bakers (pistores) inhabited Rome. Fond as they were of good bread, the Greeks had trained native bakers, and the Gauls, showing talent that was to persist in their modern French descendants, soon became very good at their job. The Gauls, already introduced to beer by the Greeks, soon started using beer yeast as a raising agent. This was the spuma concreta or froth formed on the surface of the liquid by fermentation. Beer yeast made very light, well-risen bread, which was rightfully considered delicious.
Bakers College in Ancient Rome
During the reign of Augustus around 30 BC, there were 329 bakeries in Rome run by Greeks with Gaulish assistants. Roman bread was usually round, the tops of the loaves being shaped in many different ways, just as there were many different kinds of dough. Roman cakes were made of flaky pastry of the modern Arab kind. The pastry was stretched out thin in separate sheets and contained cheese and honey. With the intention to please – placenda est – it soon acquired the name placenta. The dough of the placenta was also used to make cakes called scriblita, spira and spherita, shaped in ways corresponding to their names. The bakers had been allowed to form a collegium – a professional association. This “bakers college” ended up as exclusive a caste as any in India. A baker’s son could only become a baker, and could not follow any other profession, even if he married outside. Besides the religious ritual of the college meetings, there was a sign language known only to the initiates: tokens and passwords which protected trade secrets. There were no women members; however, women were found in the colleges of greengrocers, vendors of clothing, and tavern keepers. Bread was a masculine business. The Gauls of Roman times proved to be successful bakers. Bread was the basis of the meal in the cereal-growing land of Gaul, even more than in Greece.
Bread as prime symbol of nourishment
In the early days of Christianity, barley bread seems to have been considered a food suitable for religious penance or legal punishments. St. Patroclus, a French saint from Troyes, lived on barley bread dipped in water and sprinkled with salt.
He was anticipating the soup which was to become a staple item of the European diet from the Dark Ages onwards: a slice of bread at the bottom of a bowl, with broth or soup made in a pot poured on to it. From the Dark Ages, bread became part of the standard table setting. A thick slice of bread, known as a trencher and sometimes laid on a wooden plate, served as a base upon which pieces of meat and sauce were placed. In the Middle Ages, the wealthier classes did not eat the trencher bread. Instead, they threw it to the dogs or poor people waiting outside the door. Because medieval houses were usually made of wood or daub and could easily catch fire, bread ovens were built away from inhabited areas, usually near water, to put out flames that got out of control. In France, mills and bakeries were not separated until the 15th century. Most flour used for bread making has been made of wheat since the 12th century. The price of wheaten bread set the standard of prices for other breads made of barley or rye flour, oatmeal or maslin. The price of salt used in the dough also had to be included in the price of the bread. It is interesting to know that up to the 19th century, bread in Europe was often adulterated. Some of the commonly used additives in the 19th century were poisonous. To whiten bread, for example, bakers sometimes added alum and chalk to the flour, while mashed potatoes, calcium sulphate, pipe clay and even sawdust could be added to increase the weight of their loaves. Rye flour or dried powdered beans could be used to replace wheat flour and the sour taste of stale flour could be disguised with ammonium carbonate. Brewers, too, often added mixtures of bitter substances, some containing poisons like strychnine, to ‘improve’ the taste of the beer and save on the cost of hops. By the beginning of the 19th century the use of such substances in manufactured foods and drinks was so common that town dwellers had begun to develop a taste for adulterated foods and drinks; white bread and bitter beer were in great demand. This gradually came to an end with government action, such as the 1860 and 1899 Food Adulteration Acts in Britain. The first commercial yeast was produced in the United States in the 1860s. Charles and Maximillian Fleischmann, immigrants from Austria-Hungary, patented and sold standardized cakes of compressed yeast produced in their factory in Cincinnati. By the early twentieth century, factory-produced yeast was widely available. Cookbook recipes began specifying that commercial yeast be added directly to bread dough in sufficient quantities to leaven it in less than two hours. Bread changed in texture, becoming lighter and softer, and blander in flavor. For generations, white bread was the preferred bread of the rich while the poor ate the whole grain bread. However, in most western societies, in the late 20th century, whole grain bread became preferred as having superior nutritional value. The history of bread making is as rich as the bread itself. Bread, the staff of life, has become the prime symbol of nourishment.
Bread demands respect and, as one of the three sacramental foods, is regarded as genuinely sacred. 1. Jacob, Heinrich Eduard. Six Thousand Years of Bread: Its Holy and Unholy History: Its Holy and Unholy History. New York: Lyons and Burfold, 1997. 2. Rubel, William. Bread: A Global History. The Edible Series. The University of Chicago Press Books, 2011.
While scientists don’t completely understand why allergies develop, they do believe that a combination of things, from genetic predisposition to environmental factors, creates the confusion in the immune system. There are two main categories of risks that can contribute to the development of allergies—those that you can't change, and those that you can. Because you can't control whether or not you develop allergies, the line between uncontrollable risks (which are out of your control) and controllable factors is grey. Many things that may prevent allergies need to occur at a very young age.
Uncontrollable Risk Factors
These variables are out of your control. Although you can’t do anything to change them, it’s important to know if you are at risk.
Your family history. While sensitivities to specific allergens are not inherited, the tendency to develop allergies can be traced back to your parents. If your mother was allergic to dust mites, for example, you might also develop allergies—but not necessarily to the same substance. If one of your parents had allergies, you have a one in three chance of also developing an allergy. This risk jumps as high as 75% if both of your parents had allergies.
Your age. Because repeated exposure to substances can prompt an allergic reaction, you are more likely to develop allergies as you get older.
Your immune response. The reactions of your immune system are out of your control. Once your body becomes sensitive to a substance, your immune system will produce large amounts of antibodies to fight what it sees as a dangerous intruder. The type of antibody most commonly found in allergic reactions is called Immunoglobulin E (IgE), but your body can produce a unique antibody for every type of allergen.
Your Environment. Generally speaking, developed countries have much higher incidences of allergies than developing areas of the world. Scientists believe that the clean and sanitized homes of the industrialized world are actually detrimental to the immune system, and that exposure to illness-causing bacteria is necessary for the immune system to function optimally. When your immune system is not challenged by natural foes, it malfunctions and becomes supersensitive to seemingly harmless substances.
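The family-history figures above can be read off directly; the tiny Python lookup below simply restates them (roughly a one-in-three chance with one allergic parent, up to about 75% with two). It is an illustration of the article's ballpark numbers, not medical guidance.

def inherited_allergy_risk(parents_with_allergies):
    risk = {
        0: "baseline population risk",
        1: "about 1 in 3 (roughly 33%)",
        2: "up to about 75%",
    }
    return risk[parents_with_allergies]

print(inherited_allergy_risk(2))  # up to about 75%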
When confronted with something truly terrifying (say, for example, an irritated grizzly bear), most human faces assume the same expression, with bulging eyes and flaring nostrils. Researchers have long suspected that those facial adjustments serve some evolutionary purpose, but the mechanism has been unclear for over a century. Now, a study presents an answer that seems rather obvious in retrospect. Those wide-open eyes and flared nostrils take in more sensory information, which helps when you’re trying to figure out how to evade swiping bear claws. Curiosity about the purpose of facial expressions goes back to Charles Darwin. In 1872, Darwin published The Expression of the Emotions in Man and Animals, which discussed the similar facial expressions found across human cultures and in some animal populations, and theorized that the expressions must have some evolutionary benefit. He guessed that the advantage lay in the ability to communicate emotions, which could reduce misunderstandings and help a group function efficiently. Later scientists followed Darwin’s train of thought and discovered that the expression of emotions is strikingly similar across cultures—horror and disgust look pretty much the same on the face of a New Yorker as they do on a Nigerian, and people from different cultures can recognize emotions such as happiness, anger and surprise on others’ faces, even if they don’t share a language. The fact that emotional expressions seem to be universal led scientists to believe they weren’t used only for communication and social purposes, but also served an additional adaptive biological function [LiveScience]. The new findings published in Nature Neuroscience [subscription required] suggest that even though facial expressions are useful socially, they probably evolved first as a way to improve sensory perceptions at a time of crisis. In the study, [neuroscientist Joshua] Susskind developed computer models for the facial expressions of fear and disgust. He then trained volunteers to pull each face. A fearful expression required participants to widen their eyes, raise their eyebrows and flare their nostrils, while a disgusted face was the opposite: a lowered brow, closed eyes and scrunched-up nose. Measurements from video footage revealed those pulling fearful faces were not only better at spotting objects either side of them, but scanned their eyes faster, suggesting they could see danger coming more quickly [The Guardian]. These test subjects also had improved air flow through their noses. All of these reactions would be helpful to a human in the midst of a “fight or flight” response, as he tries to decide between brawling and running away. Researchers also discovered that facial expressions that signify disgust serve the opposite purpose, and decrease both field of vision and air flow. Again, it seems somewhat obvious in retrospect: How much sensory information do people really want about a chunk of rotting meat?
The total of all the money coming into a country from abroad less all of the money going out of the country during the same period. This is usually broken down into the current account and the capital account. The current account includes: *visible trade (known as merchandise trade in the United States), which is the value of exports and imports of physical goods; *invisible trade, which is receipts and payments for services, such as banking or advertising, and other intangible goods, such as copyrights, as well as cross-border dividend and interest payments; *private transfers, such as money sent home by expatriate workers; *official transfers, such as international aid. The capital account includes: *long-term capital flows, such as money invested in foreign firms, and profits made by selling those investments and bringing the money home; *short-term capital flows, such as money invested in foreign currencies by international speculators, and funds moved around the world for business purposes by multinational companies. These short-term flows can lead to sharp movements in exchange rates, which bear little relation to what currencies should be worth judging by fundamental measures of value such as purchasing power parity. As bills must be paid, ultimately a country's accounts must balance (although because real life is never that neat a balancing item is usually inserted to cover up the inconsistencies). "Balance of payments crisis" is a politically charged phrase. But a country can often sustain a current account deficit for many years without its economy suffering, because any deficit is likely to be tiny compared with the country's national income and wealth. Indeed, if the deficit is due to firms importing technology and other capital goods from abroad, which will improve their productivity, the economy may benefit. A deficit that has to be financed by the public sector may be more problematic, particularly if the public sector faces limits on how much it can raise taxes or borrow or has few financial reserves. For instance, when the Russian government failed to pay the interest on its foreign debt in August 1998 it found it impossible to borrow any more money in the international financial markets. Nor was it able to increase taxes in its collapsing economy or to find anybody within Russia willing to lend it money. That truly was a balance of payments crisis. In the early years of the 21st century, economists started to worry that the United States would find itself in a balance of payments crisis. Its current account deficit grew to over 5% of its GDP, making its economy increasingly reliant on foreign credit.
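Since the balance of payments is essentially a sum of named components, a small sketch can make the bookkeeping explicit. In the Python snippet below the component names follow the definitions above, while the numbers are placeholder values invented purely to show how the pieces add up (a positive total is a surplus, a negative one a deficit).

current_account = {
    "visible trade": -120.0,       # exports minus imports of physical goods
    "invisible trade": 45.0,       # services, intangibles, cross-border dividends and interest
    "private transfers": 10.0,     # e.g. money sent home by expatriate workers
    "official transfers": -5.0,    # e.g. international aid
}
capital_account = {
    "long-term capital flows": 60.0,
    "short-term capital flows": 12.0,
}

current_balance = sum(current_account.values())
overall = current_balance + sum(capital_account.values())
print(f"current account balance: {current_balance:+.1f}")
print(f"overall balance (before any balancing item): {overall:+.1f}")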
By Lynn Yarris Berkeley Lab scientists have developed a new imaging technique that has the oil industry keenly anticipating its potential use in finding petroleum and natural gas reservoirs hidden beneath underwater bodies of salt. Michael Hoversten and Frank Morrison, geophysicists with the Earth Sciences Division, working with Steven Constable, a marine geophysicist with the Scripps Institution of Oceanography at UC San Diego, are developing the technique, which is called "marine magnetotellurics" or marine MT. Based on the scattering of low-frequency electromagnetic radiation from the upper atmosphere by geological formations below the Earth's surface, marine MT is designed to augment seismic imaging in underwater geophysical surveys. "We've found that these low-frequency electromagnetic fields are still recordable and capable of being used for sub-bottom imaging even at ocean depths of up to two kilometers," says Morrison, who is also a professor on the UC Berkeley campus. With conventional seismic imaging, soundwaves are bounced off underground rock layers and reflected back to the surface where instruments record their travel time. This yields valuable information about rock formations and structures that can be used, among other purposes, to predict the presence and approximate size of petroleum and gas reservoirs. Seismic imaging, however, runs into problems in areas where the reservoir rocks underlie salt structures. Sometimes covering hundreds of square miles, these salt bodies are highly irregular in shape and excellent reflectors of soundwaves. These characteristics prevent surveyors from getting an accurate reading on the thickness and shape of the underside of the salt. Salt is also highly resistant to the flow of electrical current, a fact that marine MT exploits through the utilization of naturally occurring ultra-low-frequency radiation caused by the interaction of the solar wind and the earth's magnetic field. The result of this is electromagnetic waves and associated currents that penetrate deeply into the earth's crust. By measuring both magnetic fields and the resulting electric fields, geophysicists can learn about the properties of individual rock strata. The marine MT device is lowered into the Gulf of Mexico for testing. The device detects low-frequency electromagnetic fields to create sub-bottom ocean images. "The electrical resistivity of salt is often more than 10 times greater than that of the surrounding sediments" says Hoversten. "By measuring the distortion in the flow of electrical currents through seawater and sediment produced by the presence of salt, we can easily map major structures and resolve questions not answered by seismic imaging. In this manner, marine MT provides us with complementary as well as independent information." Hoversten, Morrison, and Constable conducted tests in the Gulf of Mexico where huge oil and gas reservoirs are believed to be hidden under vast expanses of salt. Marine MT surveys were conducted over two sites, known as "Mahogany" and "Gemini," where the prospects for finding oil and gas are rated good. Mahogany is a relatively shallow water site, about 100 meters in depth, off the Louisiana coast. Gemini is further off-shore in water as deep as 1.5 kilometers (nearly 5,000 feet). The device used to measure underwater electrical resistivity consists of an x-shaped frame packed with electrodes and special magnetic field sensors which were developed at Berkeley Lab and are among the most sensitive ever made. 
To this assembly is added a buoyancy chamber and a concrete anchor. The complete package, which looks somewhat like a four-legged spider the size of a small raft, gets dropped overboard off a ship, sinks to the sea floor, and remains in the sediment for a couple of days. A remote signal is then used to detach the anchor from the frame, and the floatation chamber brings it to the surface. Typically, ten or more similar assemblies would be deployed during a single survey leg. "This is the first time where MT instrumentation has been successfully deployed and retrieved from deep water," says Hoversten who credits Constable and his colleagues at Scripps for the design of the marine equipment. The Scripps researchers believe their assembly will operate in water depths up to five kilometers (16,500 feet). Data from the Mahogany and Gemini surveys are still being processed, but Hoversten says he and Morrison are confident of its potential for mapping the extent and thickness of salt structures with sufficient resolution to gauge the prospects for oil or gas in the underlying sediment. A two-dimensional inversion of numerical data from a deep rooted salt structure, clearly showing the presence of the deep root. "Most of the undiscovered oil and gas in the Gulf and other bodies of water throughout the world are hidden under salt or other problem rocks, such as basalt and carbonates, that make seismic prospecting difficult to apply," says Hoversten. "By showing where and how deep the possible pay zones are, marine MT can go a long way toward helping a company pick its drilling targets." The cost of marine MT pales before the cost of drilling or even the cost of seismic imaging. Marine surveys are divided into "blocks," each of which constitutes an area of three square miles. It costs about $500,000 to survey one block with seismic imaging and about $50,000 to survey it with marine MT. Marine MT technology research is partially sponsored by the Department of Energy's Office of Computational and Technical Research, and the erltr Partnership Program. The surveys of the Mahogany and Gemini sites in the Gulf of Mexico were funded by a consortium of oil companies including AGIP, Chevron, BP, BHP, and Texaco.
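Using the per-block figures quoted above (one block covering three square miles, roughly $500,000 for seismic imaging versus about $50,000 for marine MT), a quick cost comparison is straightforward. The block count in the Python sketch below is a hypothetical survey size, not a number from the article.

SEISMIC_COST_PER_BLOCK = 500_000    # US dollars per block, as quoted above
MARINE_MT_COST_PER_BLOCK = 50_000   # US dollars per block, as quoted above

def survey_cost(blocks, cost_per_block):
    return blocks * cost_per_block

blocks = 20  # hypothetical survey covering 60 square miles
print(f"seismic imaging: ${survey_cost(blocks, SEISMIC_COST_PER_BLOCK):,}")
print(f"marine MT:       ${survey_cost(blocks, MARINE_MT_COST_PER_BLOCK):,}")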
The learning differences, preferences, and varied backgrounds existent in the classroom present teachers with a challenging task: help every student become a successful learner. How can teachers support all students’ diverse needs? Much confusion and fear have surrounded differentiated instruction and its use in the classroom. Myth #1: Differentiation = Individualization Differentiation doesn’t mean individualizing the curriculum for each student. Yes, when teachers meet one-on-one and conference with students, modifying instruction to best suit the student’s needs, both individualization and differentiation are taking place. However, writing an individual lesson plan for every student in the classroom is NOT differentiating (it’s insanity). Instead, differentiation involves using quality and effective instructional practices to strategically address groups of students based on various levels of learning readiness, interests, and learning styles. Myth #2: Every student should be doing something different
Subject-Verb Agreement: Endings

Can you find the subject and verb in a sentence? If not, or if you need some review, check out the "What is a Sentence" handout. Subject-verb agreement means that the verb must agree with its subject in person and number. To review person and number, let's look at some verb conjugations.

Possibly because of speaking habits, many people fail to put the "s" on the 3rd person singular. We need to hear and write:
- He knows the answer.
- She sees the solution.
But remember there is no "s" on the first person verb.
- I understand the problem.
- Not – I understands the problem.
Since you don't always write with pronouns as the subject of your sentence, you need to see how this works with a noun:
- The train runs down the track.
- Trains run down the track every day.
Notice here that when the subject does not have an "s," the verb does, and vice versa. If you are making this kind of subject-verb agreement error, you need to learn what endings are needed and to listen carefully for whether you have put on or left off an ending.
A ketone (pronounced "key-tone") is either the functional group characterized by a carbonyl group (O=C) linked to two other carbon atoms or a chemical compound that contains this functional group. A ketone can be generally represented by the formula R–C(=O)–R′, where R and R′ are carbon-containing groups.

A carbonyl carbon bonded to two carbon atoms distinguishes ketones from carboxylic acids, aldehydes, esters, amides, and other oxygen-containing compounds. The double bond of the carbonyl group distinguishes ketones from alcohols and ethers. The simplest ketone is acetone (also called propanone).

The carbon atom adjacent to a carbonyl group is called the α-carbon. Hydrogens attached to this carbon are called α-hydrogens. In the presence of an acid catalyst, the ketone undergoes so-called keto-enol tautomerism. The reaction with a strong base gives the corresponding enolate. A diketone is a compound containing two ketone groups.

In general, ketones are named using IUPAC nomenclature by changing the suffix -e of the parent alkane to -one. For common ketones, some traditional names such as acetone and benzophenone predominate, and these are considered retained IUPAC names, although some introductory chemistry texts use names such as propanone. Oxo is the formal IUPAC nomenclature for a ketone functional group. However, other prefixes are also used by various books and journals. For some common chemicals (mainly in biochemistry), keto or oxy is the term used to describe the ketone (also known as alkanone) functional group. Oxo also refers to a single oxygen atom coordinated to a transition metal (a metal oxo).

A carbonyl group is polar. This makes ketones polar compounds. The carbonyl groups interact with water by hydrogen bonding, and ketones are soluble in water. A ketone is a hydrogen-bond acceptor, but not a hydrogen-bond donor, and cannot hydrogen-bond to itself. This makes ketones more volatile than alcohols and carboxylic acids of similar molecular weight.

The α-hydrogen of a ketone is far more acidic (pKa ≈ 20) than the hydrogen of a regular alkane (pKa ≈ 50). This is due to resonance stabilization of the enolate ion that is formed through dissociation. The relative acidity of the α-hydrogen is important in the enolization reactions of ketones and other carbonyl compounds.

Spectroscopy is an important means for identifying ketones. Ketones and aldehydes display a significant peak in infrared spectroscopy at around 1,700 cm⁻¹ (slightly higher or lower, depending on the chemical environment).

Several methods exist for the preparation of ketones in the laboratory, and ketones engage in many organic reactions.

Acetone, acetoacetate, and beta-hydroxybutyrate are ketones (or ketone bodies) generated from carbohydrates, fatty acids, and amino acids in humans and most vertebrates. Ketones are elevated in blood after fasting, including a night of sleep, and in both blood and urine in starvation, hypoglycemia due to causes other than hyperinsulinism, various inborn errors of metabolism, and ketoacidosis (usually due to diabetes mellitus). Although ketoacidosis is characteristic of decompensated or untreated type 1 diabetes, ketosis or even ketoacidosis can occur in type 2 diabetes in some circumstances as well. Acetoacetate and beta-hydroxybutyrate are an important fuel for many tissues, especially during fasting and starvation. The brain, in particular, relies heavily on ketone bodies as a substrate for lipid synthesis and for energy during times of reduced food intake.
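The acidity gap mentioned above (pKa ≈ 20 for the α-hydrogen versus ≈ 50 for an ordinary alkane C–H) is easier to appreciate as a ratio of acid dissociation constants, since Ka = 10^(−pKa). A minimal Python sketch using only the two approximate values quoted in the text:

```python
def ka_from_pka(pka: float) -> float:
    """Acid dissociation constant from pKa (Ka = 10**-pKa)."""
    return 10 ** (-pka)

ketone_alpha_h = ka_from_pka(20)   # alpha-hydrogen of a ketone
alkane_h = ka_from_pka(50)         # ordinary alkane C-H

# The ratio shows the alpha-hydrogen is about 10**30 times more acidic,
# which is why enolate formation with strong base is practical for ketones
# but not for simple alkanes.
print(f"{ketone_alpha_h / alkane_h:.1e}")   # ~1.0e+30
```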
At the NIH, Richard Veech refers to ketones as "magic" in their ability to increase metabolic efficiency while decreasing production of free radicals, the damaging byproducts of normal metabolism. His work has shown that ketone bodies may treat neurological diseases such as Alzheimer's and Parkinson's disease, and that the heart and brain operate 25 percent more efficiently using ketones as a source of energy.

Ketones are often used in perfumes and paints to stabilize the other ingredients so that they don't degrade as quickly over time. Other uses are as solvents and intermediates in the chemical industry. Examples of ketones are acetophenone, butanone (methyl ethyl ketone), and propanone (acetone).

New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution, crediting both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation.
Renal failure refers to temporary or permanent damage to the kidneys that results in loss of normal kidney function. There are two different types of renal failure—acute and chronic. Acute renal failure has an abrupt onset and is potentially reversible. Conditions that may lead to acute or chronic renal failure may include, but are not limited to, the following: - Decreased blood flow to the kidneys for a period of time. This may occur from blood loss, surgery, or shock. - An obstruction or blockage along the urinary tract. - Hemolytic uremic syndrome—usually caused by an E. coli infection, kidney failure develops as a result of obstruction to the small functional structures and vessels inside the kidney. - Ingestion of certain medications that may cause toxicity to the kidneys. - Glomerulonephritis—a type of kidney disease that involves glomeruli. During glomerulonephritis, the glomeruli become inflamed and impair the kidney's ability to filter urine. - Any condition that may impair the flow of oxygen and blood to the kidneys such as cardiac arrest. What are the symptoms of acute renal failure? The symptoms for acute and chronic renal failure may be different. The following are the most common symptoms of acute and chronic renal failure. However, each child may experience symptoms differently. Acute symptoms may include: (Symptoms of acute renal failure depend largely on the underlying cause.) - Bloody diarrhea - Severe vomiting - Abdominal pain - No urine output or high urine output - History of recent infection - Pale skin - History of taking certain medications - History of trauma - Swelling of the tissues - Inflammation of the eye - Detectable abdominal mass - Exposure to heavy metals or toxic solvents What is the treatment for acute renal failure? Specific treatment for renal failure will be determined by your child's physician based on: - Your child's age, overall health, and medical history - The extent of the disease - The type of disease (acute or chronic) - Your child's tolerance for specific medications, procedures, or therapies - Expectations for the course of the disease - Your opinion or preference Treatment of acute renal failure depends on the underlying cause. Treatment may include: - Administration of intravenous (IV) fluids in large volumes (to replace depleted blood volume) - Diuretic therapy or medications (to increase urine output) - Close monitoring of important electrolytes such as potassium, sodium, and calcium - Medications (to control blood pressure) - Specific diet requirements In some cases, children may develop severe electrolyte disturbances and toxic levels of certain waste products normally eliminated by the kidneys. Children may also develop fluid overload. Dialysis may be indicated in these cases. For more information or to schedule an appointment, call 314.454.5437 or 800.678.5437 or email us.
Bruce F. Barber

The rash of earthquakes San Felipe experiences occasionally gives rise to the question, "What is an earthquake?" The answer is both simple and complex. Simply stated: an earthquake is a shaking of one area of the earth's surface. A more complex description involves a sudden release of progressively stored energy in underground rocks, causing movement along a "fault."

To understand a fault, it is essential to understand at least a little about the theory of plate tectonics, which states that the surface of the earth is divided into individual PLATES. Each plate is made up of a portion of the Earth's CRUST and a portion of the uppermost part of the subsurface MANTLE, and has an average thickness of 100 kilometers. These plates, which can be envisioned as segments of a cracked shell on a boiled egg, are in motion relative to one another, sliding over the lower part of the mantle.

A TRANSFORM BOUNDARY occurs where two plates slide past one another. The San Andreas fault, in Alta and Baja California, is regarded as this type of boundary, and the earthquakes along the fault are a by-product of each plate's motion. The vertical edges of adjacent plates are anything but smooth and can best be described as looking something like a saw blade. These saw-like edges of rock snag on each other and, although each plate continues its relative motion, the snagged areas cannot move, creating a significant build-up of energy.

Earthquakes occur when built-up energy overcomes the resistance between two snagged plates. If you bend a stick of wood, your hands put stress (the energy) on the stick. Like a bending stick, rock can deform only so far and then it breaks. When a rock breaks under the stress of plate tectonics, waves of released energy are sent out through the earth. These waves of energy are called SEISMIC WAVES. It is these seismic waves that cause the ground to tremble and shake during a temblor.

Because of the continual motion of adjacent plates, temblors occur many times each day along a transform boundary like the San Andreas fault. However, only those with a magnitude in excess of 3.0 on the Richter Scale are felt by humans. The two quakes felt on 23 and 24 November were centered 150 miles from San Felipe, at Westmoreland, California, at the southern tip of the Salton Sea, and measured 6.0 and 6.3 on the Richter Scale. Last year, we felt a quake here which was centered at Cerro Prieto, south of Mexicali. Each of these quakes occurred along the San Andreas fault system, and they are constant reminders of the fact that we are continually on the move, at between 2 and 6 centimeters per year.

Some day (measured at two inches per year!), San Felipe will cross the international border into the United States. When that occurs, it will probably be during an earthquake, thereby causing me to ask, "Will we be taking the Sea of Cortez with us?"
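The article compares magnitude 3.0, 6.0, and 6.3 events. Richter magnitudes are logarithmic, and radiated seismic energy is commonly estimated with the Gutenberg-Richter relation log10(E) ≈ 1.5M + 4.8 (E in joules). A small sketch, offered as a standard approximation rather than anything stated in the article:

```python
def seismic_energy_joules(magnitude: float) -> float:
    """Approximate radiated energy via the Gutenberg-Richter relation."""
    return 10 ** (1.5 * magnitude + 4.8)

m63 = seismic_energy_joules(6.3)
m60 = seismic_energy_joules(6.0)
m30 = seismic_energy_joules(3.0)

# Each whole unit of magnitude is ~31.6x more energy, so the magnitude-6.3
# event released roughly 2.8 times the energy of the 6.0, and the 6.0 about
# 31,600 times the energy of a barely felt magnitude-3.0 temblor.
print(round(m63 / m60, 1), round(m60 / m30))
```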
A century after the Titanic disaster, scientists have found an unexpected culprit of the crash: the moon. Anyone who knows history or blockbuster movies knows that the cause of the ocean liner’s accident 100 years ago next month was that it hit an iceberg. “But the lunar connection may explain how an unusually large number of icebergs got into the path of the Titanic,” said Donald Olson, a Texas State University physicist whose team of forensic astronomers examined the moon’s role. The team investigated speculation by the late oceanographer Fergus Wood that an unusually close approach by the moon in January 1912 may have produced such high tides that far more icebergs than usual managed to separate from Greenland, and floated, still fully grown, into shipping lanes that had been moved south that spring because of reports of icebergs. Olson said a “once-in-many-lifetimes” event occurred on January 4, 1912, when the moon and sun lined up in such a way that their gravitational pulls enhanced each other. At the same time, the moon’s closest approach to earth that January was the closest in 1,400 years, and the point of closest approach occurred within six minutes of the full moon. On top of that, the Earth’s closest approach to the sun in a year had happened just the previous day. The team’s Titanic research may have vindicated Captain Smith – albeit a century too late – by showing that he had a good excuse to react so casually to a report of ice in the ship’s path. The research will appear in the April issue of “Sky & Telescope” magazine. Source: Yahoo News Image: Connection to People
Jewish History Blog

Today is the minor holiday of Lag B'Omer, a break in the semi-mourning period after Passover, and the anniversary of the passing of Rabbi Shimon bar Yochai. This great sage was the primary disciple of Rabbi Akiva, and he inherited from his great mentor a strong antipathy towards Roman rule and culture.

Rabbi Shimon bar Yochai and his teacher Rabbi Akiva lived in one of the most turbulent periods in Jewish history. The Roman emperor Hadrian was in power, and though he had originally treated the Jews fairly, he underwent a radical change in the middle of his reign. At first, he had been involved in other wars and didn't want trouble with the Jews, who had fought so fiercely in 66-70 CE (though they ultimately lost). He was open to the idea of allowing the Jews to rebuild their Temple, as long as they would remain under Roman rule. But five years later, he decreed that not only should the Temple not be rebuilt, but the ruins should be razed so that the Jews would have no hope of trying.

This decree brought about the popular revolt led by Bar Kochba, who was a tremendous warrior and organizer. But once he was victorious and in a position of leadership, Bar Kochba turned paranoid. By definition, a leader is in the public view, and everybody can take shots at him, which they always do. So Bar Kochba "lost it." His paranoia was so extreme that he killed his own uncle. Upon seeing this brutality, Rabbi Akiva withdrew his original support of Bar Kochba.

After Bar Kochba's defeat, Hadrian began to persecute the rabbis unmercifully. He realized where the leadership really lay, and he figured the only way to make the Jews docile was to get rid of the rabbis. Thus, Rabbi Akiva, along with nine other great sages, was tortured to death.

But rabbis are hard to get rid of. The Romans may have killed Rabbi Akiva, but his disciple Rabbi Shimon bar Yochai rose up in his place. And Rabbi Shimon was neither reticent nor politically correct. Some of his contemporaries openly praised the Romans for rebuilding the physical infrastructure of the land. They felt the Jews should compromise with them. Rabbi Shimon, however, was outspoken in his condemnation, stating that even the Romans' seemingly positive actions stemmed from sinister motives.

Then a Jewish spy working for the Roman government reported Rabbi Shimon bar Yochai's words to the Roman authorities, and a warrant for his arrest was issued. Rabbi Shimon, together with his son Elazar, fled to the desert and found refuge in a cave, where they spent thirteen years in hiding. During that long and isolated sojourn in the desert cave, Rabbi Shimon was able to delve into the hidden, mystical level of Torah and comment on and explain its mysteries. It was at this time that he wrote the Zohar, the essential book of Kabbalah, though the book was not published until the fourteenth century, by a Spanish Jew, Moses de Leon. Though there was much debate over the authenticity of the book, tradition holds fast that it was Rabbi Shimon ben Yochai's. The Zohar has such depth and spirituality that the majority opinion holds it was beyond the ability of Moses de Leon to write himself.

After thirteen years in the cave, there was a regime change in Rome, and Shimon bar Yochai and his son were granted amnesty. This marks the beginning of the thaw in Roman relations with the Jews, which would reach its height with Rabbi Judah the Prince, who would develop a friendship with Emperor Antoninus Pius.
That friendship is what allowed the oral tradition of Judaism, the Mishnah, to be written.

When Rabbi Shimon bar Yochai emerged from the cave and returned to the society of the land of Israel, he had achieved such a level of spirituality that he could not countenance the ordinary workaday activities of his fellow Jews who did not spend every waking moment in the study of Torah. Clearly, he was someone who brooked no compromises.

Tradition marks the minor holiday of Lag B'Omer as the anniversary of the death of Rabbi Shimon bar Yochai. He is buried on Mount Meron in the Upper Galilee, and up to 500,000 Jews visit the site each year on this day. Large bonfires are lit, young boys are given their first haircut, and entire families encamp on Mount Meron in commemoration of the day. The custom of bonfires has spread from Mount Meron throughout the rest of the Jewish world, inside Israel and out, though there is much rabbinic opinion that disapproves of this custom. Nevertheless, it is apparently here to stay, acrid smoke and dangerous sparks notwithstanding.

Rabbi Shimon bar Yochai's fierce opposition to Roman ways, his superhuman devotion to Torah study, and his contributions to the rebuilding of Jewish life after the Hadrianic persecutions all combine to make him one of the giants of Jewish history and tradition.
In 1871, paleontologist Othniel C. Marsh was leading a group of Yale University students on a fossil-collecting expedition through the western United States. That year, in the Cretaceous chalks of the prairies and badlands of western Kansas, Marsh found bones of previously unknown birds: slender, several feet in length, with powerful legs but very small, stubby wings. On a later expedition, Marsh found the skull of one of these birds, and discovered that it had teeth -- a trait missing from all modern birds, but present in the fossil Archaeopteryx, described only a few years earlier, and then as now the oldest and most primitive bird known. Marsh named his discoveries Hesperornis -- "Western bird" -- and designated several species: Hesperornis regalis, H. crassipes, and H. gracilis (now designated Parahesperornis gracilis). Together with another of Marsh's discoveries in Kansas, the flying toothed bird Ichthyornis, Hesperornis filled a large gap in the fossil history of birds.

Today, several more genera are known that are similar to Hesperornis, including Baptornis (also described by Marsh), Parahesperornis, Enaliornis, and others. Most have been found in western North America, such as the mounted Hesperornis on display at the Sternberg Museum of Natural History in Hays, Kansas. However, species are also known from Cretaceous deposits of Europe, Mongolia, and Kazakhstan. These species are grouped in the Hesperornithiformes. The largest known hesperornithiform, described in 1999 and named Canadaga arctica, may have reached a maximum adult length of over five feet (1.5 meters). Canadaga is the northernmost hesperornithiform yet found, coming from what is now the Northwest Territories of Canada. It is also the latest, dating almost to the end of the Cretaceous.

Despite the retention of certain very primitive characters (such as teeth), the Hesperornithiformes was a highly specialized taxon of Cretaceous birds. Hesperornithiform birds all had highly reduced, probably non-functional wings. The legs were powerful, but were set so far back that walking on land was probably awkward -- hesperornithiform birds probably spent almost all their time in the water, except, presumably, during the breeding and egg-laying season. One specimen of Parahesperornis has been preserved with imprints of thick, hairy feathers, useless for flight but efficient insulators. Like modern-day cormorants, hesperornithiform birds appear to have led an aquatic lifestyle, probably diving to catch fish. Although at least one species has been described from ancient freshwater deposits, most hesperornithiforms were marine. In fact, hesperornithiform birds were the only true dinosaurs to colonize the oceans; the aquatic "reptiles" of the time, such as the ichthyosaurs and plesiosaurs, were not true dinosaurs.

More images and information about Hesperornis are available at the Oceans of Kansas Paleontology site and at the Sternberg Museum of Natural History at Fort Hays State University, Kansas. The Dinosauricon includes a cladogram and listing of known hesperornithiforms.
Pacific Northwest History: Multicultural Perspectives

Summer 2014 quarter

Pacific Northwest History introduces multicultural aspects of the historical development of this region. A primary learning objective is for students to be able to articulate, through concrete historical examples, how liberty and justice have been interpreted and applied in the Northwest. With texts that provide accessible historical accounts, students will be exposed to Native American Indian perspectives on the eventual occupation of their lands by European imperialists, the origins and outcomes of competition among Europeans for the Pacific Northwest, and challenges placed on non-European ethnic groups – such as Chinese Americans, African Americans, Mexican Americans, and Japanese Americans – during the 19th and 20th centuries and into the 21st century. Attention to the experiences of women in making this history is included. The local historical development of Tacoma is used to highlight the role of capitalism in creating governing bodies and class differences among white European Americans, who collectively discriminated against the aspirations of people of color.

Pacific Northwest History also meets a teacher education endorsement requirement for elementary education, middle-level humanities, social studies, and history.

Disclaimer: Films and other course material periodically describe and present images of violence and use language that may be considered offensive. The purpose of this material is to present significant events within their respective historical contexts.

Fields of Study
Preparatory for studies or careers in
Location and Schedule
Offered during: Day
Advertised schedule: Tuesdays & Thursdays, 12:30-4:30 p.m.
The story of Christ preparing the land of Kirtland is as old as time and was without doubt foreseen in the heavens long before man appeared upon the earth. From the beginning, sacred events to transpire in Kirtland were undoubtedly planned and even announced. Joseph Smith emphasized that crucial temple work, the keys for which were received in Kirtland, constituted “a voice of gladness for the living and the dead.”1 Christ ordained such redemptive events “before the world was.”2 Ancient prophets were “fired with heavenly and joyful anticipations” of these events, which would “bring about . . . the salvation of the human family.”3 Christ fired with joyful anticipation the soul of the ancient prophet Malachi as He told him of sacred events to transpire in the Kirtland Temple.4 After the Savior’s resurrection, He commanded followers on the American continent to “write the words” of prophecy He gave to Malachi regarding events to transpire in Kirtland.5 It appears that Christ prepared “the Ohio” for centuries before He gathered His people there. Evidence of this preparation can be found in seventeenth-century England. King Charles II gave the state of Connecticut a thin slice of land that included the land of Kirtland. This thin slice was eventually whittled down to a 120-mile-wide area of land designated as the Connecticut Western Reserve. Settlement was primarily by people from New England. The Western Reserve seems to correspond approximately to the area the Lord called “the Ohio.” A decade after the signing of the Declaration of Independence, America’s Congress of the Confederation included Ohio as a part of the Northwest Territory. In what is called the Northwest Ordinance of 1787, Congress inserted two provisions that would have been important to the Lord. They assured religious freedom for all settlers and stated, “Schools and the means of education shall forever be encouraged.”6 Ohio was a desirable place for settlers, as extolled by George Washington: “If I was a young man, just preparing to begin the world or if advanced in life, and had a family to make provision for, I know of no country where I should rather fix my habitation.”7 From the beginning, Christ prepared an entire land and then selectively peopled it with a remarkable God-fearing frontier folk. The story of Kirtland’s preparation reveals their strong moral values, personal trials, incredibly hard work, generosity, and great faith. The story notes their response to the promptings of the Lord to listen, move, nurture, and prepare a frontier land for the advent of the gospel there and the ultimate building of the first temple of God in the Restoration. Kirtland, known intimately to the Lord, was being groomed for a great mission—greater than anything witnessed in two thousand years. The Savior prepared Kirtland and then commanded His Saints to gather there. In the very early days of this land, however, the story began quietly, and its players were few. Decades before the gospel was brought there, the Lord, as early as 1798, began to nudge key people and their families to “the Ohio.” It was no accident that the family of a future member of the First Presidency, Frederick G. Williams, moved there that year. It must also have been in the eternal plan that other families of future greatness would congregate in and around this underdeveloped land of promise. The Snow family, including Lorenzo and Eliza R., came in 1811. 
A year later the industrious colonizer Isaac Morley with his wife, Lucy, settled on two hundred acres where matters of importance would later occur. In 1814, John and Elsa Johnson (later instrumental in purchasing land for the temple) settled near Kirtland with their two sons, Luke and Lyman, who in time became members of the first Quorum of the Twelve Apostles. The Lord also planted other future Apostles in Ohio, where He would prepare them to receive His word. He sent Orson Hyde in 1819 and Parley P. Pratt and Lyman Wight in 1826. Future bishops Edward Partridge and Newel K. Whitney arrived in Ohio by 1820 and 1823, respectively. The incredible good that these men were to perform cannot be measured. The Lord not only summoned and prepared His future leaders but also assembled hundreds of other future Church members in the Ohio. Kirtland was ready to receive the gospel and in due time welcome the Prophet of the Restoration, Joseph Smith. 1. Doctrine & Covenants 128:19. 2. Doctrine & Covenants 128:22. 3. Joseph Smith, History of The Church of Jesus Christ of Latter-day Saints, ed. B. H. Roberts, 7 vols., 2d ed. rev. (Salt Lake City: The Church of Jesus Christ of Latter-day Saints, 1932–51), 4:609–10; “The Temple,” Times and Seasons 3 (May 2, 1842): 776. 4. Malachi 3–4. 5. 3 Nephi 24–25. 6. Documents of American History, ed. Henry Steele Commager and Milton Cantor, 2 vols. (Englewood Cliffs, N.J.: Prentice Hall, 1988), 1:131. 7. George Washington Writings, sel. John H. Rhodehamel (New York: Literary Classics of the United States,1997), 687.
The Passover Haggadah is not the creation of a single author but rather a composite liturgical text made up of biblical and rabbinic passages with the addition at the end of what one might call folk songs from long ago. The Haggadah was probably assembled sometime during the late Second Temple Period in Palestine and was meant to be read on Passover eve during the Passover seder, a ceremony held in Jewish homes and meant to commemorate the Israelite redemption from Egyptian bondage in biblical times. The Haggadah text has been embellished and enhanced over the centuries with illustrations. These illustrations serve both esthetic and instructional purposes. Not only do they beautify the appearance of the text but they add depth, meaning and even content to it. For example, Moses is not mentioned in the Haggadah at all even though he is such a dominating figure in the Exodus story in the Bible. In the illustrations that accompany the text, however, he is present in most of the images that deal with biblical narratives concerning the Exodus. These include the Pharaonic decree that all Israelite male new-borns be thrown into the Nile, the suffering and hardship of the Israelite servitude, the ten plagues, the parting of the Reed Sea and receiving of the Ten Commandments at Sinai. The latter is mentioned in the well-known hymn, Dayenu. The illustrations revolving around Moses may even contain scenes from his life that are not mentioned in the Haggadah text such as his coming across the burning bush in the desert. Scenes from the lives of the Patriarchs Abraham, Isaac and Jacob also figure in illustrations that appear in the Haggadah. The illustrations fill in details that the text does not emphasize but are important in Jewish tradition and belief. The Haggadah's message of redemption and freedom has also inspired and captivated modern artists. However, whereas the artists of the past were by and large anonymous and saw themselves as representatives of their various communities, Haggadahs illustrated by modern artists are personal statements as well as communal ones. Be they observant or secular, their Haggadah illustrations give expression to their most personal feelings as artists and as Jews. David Moss in his introduction discusses how his Haggadah reflects his joy at his immigration with his family to Israel. Avner Moriah, an Israeli artist, talks about the connection of his illustrations to his wife's serious illness and recovery. Victor Majzner tells us in his introduction to his Australian Haggadah that his illustrations express not only his connection to Judaism but also to his love for his adopted home, Australia. There is also no way to categorize modern artists' Haggadah illustration. The art may be figurative or highly abstract, childlike and humorous or stately and formal, full of color or black and white. Many artists have specific agendas, be they religious, political, or social. The lack of women in the Haggadah, for example, has become an issue that many current artists address in their illustrations. Miriam, Moses' sister, is represented in Passover Landscapes by Matthew L. Berkowitz. Though not mentioned at all in the traditional Haggadah text, she is mentioned in the Book of Exodus as having led the women of Israel in song and dance after the people crossed the Reed Sea. Berkowitz makes use of the Exodus passage in order to include her and the Israelite women in his illustrations. 
El Lissitzky and Menachem Birnbaum reflect on the tumultuous times of war and revolution in which they lived in their illustrations for the Had Gadya (a children's song that concludes the Haggadah). David Wander in his powerful and disturbing illustrations interprets the text in light of the devastation of European Jewry during World War II. The Haggadah is a timeless book; it has given voice to the hopes and dreams of Jews throughout the generations. And from medieval times to the present, artists have expressed these hopes and dreams in the magnificent illuminations they created. Text and image maintain an ongoing dialogue.
Use of carbon dating

Radiocarbon dating, or more generally radioisotopic dating, is used for estimating the age of old archaeological samples. In the upper atmosphere, nitrogen is converted into carbon-14 by cosmic rays, and living organisms take in this carbon-14 along with ordinary carbon-12 for as long as they exchange gases with their surroundings.

But when gas exchange is stopped, be it in a particular part of the body, as in deposits on bones and teeth, or when the entire organism dies, the ratio of carbon-14 to carbon-12 begins to decrease. The unstable carbon-14 gradually decays away (to nitrogen-14) at a steady rate. Scientists measure the ratio of carbon isotopes to estimate how far back in time a biological sample was active or alive. Estimating the age of a carbon-containing object (the number of years since the organism died) from the carbon-14 remaining in it is called radiocarbon dating; the age follows from the radioactive decay formula shown below.

In The Cosmic Story of Carbon-14, Ethan Siegel writes: "The only major fluctuation [in carbon-14] we know of occurred when we began detonating nuclear weapons in the open air, back in the mid-20th Century. If you ever wondered why nuclear tests are now performed underground, this is why."

Along with hydrogen, nitrogen, oxygen, phosphorus, and sulfur, carbon is a building block of biochemical molecules ranging from fats, proteins, and carbohydrates to active substances such as hormones. All carbon atoms have a nucleus containing six protons.

Most radiocarbon dating today is done using an accelerator mass spectrometer, an instrument that directly counts the numbers of carbon-14 and carbon-12 atoms in a sample. A detailed description of radiocarbon dating is available at the Wikipedia radiocarbon dating web page.

Radioisotopic dating is not limited to carbon. The age of the earth, the moon, rocks, and mineral deposits can be determined by applying the same principle with other isotopes. The age of glaciers, snow fields, and even wines can be estimated in the same way; in these cases, the radioactivity level of tritium (an isotope of hydrogen having a mass number of 3) is measured instead.
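The decay formula referred to above is the standard exponential-decay law: with a half-life of about 5,730 years, the age follows from t = (t_half / ln 2) · ln(N0/N), where N/N0 is the fraction of the original carbon-14 remaining. A minimal sketch (the 5,730-year half-life is the commonly cited value, not a figure from the article):

```python
import math

HALF_LIFE_C14_YEARS = 5730  # commonly cited half-life of carbon-14

def radiocarbon_age(fraction_c14_remaining: float) -> float:
    """Age in years from the fraction of the original C-14 still present."""
    return (HALF_LIFE_C14_YEARS / math.log(2)) * math.log(1 / fraction_c14_remaining)

# A sample retaining 25% of its original carbon-14 is two half-lives old:
print(round(radiocarbon_age(0.25)))   # ~11460 years
```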
Diamonds may be dazzling, but there's more to them than their appearance. They are some of the oldest crystals on Earth and there are diamonds in space that are even older. Discover what diamonds can reveal about our solar system and how their amazing natural properties are put to use. Find out about the formation of diamonds, a process that involves intense heat, crushing pressure, and the passage of millions of years. Could diamonds give us a glimpse of the universe before our solar system was born? Find out about scientists' research on diamonds that fall to Earth in meteorites. Diamonds could have formed as early as the first continents on Earth, around 3.3 billion years ago. But how do scientists know this? And can they be sure? Diamonds are the hardest natural substances on earth and the most transparent. Find out how these special properties can be used in communications, engineering and even medicine. The Vault gallery at the Museum includes a rare collection of 296 naturally coloured diamonds, not to mention a wide range of other gemstones.
The process of balancing an equation involves applying the law of conservation of mass and ensuring every element on each side of the equation has the same number of atoms. The process is completed in three distinct steps.

The first step in balancing an equation is to write it. Start by placing the chemical formulas of reactants on the left-hand side of the equation and the products on the right-hand side. Reactants and products are separated by placing an arrow between them to show the direction of the reaction. A reaction at equilibrium is signified by arrows facing in both directions.

Next, apply the law of conservation of mass and attempt to get the same number of atoms of every element on both sides. According to About, a good tip is to start by balancing an element that appears in only one reactant and one product. When the first element is balanced, continue to balance another and another until the equation is completed. Remember that balancing an equation is done by adding coefficients in front of the chemical formulas; subscripts should never be added, as they change the formula.

The final step in balancing an equation is to indicate the state of matter of the reactants and products. Use the letter g for gaseous substances, the letter s for solids, the letter l for liquids, and aq for aqueous solutions.
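The article describes balancing by inspection; the same bookkeeping can also be done mechanically by treating conservation of each element as a linear system. A small sketch using sympy (assumed to be installed; the methane-combustion reaction is just an illustration, not an example from the article):

```python
from math import lcm
from sympy import Matrix

# Balance a CH4 + b O2 -> c CO2 + d H2O by conserving each element.
# Columns: CH4, O2, CO2, H2O (products entered with negative signs);
# rows: C, H, O.
atoms = Matrix([
    [1, 0, -1,  0],   # carbon
    [4, 0,  0, -2],   # hydrogen
    [0, 2, -2, -1],   # oxygen
])

solution = atoms.nullspace()[0]                         # one free ratio of coefficients
scale = lcm(*(int(term.q) for term in solution))        # clear the fractions
coefficients = [int(term * scale) for term in solution]
print(coefficients)   # [1, 2, 1, 2]  ->  CH4 + 2 O2 -> CO2 + 2 H2O
```

The null space of the atom-count matrix encodes every coefficient set that conserves mass, which is exactly what balancing by hand searches for.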
Those two Mars rovers, Spirit and Opportunity, have provided much information about the planet in the five years they've been rolling around the surface. Most of the data relates to the central question of the role water might have played in the planet's past, and a new paper in Science, describing Opportunity's exploration of Victoria Crater in Meridiani Planum, a plain near the equator, is no exception.

The paper, by Steven W. Squyres, a Cornell astronomer, and more than 30 colleagues, summarizes information that has been released over the past several years, and can itself be summarized in two words: wet and windy. As in, water and wind have altered the terrain around the crater as they have done elsewhere, suggesting that the processes are regional in scope.

The impact that formed the crater (which was originally about 2,000 feet in diameter) ejected sedimentary rocks and exposed layers of sediment along the rim. But there is much evidence of wind erosion: the crater has widened to about 2,500 feet, forming indentations and promontories along the rim, and ejected rocks outside have been planed down, leaving smooth terrain.

Opportunity examined several exposed rocks near the rim and a 30-foot-deep section at a spot named Duck Bay. As in explorations of two other craters, spherules of hematite, a form of iron oxide, were found within the rocks and on the surface. Generally, the spherules, which form in wet conditions, increase in size as depth increases, suggesting that groundwater (which would be more abundant with increasing depth) affected the sediments.

Like Old Man River, Opportunity keeps rolling along and is now headed to another crater. All told, it has traveled nearly 10 miles. Spirit has traveled about half that distance, and is now stuck in sand on the other side of Mars.
Toy Safety: Infants (0 to 18 months)

Children need few toys during infancy. Parents' love and attention are more important for infants' healthy development and well-being. In fact, newborns are more attracted to human faces than inanimate playthings, and infants continue to prefer people over toys. Being gently and playfully cuddled, touched, and talked to contributes to children's earliest impressions that the world is wonderful and safe and can be explored without fear.

Infants need very close, almost constant, supervision. They are engaged in the process of self-discovery, and are getting to know their new world by looking, listening, tasting, smelling, and grasping. Most of their learning comes through play. They need safe toys that appeal to all of their senses and stimulate their interest and curiosity.

Talk with other parents who have infants and small children. They may be able to suggest safe toys and let you know of any recalls. Read the label on the toy. Always buy toys that are age appropriate.

Toy Safety Checklist
- The toy is sanitary.
- The toy is washable.
- The toy is not too heavy for your child's strength.
- The toy is well-constructed. (A poorly made toy can break or come apart, easily exposing hazards like wires or springs.)
- The toy does not have sharp edges that can cut or scratch.
- There are no small parts or decorations that can get loose and be swallowed, inhaled, or stuffed into an ear. (Examples include the eyes on a stuffed animal or the squeaker in a squeak toy.)
- The toy itself is big enough so it cannot be put into your child's nose, mouth, or ears. (Marbles and beads are examples of toys that are too small.) Check the size of handles and ends of rattles, squeeze toys, and teethers to be sure they aren't too small. A good way to check whether a toy is too small is to see if it fits inside a cardboard toilet paper tube.
- No part of the toy, including print and decoration, is poisonous. Make sure the toy is labeled non-toxic.
- The inside of the toy is not filled with a potentially harmful substance like small pellets.
- Old baby furniture and toys have not been painted or repainted with lead-based paint.
- There are no slots or holes that can pinch your child's fingers.
- The toy cannot break and leave a sharp, jagged edge.
- There are no pointed objects your child can fall on.
- No part of the toy, such as a doll's hair bow, is attached with a straight pin or staple.
- All moving parts are securely attached.
- No string or cord on the toy is longer than 6 inches. Longer cords can strangle a baby.
- A broken toy is repaired or thrown away.
- The toy is not stored in a plastic bag.
- The windup mechanism in a mechanical toy is enclosed to avoid catching hair, fingers, and clothing.
- Toys made with cloth carry the labels "flame resistant", "flame retardant", or "nonflammable".
- Keep uninflated balloons out of reach and throw away all broken balloons. More children have suffocated on uninflated balloons and pieces of broken balloons than on any other type of toy.
Suggested Play Materials
- Interesting objects hung within view
- Brightly colored mobile
- Colorful wall posters
- Sturdy rattle
- Large plastic rings
- Soft toys for throwing
- Colorful balls
- Light plastic blocks
- Washable cloth cubes
- Music box to listen to
- Teething toys
- Floating animals for the bathtub
- Washable squeak toys
- Washable, unbreakable doll
- Washable cuddly toy
- Rough-smooth touching books
- Washable cloth picture books
- Sturdy, colorful picture books

Look for toy recalls posted on the U.S. Consumer Product Safety Commission (CPSC) homepage, http://www.CPSC.gov, or call the toll-free number 1-800-638-2772. You can search by toy description and manufacturer. The Public Interest Research Group (PIRG) provides good information on toy safety at http://www.toysafety.net.

Written by Donna Warner Manczak, PhD, MPH. Published by RelayHealth.
Last modified: 2010-08-09. Last reviewed: 2010-08-09.
This content is reviewed periodically and is subject to change as new health information becomes available. The information is intended to inform and educate and is not a replacement for medical evaluation, advice, diagnosis or treatment by a health care professional.
Horses

The horse is an odd-toed ungulate mammal belonging to the taxonomic family Equidae. The horse has evolved over the past 45 to 55 million years from a small multi-toed creature into the large, single-toed animal of today. Humans began to domesticate horses around 4000 BC, and their domestication is believed to have been widespread by 3000 BC.

Horses in the subspecies caballus are domesticated, although some domesticated populations live in the wild as feral horses. These feral populations are not true wild horses, as this term is used to describe horses that have never been domesticated, such as the endangered Przewalski's horse, a separate subspecies, and the only remaining true wild horse.

Horse breeds are loosely divided into three categories based on general temperament: spirited "hot bloods" with speed and endurance; "cold bloods", such as draft horses and some ponies; and "warmbloods", developed from crosses between hot bloods and cold bloods.

"Hot blooded" breeds include "oriental horses" such as the Akhal-Teke, Arabian horse, Barb and now-extinct Turkoman horse, as well as the Thoroughbred, a breed developed in England from the older oriental breeds. Hot bloods tend to be spirited, bold and learn quickly. They tend to be physically refined - thin-skinned, slim, and long-legged.

Muscular, heavy draft horses are known as "cold bloods". They have a calm, patient temperament; sometimes nicknamed "gentle giants". Well-known draft breeds include the Belgian and the Clydesdale. Some, like the Percheron, are lighter and livelier. Others, such as the Shire, are slower and more powerful. The cold-blooded group also includes some pony breeds.

"Warmblood" breeds are a cross between cold-blooded and hot-blooded breeds. Examples include breeds such as the Irish Draught or the Cleveland Bay.

There are more than 300 breeds of horse in the world today.

Horses are herd animals, with a clear hierarchy of rank, led by a dominant individual, usually a mare. They are also social creatures that are able to form companionship attachments to their own species and to other animals, including humans. They communicate in various ways, including vocalizations such as nickering or whinnying, mutual grooming and body language. When confined with insufficient companionship, exercise, or stimulation, individuals may develop stable vices, stereotypies of psychological origin, that include wood chewing, wall kicking, "weaving" (rocking back and forth), and other problems.

Horses are also prey animals with a strong fight-or-flight response. Their anatomy enables them to make use of speed to escape predators. Their first reaction to threat is to startle and usually flee, although they will stand their ground and defend themselves when flight is impossible or if their young are threatened. They also tend to be curious; when startled they will often hesitate an instant to ascertain the cause of their fright, and may not always flee from something that they perceive as non-threatening.

Related to this need to flee from predators is an unusual trait: horses are able to sleep both standing up and lying down. In an adaptation from life in the wild, horses are able to enter light sleep by using a "stay apparatus" in their legs, allowing them to doze without collapsing. Horses sleep better when in groups because some animals will sleep while others stand guard to watch for predators. A horse kept alone will not sleep well because its instincts are to keep a constant eye out for danger.
Unlike humans, horses do not sleep in a solid, unbroken period of time, but take many short periods of rest. Horses must lie down to reach REM sleep. If a horse is never allowed to lie down, after several days it will become sleep-deprived, and in rare cases may suddenly collapse as it involuntarily slips into REM sleep while still standing.

Horses are grazing animals, and their major source of nutrients is good-quality forage from hay or pasture. They can consume approximately 2% to 2.5% of their body weight in dry feed each day.

The horses' senses are based on their status as prey animals, where they must be aware of their surroundings at all times. They have the largest eyes of any land mammal, and are lateral-eyed, meaning that their eyes are positioned on the sides of their heads. This allows horses to have a range of vision of more than 350°, with approximately 65° of this being binocular vision and the remaining 285° monocular vision. Horses have excellent day and night vision, but they have two-color, or dichromatic vision; their color vision is similar to red-green color blindness in humans where certain colors, especially red and related colors, appear as a shade of green.

Their sense of smell, while much better than that of humans, is not quite as good as that of a dog. It is believed to play a key role in the social interactions of horses as well as detecting other key scents in the environment.

A horse's hearing is good, and the pinna of each ear can rotate up to 180°, giving the potential for 360° hearing without having to move the head. Noise impacts the behavior of horses and certain kinds of noise may contribute to stress.

Horses have a great sense of balance, due partly to their ability to feel their footing and partly to highly developed proprioception - the unconscious sense of where the body and limbs are at all times. A horse's sense of touch is well developed. The most sensitive areas are around the eyes, ears and nose. Horses are able to sense contact as subtle as an insect landing anywhere on the body.

Horses have an advanced sense of taste, which allows them to sort through fodder and choose what they would most like to eat. Their prehensile lips can easily sort even small grains. Horses generally will not eat poisonous plants.

Female horses, called mares, carry their young for approximately 11 months, and a young horse, called a foal, can stand and run shortly following birth. They reach full adult development by age five, and have an average lifespan of between 25 and 30 years.

Horses are highly intelligent animals. They perform a number of cognitive tasks on a daily basis, meeting mental challenges that include food procurement and identification of individuals within a social system. They also have good spatial discrimination abilities.

They excel at simple learning, but also are able to use more advanced cognitive abilities that involve categorization and concept learning. They can learn using habituation, desensitization, classical conditioning, operant conditioning and positive reinforcement. Domesticated horses may face greater mental challenges than wild horses because they live in artificial environments that prevent instinctive behavior while also learning tasks that are not natural.

The wild horse (Equus ferus) is a species of the genus Equus, which includes as subspecies the modern domesticated horse (Equus ferus caballus) as well as the undomesticated Tarpan (Equus ferus ferus), now extinct, and the endangered Przewalski's horse (Equus ferus przewalskii).
The Przewalski's Horse was saved from the brink of extinction and reintroduced successfully to the wild. The Tarpan became extinct in the 19th century. Since the extinction of the Tarpan, attempts have been made to reconstruct its phenotype, resulting in horse breeds such as the Konik and Heck horse. However, the genetic makeup and foundation bloodstock of those breeds is substantially derived from domesticated horses, and therefore these breeds possess domesticated traits.

The term "wild horse" is also used colloquially to refer to free-roaming herds of feral horses such as the Mustang in the United States, the Brumby in Australia, and many others. These feral horses are untamed members of the domestic horse subspecies (Equus ferus caballus).

Horses are exploited by the unethical horse racing industry. Commercial horse racing is a ruthless industry motivated by financial gain and prestige. Cruelty, slaughter, injuries and accidental deaths are common. Horses are pushed to their physical limits and beyond, all for profit. Some horses are raced when they are under three years old, leading to fractures. Horses are drugged so they can compete with injuries, or given prohibited performance enhancing drugs. Jockeys often whip horses. The racing industry breeds thousands of horses looking for its next champion, contributing to an overpopulation crisis. Losing and winning horses alike are commonly sent to the slaughterhouse when their careers have ended.

While no horse slaughterhouses currently operate in the United States, American horses are still trucked over borders to slaughtering facilities in Mexico and Canada. Horses suffer horribly on the way to and during slaughter, often shipped for more than 24 hours at a time without food, water or rest. Horses are often injured even before arrival due to overcrowded conditions during transport. The methods used to kill horses rarely result in quick deaths: they often endure repeated stuns or blows, and sometimes remain conscious during their slaughter.

Horses are forced to pull oversized loads by the animal entertainment industry. Carriage horses are forced to perform in all weather extremes. They face the threat and stress of traffic, often working all day long. The horses suffer from respiratory ailments from exhaust fumes, and develop debilitating leg problems. Carriage horses also face the threat of heatstroke from summer heat and humidity. Living conditions for these animals are often deplorable. When the horses grow too old, tired, or ill they may be slaughtered and turned into food for dogs or zoo animals, or shipped overseas for human consumption.

The animal entertainment industry also uses horses in rodeos. They are abused with electrical prods, sharp spurs and "bucking straps" that pinch their sensitive flank area. During bucking events, horses may suffer broken legs or run into the sides of the arena, causing serious injury and even death.

Each year, hundreds of wild (feral) horses are rounded up by United States government agencies using inhumane methods. The horses are put in holding pens where, for a small fee, anyone can "adopt" them. The lucky ones are adopted by people who love and care for them, but many are traded or sold at auctions. Some are sent to Canada or Mexico to be slaughtered for their meat.

The Horse Protection Act is a federal law that prohibits sored horses from participating in shows, exhibitions, sales or auctions. Soring is a cruel and abusive practice used to accentuate a horse's gait.
It is accomplished by irritating or blistering a horse’s forelegs with chemical irritants (such as mustard oil) or mechanical devices. The Horse Protection Act also prohibits drivers from transporting sored horses to or from any of these events.
Which shark species holds the record for most teeth?

Most of these enormous predatory fish have around five rows of teeth along each jaw, typically containing 20 to 30 triangular teeth per row. The bull shark holds the dental record with a terrifying 50 rows of sharp teeth, and with up to 1,500 teeth at any time, many argue it is the most dangerous shark in the ocean. This is one award-winning smile you should probably avoid: the bull shark is responsible for the third-most attacks on humans, as it likes to loiter in shallow waters where it is most likely to come across people.

A shark's teeth aren't anchored by a root like human teeth, and at least one tooth falls out of its mouth per week. Holes are quickly filled with a new tooth from one of the many hidden inside the jaw membrane. Teeth move forward like a conveyor belt to occupy gaps and keep a shark's bite as deadly as possible.

The shape of these teeth is dictated by the shark's diet. Catsharks, for example, have thick plate-like teeth for scooping up shelled crustaceans from the ocean floor. Great white and tiger sharks have serrated teeth for tearing through seal flesh, and mako sharks have needle-shaped teeth to immediately immobilise slippery fish.

Image from www.flickr.com/photos/stuutje
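The tooth counts quoted above are consistent with simple arithmetic: rows multiplied by teeth per row. A trivial sketch using only the figures given in the piece (the bull shark's per-row count is taken to match the typical 20-30 range, which the article does not state explicitly):

```python
# Figures quoted in the article.
rows_typical, rows_bull = 5, 50
teeth_per_row = 30               # upper end of the quoted 20-30 range

print(rows_typical * teeth_per_row)  # ~150 teeth at once for a typical shark
print(rows_bull * teeth_per_row)     # 1500 -- matching the "up to 1,500" figure
```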
Earth’s average temperature has remained more or less steady since 2001, despite rising levels of atmospheric carbon dioxide and other greenhouse gases—a trend that has perplexed most climate scientists. A new study suggests that the missing heat has been temporarily stirred into the relatively shallow waters in the western Pacific by stronger-than-normal trade winds. Over the past 20 years or so, trade winds near the equator—which generally blow from east to west—have driven warm waters of the Pacific ahead of them, causing larger-than-normal volumes of cool, deep waters to rise to the surface along the western coasts of Central America and South America. (Cooler-than-average surface waters are depicted in shades of blue, image from late July and early August 2007.) Climate simulations suggest that that upwelling has generally cooled Earth’s climate, stifling about 0.1°C to 0.2°C in warming that would have occurred by 2012 if winds hadn’t been inordinately strong, the researchers reported online yesterday in Nature Climate Change. Both real-world observations and the team’s simulations reveal that the abnormally strong winds—driven by natural variation in a long-term climate cycle called the Interdecadal Pacific Oscillation—have, for the time being, carried the “missing” heat to intermediate depths of the western Pacific Ocean. Eventually, possibly by the end of this decade, the inevitable slackening of the trade winds will bring the energy back to the ocean’s surface to be released to the atmosphere, fueling rapid warming, the scientists contend.
This pit in the Moon's Marius Hills, photographed by NASA's Lunar Reconnaissance Orbiter, is big enough to fit the White House completely inside. Credit: NASA/LROC/ASU

Photographs of enormous pits on the moon, some hundreds of feet deep, taken by unmanned probes have given scientists a tantalizing glimpse into the lunar interior. Some of the moon holes are wide enough to fit the White House, and scientists think they are openings to underground tunnels that were formed by rivers of lava. "They could be entrances to a geologic wonderland," said lead researcher Mark Robinson at Arizona State University. "We believe the giant holes are skylights that formed when the ceilings of underground lava tubes collapsed." First seen in close detail by Japan's Kaguya spacecraft last year, the lunar pits were also seen by NASA's Lunar Reconnaissance Orbiter using the same high-resolution camera that photographed the lander portion of the Apollo spacecraft and astronaut footprints in the moon dust.

Lunar lava tubes and trails

The existence of tunnels in the moon was first proposed by scientists in the 1960s, when early photographs showed that hundreds of long, narrow channels trailed across the lunar plains. Taken as evidence of past volcanic activity, the grooves, known as rilles, pointed to the possibility of underground channels similar to lava tubes found on Earth. Lava tubes form when the upper portion of a river of molten lava cools and solidifies while the rest of the lava continues to stream beneath it. The insulated molten rock can retain enough heat to flow for miles, carving out tubular channels and complex labyrinths. Images from Japan's Kaguya spacecraft depict gaping holes on the same plain, or lunar mare, as the winding rilles. One particular pit appears in the middle of a channel, leading scientists to believe it represents the collapsed roof of an underground tube.

Researchers speculate that the tunnels, if unclogged, could serve as passages and livable lunar lairs for humans. "The tunnels offer a perfect radiation shield and a very benign thermal environment," Robinson said in a statement. "Once you get down to 2 meters under the surface of the moon, the temperature remains fairly constant, probably around -30 to -40 degrees C." Explorers would be sheltered from daily temperatures that swing from 212 degrees Fahrenheit (100 degrees Celsius) at midday to minus 238 degrees Fahrenheit (minus 150 degrees Celsius) at night, as well as from possible asteroid strikes. But further exploration would be needed before the tubes could be used. "Hold off on booking your next vacation at the Lunar Carlsbad Hilton," said Paul Spudis of the NASA-funded Lunar and Planetary Institute in Texas. "Many tunnels may have filled up with their own solidified lava." Viewed through their entrances, the blackness of the enormous pits remains, for now, a tantalizing wall. "We just can't tell, with our remote instruments, what the skylights lead to," said Spudis. "To find out for sure, we'd need to go to the moon and do some spelunking." Relating how a lava-flow mapping expedition in Hawaii revealed a surprising system of vents similar to the photographed skylights, Spudis left open the possibility of a lunar labyrinth. "It turned out that there was a whole new cave system that was not evident from aerial photos... Who knows? The moon continually surprises me," he said.
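As a quick arithmetic check of the temperature figures quoted above, here is a small Python sketch (purely illustrative, not from the article) that converts the Celsius values to Fahrenheit using the standard formula F = C × 9/5 + 32.

```python
# Quick check of the lunar temperature figures quoted above, using the
# standard Celsius-to-Fahrenheit conversion.

def c_to_f(celsius: float) -> float:
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

if __name__ == "__main__":
    for c in (100, -150, -30, -40):
        print(f"{c} C = {c_to_f(c):.0f} F")
    # 100 C -> 212 F and -150 C -> -238 F, matching the article's figures;
    # the quoted -30 to -40 C inside a lava tube is roughly -22 to -40 F.
```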
When health care providers want to help their patients remain healthy, they look for ways to screen them for potentially dangerous conditions even before the patient develops symptoms. If a patient is identified as being at risk by screening, he or she may have further tests, and if the results are definitive, preventive treatment will be recommended. Screening tests are most useful when they are applied to a select group of patients who are most likely to have the troubling condition, rather than to everyone; they are less useful when they identify too many people who, on further evaluation, actually don't have the condition. Deciding which patients to screen is the key to effective testing, and physicians look for the characteristics of a patient's personal health or health history that increase the likelihood that a test will be helpful. Screening tests are only recommended when there is an available treatment for the condition or when knowing that the patient is at risk will change their medical care and follow-up. The American Academy of Pediatrics recommends both dietary and medication treatment for overweight children with abnormal total cholesterol and low-density lipoproteins.

Abnormal Lipids From Childhood to Adulthood

Abnormal blood lipids (cholesterol, high- and low-density lipoproteins and triglycerides) are associated with coronary artery disease in adults, and adults are routinely screened for their blood levels as part of preventive care. It is known that abnormal lipid levels in childhood often persist into adulthood, and that plaques and abnormalities associated with atherosclerosis can be seen in the cells lining the coronary arteries of adolescents and young adults. Overweight and obese children and adolescents have an increased risk of abnormal lipids, but universal lipid screening for children and adolescents remains controversial. The U.S. Preventive Services Task Force has stated that there is not enough evidence to decide whether screening children and adolescents is or isn't useful. However, the American Academy of Pediatrics and the American Heart Association recommend targeted screening of specific groups of children and teens who are thought to be at increased risk. Currently, the AAP guidelines for screening pediatric patients for abnormal lipids consider whether the child's parents have elevated cholesterol or early cardiovascular disease (occurring before 55 years in men and before 65 years in women), as well as whether the child has a personal history of high blood pressure, diabetes, smoking, or overweight or obesity. Children whose Body Mass Index (BMI) is between the 85th and 95th percentiles for their age and sex are considered overweight, and those whose BMI is greater than the 95th percentile for age and sex are considered obese. The AAP uses the 85th percentile or greater as the threshold for starting lipid screening.

Not Such a Useful Screening Tool

A recent study published in the August issue of Archives of Pediatrics & Adolescent Medicine has raised concerns that the AAP weight-based screening guidelines do not accurately identify the group of children and adolescents who are most likely to be helped by the test. Using information on blood lipid levels and body mass index obtained by the National Health and Nutrition Examination Survey (NHANES), the researchers looked at 9,338 children, ages 3 to 18 years.
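To make the threshold rule just described concrete, here is a minimal sketch in Python (the function names are hypothetical, not taken from the AAP documents) of the weight categories and the 85th-percentile trigger for lipid screening. Note that turning a raw BMI into an age- and sex-specific percentile requires growth-chart reference data, which is not reproduced here; the sketch assumes the percentile is already known.

```python
# Minimal sketch of the BMI-percentile categories and the screening threshold
# described above. The percentile itself must come from age- and sex-specific
# growth-chart reference data, which this sketch assumes has already been looked up.

def weight_category(bmi_percentile: float) -> str:
    """Classify a child's weight status from a BMI-for-age percentile."""
    if bmi_percentile >= 95:
        return "obese"
    if bmi_percentile >= 85:
        return "overweight"
    return "not overweight"

def flag_for_lipid_screening(bmi_percentile: float) -> bool:
    """Apply the 85th-percentile-or-greater trigger discussed above."""
    return bmi_percentile >= 85

if __name__ == "__main__":
    for pct in (50, 87, 96):
        print(pct, weight_category(pct), flag_for_lipid_screening(pct))
```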
By looking at the lipid values for these children and teens across a range of BMI measurements, they were able to determine whether the measurement would have been helpful in identifying the children whose lipids were likely to be abnormal. They found that if they used the AAP body mass index recommendations for screening, they would have missed a significant number of children with elevated total cholesterol and elevated LDL, the abnormalities the AAP recommends be considered for treatment with medication. The BMI was somewhat more helpful for abnormal HDL and triglycerides, which are treated with dietary management, just as one would for any overweight or obese pediatric patient. The researchers found that BMI is not an effective way to identify those children and adolescents who would benefit from screening for abnormal lipids, because it is least accurate for the group that might most benefit from medication as well as dietary management. As a result, they concluded that a BMI-based screening test that misses as many as half of the children and adolescents with the problem isn't useful.

It is far easier to prevent a child from becoming overweight than to get an overweight child to lose weight. Adopting a healthy diet and exercise program early in life, decreasing time spent sitting indoors, and reinforcing healthy food choices are critical if parents want to have a positive impact on their children's health throughout life. Parents may wish to discuss their children's diet with their primary care provider, or they may want to request a referral to a nutritionist to determine the sources of saturated fat and cholesterol in the family's daily diet and learn how to reduce them. If parents are concerned that their child is at risk for abnormal blood lipids, they should ask their child's doctor whether a blood test would be helpful.
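To see in numbers why the researchers dismissed a screen that misses half of the affected children, here is a small Python sketch using invented figures rather than the NHANES data. It computes the sensitivity of a screening rule, i.e. the share of truly affected children it catches; a rule that misses about half of the children with abnormal lipids has a sensitivity of only about 50 percent.

```python
# Illustration of the study's reasoning with made-up numbers: a BMI-based
# screen that catches only half of the children with abnormal lipids has a
# sensitivity of roughly 50%.

def screening_summary(true_abnormal: int, flagged_and_abnormal: int,
                      flagged_total: int) -> dict:
    """Summarize a screening rule: sensitivity, missed fraction, and the share
    of flagged children who actually have the condition."""
    sensitivity = flagged_and_abnormal / true_abnormal
    return {
        "sensitivity": sensitivity,
        "missed_fraction": 1 - sensitivity,
        "positive_predictive_value": flagged_and_abnormal / flagged_total,
    }

if __name__ == "__main__":
    # Hypothetical sample: 200 children with abnormal lipids, 100 of whom
    # exceed the BMI threshold; 400 children are flagged by the screen overall.
    print(screening_summary(true_abnormal=200,
                            flagged_and_abnormal=100,
                            flagged_total=400))
    # -> sensitivity 0.5, missed_fraction 0.5, positive_predictive_value 0.25
```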