Stroke is documented as the third leading cause of death and the leading cause of disability worldwide. Each year, about 15 million people worldwide suffer a stroke; two-thirds of them die or are left with a disability. Stroke is a sudden disorder of brain function caused by disrupted blood flow to the brain. Around 85% of cases are ischemic stroke, caused by an obstruction of cerebral blood flow, and the other 15% are hemorrhagic stroke, caused by a ruptured cerebral blood vessel. A stroke can occur in anyone, but some people have risk factors that make them more prone to an attack. Some of the risk factors for stroke are as follows:
- Over 55 years of age
- History of stroke in the family
- High blood pressure
- Increased lipid levels
- Lack of physical activity
Age and a family history of stroke are unchangeable risk factors, while the lifestyle-related risk factors are controllable.
TIA (Transient Ischemic Attack)
A TIA is a mild and temporary stroke attack. The symptoms last only a few minutes and resolve in less than 24 hours. Around 10% of TIA cases develop into stroke within 3 months.
Time is Brain (the crucial 3 hours)
In the obstructive type of stroke, the best chance of maximum recovery and minimal disability comes from treatment within 3 hours of the onset of the attack.
Recognize the Symptoms of Stroke:
- Weakness or paresis of the arms and legs
- Numbness, decreased sensation, or tingling in the face, arms, trunk, and legs
- Imbalance, vertigo, difficulty walking
- Visual disturbance (sudden blindness, loss of vision in one or both eyes) or a decreased visual field
- Severe headache without any apparent cause
These symptoms usually occur all of a sudden.
Stroke is a medical emergency. If you experience or observe stroke symptoms, contact a doctor immediately for medical help.
"Functional tenses” are aspects and moods. Huddleston and Pullam, in The Cambridge Grammar of the English Language, describe two systems of tense, one system of aspect, and one system of mood in English. The primary system of tenses are present and preterite (past), both inflected. The secondary tense system is the perfect: the non-perfect is unmarked, and the perfect is formed with have + past participle (e.g., has gone). The aspect system is the progressive. Non-progressive verbs are unmarked. Progressive ones are formed with have + gerund-participle (e.g., is going). Moods are usually marked with modal auxiliaries (will, shall, can, may, must) + infinitive (e.g., can go). But the construction of moods in English is complex, and can’t be quickly summarized. Some grammars will describe a future tense, formed with will/shall, but Huddleston and Pullam argue that there are numerous ways to express future time in English and the will/shall forms are analogous in every respect to the other moods, so this construction of futurity is more accurately described as a mood.
Plants and shrubs in the Arctic tundra have turned into small trees in recent decades due to the warming Arctic climate. If the trend continues on a wider scale, it would significantly accelerate global warming, said scientists from Finland and Oxford University in a new study published in the journal Nature Climate Change. The researchers investigated an area of 100,000 square kilometers stretching from western Siberia to Finland, known as the northwestern Eurasian tundra. Surveys of the area’s vegetation, using data from satellite imaging, fieldwork, and expert observations from indigenous reindeer herders, revealed that willow (Salix) and alder (Alnus) plants have grown into trees over 6 feet tall in 8 to 15 percent of the area during the last 30–40 years, the researchers said. “It’s a big surprise that these plants are reacting in this way,” said Marc Macias-Fauria of Oxford University, lead author of the research. Experts had previously believed such colonization would take centuries, he added. Indeed, previous studies suggested that advancement of the forest into the Arctic tundra could raise Arctic temperatures by an extra 1.8 to 3.6 degrees Fahrenheit (1–2 degrees Celsius) by the end of the 21st century. “But what we’ve found is that the shrubs that are already there are transforming into trees in just a few decades,” Macias-Fauria said. Temperatures in the Arctic are rising at nearly twice the rate of those in the rest of the world, the scientists said. As reflective snow and ice recede, soil or water is exposed, presenting darker colors that absorb more of the sun’s heat. The same phenomenon occurs when trees grow tall enough to rise above the snow, presenting dark surfaces that absorb sunlight. This growth from shrubs to forest is significant because it changes the albedo effect – the amount of sunlight reflected by the surface of the Earth, said Professor Bruce Forbes of the Arctic Centre, University of Lapland, co-author of the study. The increased absorption of the Sun’s radiation, combined with microclimates created by forested areas, accelerates warming, making an already-warming climate even more so, he said. “The speed and magnitude of the observed change is far greater than we expected.” While additional Arctic warming may open the area to new oil and gas development, and will likely attract herds of reindeer that feed on willow shrubs, a warming planet will also cause severe droughts and flooding in other places throughout the world, scientists say. Macias-Fauria acknowledged that the area researched in the study is only a small part of the massive Arctic tundra, and one that is already warmer than most of the Arctic, likely due to the impact of warm air from the Gulf Stream. “However, this area does seem to be a bellwether for the rest of the region; it can show us what is likely to happen to the rest of the Arctic in the near future if these warming trends continue.” The research was published online June 3 in the journal Nature Climate Change.
The recent UN climate conference in Doha has demonstrated again that the UN climate negotiations are proving too slow in reducing global greenhouse gas emissions. The latest climate science released by the Global Carbon Project for the Doha conference indicates the planet is currently on track for a rise in temperature of between 4 and 6 degrees later this century. Unless urgent action is taken to reduce global emissions, our children and grandchildren will not thank us for the volatile and dangerous climate we deliver to them. The lack of progress in reducing global greenhouse gas emissions is making some scientists argue that humanity needs to prepare an emergency strategy to cool the planet. This emergency strategy, so the argument goes, could be rolled out when serious climate change impacts start to bite in coming decades. One emergency strategy that is currently attracting attention is geoengineering. Geoengineering is the use of human technology to manipulate and control the climate on a large scale. It may sound like cheap science fiction; however, various methods of geoengineering are currently being researched in the UK, US and Canada. In 2009 Britain's leading scientific body, The Royal Society, produced a major report, Geoengineering the climate: science, governance and uncertainty, on the prospects of geoengineering as a response to climate change. Geoengineering has moved out of the fringe of climate policy discussion into the mainstream. 'Stratospheric particle injection' is a geoengineering technique that aims to mimic the cooling effect of volcanic eruptions by injecting sulphate particles high into the stratosphere. There, the particles act like a giant sunshade, reflecting a percentage of sunlight away from the earth. This method of geoengineering is being investigated by a collaboration of scientists in the UK known as the SPICE project. It has proposed testing the technology to deliver particles into the atmosphere using a hot air balloon with a hose attached. While the technology may sound simple, the chemistry of the atmosphere is very complex. The particles would likely change the physical appearance of the sky by making it whiter during the day and more colourful at sunset. The particles may also damage the ozone layer, the layer of the atmosphere that keeps ultraviolet radiation from reaching the earth. The particles could also significantly change global rainfall patterns that are relied upon by billions of people. Stratospheric particle injection may cool the planet in the short term, but it is little more than a band-aid measure. It does nothing to address the key driver of climate change, which is the rising level of greenhouse gas in the atmosphere. Ocean fertilisation is a geoengineering technique that aims to remove greenhouse gases from the atmosphere. In the same way we add fertiliser to our gardens to make them grow, ocean fertilisation adds nutrients to the ocean to encourage the growth of plankton. Plankton consumes carbon dioxide, and could draw down enough greenhouse gases from the atmosphere to lessen climate change. In October this year a US businessman dumped an estimated 100 tons of iron sulphate off the coast of Canada in an attempt at ocean fertilisation. The experiment did not have authorisation from the Canadian Government and potentially breached international bans under the London Convention on Ocean Dumping. Ocean fertilisation could cause damage to ocean ecosystems, increase ocean acidification and deplete the ocean of oxygen.
As with stratospheric particle injection, the probability and nature of the risks of ocean fertilisation are uncertain and require further scientific investigation. The technical ability to attempt geoengineering is already here. In the coming years, it will be difficult for countries to resist experiments in geoengineering as it has the allure of being a relatively inexpensive and quick response to climate change impacts. It is therefore essential that geoengineering technology is developed and used responsibly and that it is effectively regulated at an international level. If countries deploy geoengineering hastily, without understanding the risks involved, they will be rolling the dice on causing further damage to the atmosphere and the environment. International regulation is also important to ensure that the interests of all countries are considered. It would be unfair for one country to deploy geoengineering technology for its own benefit but in doing so cause significant harm to others. Finally, geoengineering must not allow countries to take their eye off reducing greenhouse gas emissions. A rapid reduction in greenhouse gas emissions over coming decades is crucial for us to provide our children and grandchildren with a safe climate in which to live well.
VIII - MULTIPLICATION OF FRACTIONS - CONTINUED
Care should be exercised in the choice of fractions. If the pupil has been thoroughly taught cancellation, possibly he will lose a little in his attempt to learn the explanation, provided he is able to do more or less canceling. In some instances we have thought it advisable to treat the multiplication of a fraction by a fraction as a case in cancellation. We are inclined to think this might be done successfully, and thus such an explanation as we have already offered would be unnecessary. After the pupils have worked out, step by step, the explanation of a dozen or more problems, a generalization may be made. The pupil is led to observe that in multiplying 4-5 by 3-7, he has, in reality, multiplied the numerators of the two fractions for a new numerator, and the denominators of the two fractions for a new denominator. He thus makes for himself the rule that is commonly stated in our books for multiplying a fraction by a fraction. Again we take the liberty to emphasize the importance of compelling pupils to work a very large number of problems involving the preceding cases, having the pupils bear in mind that the last case is of no special value except as it becomes part of the preceding cases. We are aware that this work is not especially attractive to the teacher. We do ask, however, for a thoughtful and thorough test of the plan herein set forth.
DIVISION OF FRACTIONS
The same simplicity cannot be insisted upon in division of fractions. For his first case the pupil may be asked to divide 450 3-4 by seven, or any other number that he can use as a divisor, and work, as he commonly terms it, by short division. The teacher should not underestimate the importance of doing this work. The truth of the matter is, the majority of problems in division of fractions involve just this case. Already the reader has thought out what the second case should be, provided a rigid attempt is made to be logical. The pupil would next be asked to divide 420 by 7 2-3. This he cannot easily do, nor can he be taught to do it without learning something that is radically new. Even the effort that some teachers make to introduce decimal fractions would not in this problem be of any value. We have found the easiest plan to be the following: The pupil is led to change both dividend and divisor to thirds. Then his problem is a problem in integers. He has applied a principle of simple division. He has multiplied both dividend and divisor by the same number, and knows that his quotient is not thereby changed. (A worked rendering of this example is given below.) We would be very glad to have every reader write us in relation to this particular case. We would be very glad to ascertain what business men do, as a rule, in a problem like this. If this plan is adopted and followed generally there are no more difficulties to be encountered in division of fractions, because the dividing of an integer and a fraction by an integer and a fraction would fall under this method. There are readers, however, who will present the question that is so often asked at Institutes: "In division of fractions do you teach pupils to invert the terms of the divisor?" This depends upon how you wish the pupil to divide an integer and a fraction by an integer and a fraction, or how you wish the pupil to divide a fraction by a fraction.
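A worked rendering of the example just described, expressing both dividend and divisor in thirds (the layout in modern notation is added here for clarity; the numbers are those given in the text):
\[
420 \div 7\tfrac{2}{3} \;=\; \frac{1260}{3} \div \frac{23}{3} \;=\; 1260 \div 23 \;=\; 54\tfrac{18}{23}.
\]
Multiplying both dividend and divisor by 3 turns the problem into a division of integers, and the quotient is unchanged.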
The thoughtful reader will at once observe that he can change both dividend and divisor to the same denomination and proceed as in integers; or if, for any reason, the teacher wishes to adopt the method suggested in many of our books, he may proceed as follows: Divide 4-5 by 3-7. According to the preceding principle, to divide a fraction by an integer divide the numerator or multiply the denominator. In this case we will multiply the denominator by 3, the numerator of the divisor. Doing this we have the result 4-15; but we were told to divide 4-5 by 3-7, and consequently we have used a divisor 7 times too large. Hence the quotient is 1-7 of the correct quotient. To correct this, multiply 4-15 by 7. According to the preceding principle, to multiply a fraction by an integer we must multiply the numerator or divide the denominator. In this instance we will multiply the numerator, and we have as a result 28-15, and, therefore, 4-5 divided by 3-7 equals 28-15. If, after solving a dozen or more problems in this way, the pupil is required to generalize, the following will be the result: To divide a fraction by a fraction, multiply the numerator of the dividend by the denominator of the divisor for a new numerator, and multiply the denominator of the dividend by the numerator of the divisor for a new denominator. Having made this generalization the teacher may take the next step, provided he thinks it desirable, and lead the pupil to see that this result is precisely the same as it would have been had he inverted the terms of his divisor and proceeded as in multiplication. Too much time is devoted to this so-called case. An immense amount of time is wasted, worse than wasted. We do not wish to be misunderstood in our treatment of dividing a fraction by a fraction, or in our treatment of multiplying a fraction by a fraction. We are aware that other explanations can be offered. We present these explanations because they are based upon the "principles of fractions." We have been careful to estimate the value of these cases. We have been careful to ask the teacher to look to those applications of multiplication and division of fractions that the business man constantly employs. Let the reader bear in mind that nothing is lost in mental discipline, nothing in time; everything works for economy and discipline. We shall not consume the valuable time of our readers in presenting what are commonly termed complex fractions. In advanced classes there may be found a place for presenting complex fractions which involve addition, subtraction, multiplication and division in long and detailed combinations. Beyond question, there is a disciplinary value that can be attached to the solution of this class of problems. Our experience does not justify us in recommending the rank and file of the teaching profession to recognize these problems as at all essential to the mastery of fractions as required in the majority of our schools. We have purposely omitted those problems in which sign language is a conspicuous factor. Not infrequently in examinations for teachers, this requirement has been emphasized. The result is that what might very properly be called a "dead language" has been called into use, a language that the counting room knows nothing about, needs to know nothing about, doesn't wish to know anything about, would dislike to know anything about.
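The derivation of 4-5 divided by 3-7 set out above can likewise be written compactly in modern notation (added here for clarity; the steps are exactly those the text describes):
\[
\frac{4}{5} \div 3 = \frac{4}{15} \quad\text{(divisor 7 times too large)}, \qquad
\frac{4}{15} \times 7 = \frac{28}{15}, \qquad\text{so}\qquad
\frac{4}{5} \div \frac{3}{7} = \frac{4 \times 7}{5 \times 3} = \frac{28}{15}.
\]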
Notwithstanding the fact that in addition, subtraction, and multiplication of fractions many applications have been made to business transactions, a large number of problems should now be given that involve the use of reason. Reason, in order to procure results, will have to employ addition, subtraction, multiplication, and division. If the pupils have been taught fractions thus far mechanically, they will long for the day to come when they will have finished what is commonly termed "miscellaneous examples." If they have taken delight in using their minds, if they are quick and accurate, they will ask for a larger number of miscellaneous problems than they are given in the ordinary arithmetic. The best problems should be selected from three or four of the very best arithmetics. When a pupil can solve and explain these problems readily, he may rest assured that he has mastered the essential difficulties of arithmetic. It is true that new things will arise, but they will consist largely of new applications to the usages of the business world, and these applications will be made with ease and dispatch provided he masters the language of the new case involving a new business. Within the past three weeks, from a careful study of two hundred pupils, most of them pupils who have taught school, we find that their difficulties in percentage sustain a close relation to their ignorance of common fractions. In every instance the student who is thoroughly familiar with fractions, who can handle them rapidly and accurately, who can analyze, has little or no difficulty in any of the cases of percentage. In concluding our work on fractions, we would, therefore, again ask teachers to emphasize the mental exercise. Whenever insurmountable difficulties present themselves in written work, fall back upon the mental. Insist upon verification in the solution of every problem. We asked for this in one of our articles on mental arithmetic. The business world demands verification, and will not recognize the calculator who does not verify. Let the teacher avoid hurry, avoid superficial work in leading young people to master the subject of fractions. This subject has not been overemphasized. Its value has not been exaggerated. Reform in this work is needed. The writer of these articles invites criticism upon what he has had to say thus far. Let teachers who have had years of experience in teaching fractions express their views. The writer will be very glad to consider the difficulties encountered by his co-workers.
SHAKESPEARE, WILLIAM° (1564–1616), English playwright and poet. The Merchant of Venice (1596) has been claimed as the play in which Shakespeare found himself "in the fullest sense." As with other major comedies of his so-called second period, the main emphasis was to have been elsewhere; the mainstream view, however, is that Shylock is the type of the monstrous, bloodthirsty usurer of medieval legend. Gobbo, his comic servant, tells us that his master is "the very devil incarnation," and later, when Shylock appears, one of the characters remarks that the devil "comes in the likeness of a Jew." From time to time Shylock is "demythologized," especially in his famous speech, beginning "I am a Jew. Hath not a Jew eyes? Hath not a Jew hands, organs, dimensions, senses, affections, passions?" Many modern critics believe that Shakespeare's depiction of Shylock is much more ambiguous than was previously held, and must be contrasted with the two-dimensional portrayals of Jews in English drama up to that time. Shylock is seen as marking a stage in Shakespeare's evolution as a writer. The Merchant of Venice was probably written three or four years after Richard III, with its unquestionably evil protagonist, and paved the way for the more ambiguous depictions in Shakespeare's later works. Much about The Merchant of Venice poses as yet unanswerable questions: how and where did Shakespeare meet any Jews, since they were legally barred from living in England? Did he visit Venice, which the play describes with the apparent knowledge of an eyewitness? From what source did the name "Shylock," unknown in Jewish usage, derive? It is generally believed that the trial and execution in 1594 of Queen Elizabeth's *Marrano physician, Rodrigo *Lopez, suggested some features of the Shylock story. This episode provoked a good deal of antisemitic feeling in England at the time. In England, Edmund Kean's portrayal of Shylock in 1814 was notable for its tragic intensity, while Sir Henry Irving in 1879 acted the part in a radically idealized form, muting the evil qualities of Shylock. The play has often been translated into Hebrew and has been performed in Israel several times. The Merchant of Venice, and Shakespeare's views of Jews, have attracted a wide range of comment and analysis, which has certainly not diminished in recent years. Recent studies of these topics include Martin D. Yaffe, Shylock and the Jewish Question (1997), and James Shapiro, Shakespeare and the Jews (1997). For better or worse, Shylock probably remains the most famous depiction of a Jew in English literature.
BIBLIOGRAPHY: L. Prager, in: Shakespeare Quarterly, 19:2 (Spring 1968), 149–63, includes bibliography; Z. Zylbercweig, in: Ikkuf Almanakh 1967, ed. by N. Meisel (1967), 327–46; M.J. Landa, The Jew in Drama (1969²), 70–85, index; G. Friedlander, Shakespeare and the Jew (1921); J.L. Cardozo, Contemporary Jew in the Elizabethan Drama (1925), 207–53; T. Lelyveld, Shylock on the Stage (1961), index; S.A. Tannenbaum, Shakspeare's The Merchant of Venice, a Concise Bibliography (1941); M. Roston (ed.), Ha-Olam ha-Shekspiri (1965); M. Halevy, in: Jewish Quarterly (Spring 1966), 3–7; (Winter 1966), 10–16; J. Bloch, in: JBA, 14 (1956/57), 23–31. ADD. BIBLIOGRAPHY: J. Berkowitz, Gained in Translation: Shakespeare on the American Yiddish Stage (2002); D. Abend-David, "Scorned My Nation," A Comparison of Translations of the Merchant of Venice into German, Hebrew and Yiddish (2003); J. Gross, Shylock: Four Hundred Years in the Life of a Legend (1992); J.M. Landau, Studies in the Arab Theater and Cinema (1958), index.
It’s a frustrating problem for learners because even if you learn how to articulate every English sound correctly, you still won’t necessarily pronounce an entire word correctly on the first try. You need to know which letters make which sounds, and unfortunately in English, certain combinations of letters can make a number of different sounds. My answer to this question used to be to “look it up”. In other words, go to an online dictionary like dictionary.com and press the little speaker icon to hear the word pronounced. I grew up with my mom saying this to me, and I think this answer bugged my clients as much as it did me as a child. The other problem with this answer was that it was just ‘giving the client a fish’ instead of ‘teaching the client to fish’, as the old saying goes. Sure, you can look up every new word, but isn’t the goal to be able to identify and read new words correctly without always having to consult a dictionary? As I put more thought into how my clients could better identify the correct pronunciation of new words, I began to realize the important connection between reading and speaking. We actually need to go back to phonics. Phonics teaches the correlation between our letter system and the sounds each letter (or group of letters) can produce. Phonics programs have become very common in schools as a means to help children learn how to read. It’s this link between reading and speaking that determines whether someone can see a new word and read it with the correct pronunciation on the first try. It’s because of phonics that I know that a word ending in an ‘e’ will usually have a long vowel sound and the ‘e’ will be silent, as in pole, cape, or cute. By learning common patterns used in English spelling, you have a better chance of pronouncing a new word correctly. This doesn’t mean you’ll always get it right, but no one does – not even native speakers. Unfortunately, there are not a lot of phonics classes offered for adults, and phonics is taught even less than pronunciation (which is already under-represented) in English language classes. So how can an adult go about learning how to pronounce new words correctly? Here are a couple of ways you can become more aware of English spelling patterns.
1. Listen and read at the same time.
Pronunciation is a skill which is learned by listening. You learn how to say a new word when you hear someone else say it first. Listen to as much English as you can. Even more important than just listening is learning how to spell the words you hear. If you are watching a DVD, put on the English subtitles so that you can read along as you listen. You are not only learning pronunciation, but you are also learning how different sounds are spelled and are becoming more aware of spelling patterns which you can use later. If you use an iPhone or other smartphone, see if you can find any apps from magazines that read articles out loud. One of my clients who works in finance downloaded an app from The Economist and listens to the articles being read by professional broadcasters as she reads along. You can extend this type of practice to audio books as well. If you have the audio version and the print version of a book, read along in the print version as you listen to the audio recording.
2. Focus on phonics.
When you are practicing pronunciation and focusing on a particular sound, be sure to take note of the different ways that sound can be spelled.
For example, if you are working on the long ‘e’ sound found in words like beef, cheat, and sheep, make a list of the different ways this sound can be spelled. There are several combinations of letters that make this particular sound:
ee as in free, feet, street, cheek
ea as in beat, cheat, meat, easy
e as in be, these
And less commonly:
i as in police
eo as in people
ei as in ceiling, seize
ie as in piece, chief
ey as in key
You can see from this list that knowing how to pronounce a new word isn’t very easy! I recently found a great resource developed by Spencer Learning which lists all the different spellings of different sounds. They also have a larger download with word and sentence lists and an entire phonics course you can follow. I’ve started referring to their phonics lists with my clients and they’ve been quite useful.
3. Have a sense of humor.
I really believe that humor can lessen any load, and when you are trying to learn a new language or improve in a language you already speak well, you need to be able to laugh at yourself. You need to be able to see the humor inherent in the language itself, and also the humor in how people use and misuse it. A fantastic poem written by G. Nolst Trenité pokes fun at the difficulty of learning English pronunciation and the craziness of the English spelling system. Here is just a short excerpt:
Dearest creature in creation,
Study English pronunciation.
I will teach you in my verse
Sounds like corpse, corps, horse, and worse.
I will keep you, Suzy, busy,
Make your head with heat grow dizzy.
Tear in eye, your dress will tear.
So shall I! Oh hear my prayer.
Just compare heart, beard, and heard,
Dies and diet, lord and word,
Sword and sward, retain and Britain…
You will not get every word right every time. You will still make mistakes. There are plenty of native speakers mispronouncing new (and old) words every single day. Businessmen, politicians, celebrities, newscasters – everyone flops once in a while. But believe me, once a mispronunciation has been brought to your attention, you’ll never make the mistake again! You can either let mispronunciations bring you down and cause you to lose your self-esteem and courage to speak in English, or you can laugh off your mistakes and move on. My advice would be to do the latter. Have you found other ways to help you pronounce new words? Please share your ideas in the comments!
Dengue fever is a disease spread by the Aedes aegypti mosquito and caused by one of four closely related dengue viruses. The viruses that cause dengue fever are related to those that cause yellow fever and West Nile virus infection. Every year, it is estimated that at least 100 million cases of dengue fever occur across the globe. Tropical regions remain heavily affected and carry the greatest risk of infection. Very few cases occur in the United States. Most of the cases that are diagnosed occur in individuals who contracted the disease while traveling abroad. However, risk of infection is increasing for residents of Texas who live in areas that share a border with Mexico. Additionally, cases have been on the rise in the Southern United States. As recently as 2009, an outbreak of dengue fever was identified in Key West, Florida. Dengue fever is transmitted via the bite of a mosquito harboring the dengue virus. Person-to-person transmission does not occur. If you contract dengue fever, symptoms usually begin about four to seven days after the initial infection. In many cases, symptoms will be mild. They may be mistaken for symptoms of the flu or another infection. Young children and people who have never experienced infection may have a milder illness than older children and adults. Symptoms generally last for about 10 days. A small percentage of individuals who have dengue fever can develop a more serious form of disease, dengue hemorrhagic fever.
Dengue Hemorrhagic Fever
This rare form of the disease has its own risk factors and characteristic symptoms. The symptoms of dengue hemorrhagic fever can trigger dengue shock syndrome. Dengue shock syndrome is severe, and can lead to massive bleeding and even death. Doctors use blood tests to check for viral antibodies or the presence of infection. If you experience dengue symptoms after traveling outside the country, you should see a healthcare provider to check if you are infected. There is no medication or treatment specifically for dengue infection. If you believe you may be infected with dengue, you should use over-the-counter pain relievers to reduce your fever, headache, and joint pain. However, aspirin and ibuprofen can cause more bleeding and should be avoided. Your doctor should perform a medical exam, and you should rest and drink plenty of fluids. If you feel worse after the first 24 hours of illness — once your fever has gone down — you should be taken to the hospital as soon as possible to check for complications. There is no vaccine to prevent dengue fever. The best method of protection is to avoid mosquito bites and to reduce the mosquito population. When in a high-risk area, you should take steps to avoid being bitten. Reducing the mosquito population involves getting rid of mosquito breeding areas. These areas include any place where still water can collect, such as birdbaths, pet dishes, empty planters, flower pots, cans or any other empty vessel. These areas should be checked, emptied, or changed regularly. If a family member is already ill, it is important to protect yourself and other family members from mosquito bites. To help prevent the disease from spreading, consult a physician anytime you experience symptoms of dengue fever.
When someone has a heart attack, their heart is permanently left with a section of non-beating scar tissue. Even though the rest of the organ may still function, that one bit can disrupt its rhythm, potentially leading to disorders such as arrhythmia. Several groups are developing “heart patches” to help address the situation, with one of the latest coming from Australia’s University of New South Wales (UNSW) and Britain’s Imperial College London. Unlike some others, it can be attached to the heart without the use of stitches. The base material of the flexible patch is chitosan, a polysaccharide derived from crustacean shells. On top of that is a layer of polyaniline, an electrically-conductive polymer. Added to it is phytic acid, a plant-derived chemical that keeps the polyaniline in its conductive state. This is an important addition, as most conductive polymers ordinarily lose their conductivity shortly after being exposed to body fluids.
I continue working with co-operative groups in class. Our latest activity practicing the preterit and imperfect paid off very well. My students started the year with some background in the conjugation of the preterit tense, but it has only been in the last six weeks that they learned about the imperfect, and how to use both tenses together to tell a story or narrate events. I have been giving traditional notes using PowerPoint presentations, and my students have been practicing both tenses in context with partner activities, conversations and writing activities. I was concerned that they really were not grasping the distinction between the two tenses, so last week my lesson plans involved complete immersion in this grammar point. I started the lesson with an individual pre-test. Students read a short narration and had to choose between the two tenses and provide the conjugated verb in the corresponding blanks. They were allowed to use their notes, which contained verb conjugation charts and notes on tense usage. As soon as the students took the pre-test, I asked them to get together in their co-operative groups and re-take the same test as a group. A grade was given for the group test, but the individual test was a baseline score for me. The second day of our preterit vs. imperfect immersion week, the class viewed the short animated film by Pixar called Jack-Jack Attack. It was perfect for the thematic vocabulary covered in our text. As they watched the film, I stopped it at key points and asked the students to consider the tense they would use to re-tell the story. After seeing the film for the first time, I gave each student a storyboard organizer. With their partners, they brainstormed vocabulary in Spanish that was used in the animated short. We saw the film a second time, and this time they took notes on their storyboards in Spanish on the action of the film. The next time we met, students worked with their co-operative groups and used Google Docs, their storyboard and their class notes to re-tell the story of Jack-Jack Attack in Spanish using the preterit and imperfect appropriately. I then took three of the student-created summaries and turned them into an assessment in the same format as the pre-test. The first part of class the next day was spent in student co-operative groups with one version of the assessment. As a group they figured out what verb tense was needed in each blank. If they had questions, they were directed to each other or their class notes. After all groups finished, we went over the activity together as a class, giving any students who did not get their questions answered in their co-operative group a chance to ask me or the rest of the class for clarification. I then gave the students an individual assessment in the same format as the practice activity. Although the test grades were very high, what I found to be better evidence of student learning was the discussion I overheard when they were working together as a group. I heard students giving specific examples to support their opinions, students clarifying the grammar point to their classmates and students correcting each other’s work. I believe the group work and peer discussion did more to help my students understand the concepts than all the PowerPoint presentations I used previously.
Las Siete Leyes (The Seven Laws) were a series of constitutional instruments that fundamentally altered the organizational structure of the young first Mexican Republic. They were enacted under President Antonio López de Santa Anna on 15 December 1835 to centralize and strengthen the federal government at a time when the very independence of Mexico was in question.
3. The 58 articles of the third law established a bicameral Congress of Deputies and Senators, elected by governmental organs. Deputies had four-year terms; Senators were elected for six years.
4. The 34 articles of the fourth law specified that the Supreme Court, the Senate of Mexico, and the Meeting of Ministers would each nominate three candidates, and the lower house of the legislature would select the President and Vice-President from those nine candidates.
5. The fifth law provided for an 11-member Supreme Court elected in the same manner as the President and Vice-President.
6. The 31 articles of the sixth law replaced the federal republic's "states" with centralized "departments", fashioned after the French model, whose governors and legislators were designated by the President.
7. The seventh law prohibited reverting to the pre-reform laws for six years.
How to Develop Research Skills of Learners in Online Instruction
Developing learners' research skills prepares them to be better problem-solvers, and instructors shoulder a big responsibility to help learners learn the ways and means of research. Learn how to develop learners' research skills when teaching online.
Few textbooks have journeyed with me over multiple cross-country moves from student days at Cornell University to my current bookshelf. Models of Teaching is one I kept, and then updated to a more current edition (Weil, Joyce, & Calhoun, 2009, pp. 86-87). The models that have intrigued me all these years focus on creating communities of learners through engaging social-learning approaches. Yes, you could say their work represents a social constructivist perspective. And while I have given a lot of thought to social constructivism in the online world (Salmons, 2009, 2015), here I want to look specifically at inquiry models of teaching and how we can use them online to build deeper levels of comprehension.
What are inquiry models?
Inquiry models of teaching aim to stimulate students' and participants' curiosity and build their skills in finding, analyzing, and using new information to answer questions and solve problems. Instead of transferring knowledge, we aim to build new knowledge. Instead of providing facts, we create an environment where students are encouraged to look for new ways of looking at and understanding problems, discern important and relevant concepts, and inductively develop coherent answers or approaches. As Weil et al. (2009) explain: "Humans conceptualize all the time, comparing and contrasting objects, events, emotions – everything. To capitalize on this natural tendency we arrange the learning environment and give tasks to the students to increase their effectiveness in forming and using concepts, and we hope that they consciously develop their skills for doing so. (p. 86)"
They suggest 3 guidelines for designing this kind of learning experience (Weil et al., 2009):
1. Focus: Concentrate on an area of inquiry they can master.
2. Conceptual control: Organize information into concepts, and gain mastery by distinguishing between and categorizing concepts.
3. Converting conceptual understanding to skills: Learn to build and extend categories, manipulate concepts, and use them to develop solutions or answers to the original questions.
How can we use inquiry models online?
Online research activities can be incorporated into e-learning or hybrid instruction in formal or informal educational settings in ways that reflect Weil et al.'s guidelines:
1. Focus: Assignments can begin with a research plan or design: what information is needed to answer what question? What are the parameters for this assignment, including time constraints?
2. Conceptual control: Approaches to gathering information can include online interviews with practitioners, experts, or individuals with experience in the topic at hand. Assignments can include observation of online activities, including social media, communities, and posted discussions. Or, assignments can include research and analysis of documents or visual records available online or in digital libraries or archives. Once information and data have been collected, participants organize, prioritize and describe relationships between key ideas.
3. Converting conceptual understanding to skills: The above activities are of little use unless students can synthesize and make sense of what they've studied.
What can they do with what they’ve learned – either to further academic study or to develop practical solutions using these new findings? A first step may be a discussion where individuals or teams share what they’ve learned and invite new insights from others in the class. At this point they may identify new questions or topics for future inquiry.
Why are inquiry models important today?
Educators engage learners when learners are engaged in true inquiry. In the digital age we are overwhelmed with information, some of it vetted by editors or reviewers, but much of it made freely available by anyone with a point of view and a smartphone. It is ever more important to develop the skills needed to focus on specific questions and discern relevant and credible evidence as needed to address them. Research activities invite students to build critical thinking skills at the analysis and synthesis levels of Bloom’s Taxonomy. Whether students or participants are preparing to be scholars or professionals, research skills are essential in modern life. By integrating research experiences into content courses across the curriculum (rather than offering them in methods classes exclusively), students can learn to research and research to learn.
Where can I learn more?
Join my session Learning to Research, Researching to Learn at CO16 – Global Virtual Online Conference on February 5. Click the ‘save my spot’ button to participate in this and other sessions with educators from across the globe. Learn how to develop your learners’ research skills when teaching online.
This post was originally published on Vision 2 Lead.
What is EIA?
Environmental Impact Assessment
Environmental Impact Assessment (EIA) provides the information needed to allow full consideration of environmental interests in decisions on plans and projects likely to have a significant environmental impact. The central goal of environmental assessment is to ensure that environmental information is of good quality and available in good time, so that it can be used effectively in the decision-making procedure.
Strategic Environmental Assessment
Strategic Environmental Assessment (SEA) focuses on the consideration of the environmental consequences of plans and programmes, with specific emphasis on the environment in the strategic phase and on possible negative effects on protected European Natura 2000 areas. On 28 September 2006, the Dutch government implemented the EU SEA Directive. As in the EU directive, SEA is obligatory for statutory or compulsory administrative plans:
- that form the framework for future decisions subject to EIA, or
- that require an appropriate assessment on the basis of the Dutch Nature Conservation Act.
Netherlands Commission for Environmental Assessment
The Netherlands Commission for Environmental Assessment (NCEA) prepares mandatory and voluntary advisory reports for government (national, provincial and local) on the scope and quality of environmental assessments.
The conventional definition of temperature seasonality is the difference between the annual maximum and minimum temperatures (figure to the left). Plant growth in the North does not depend on these two extreme values - it depends on the seasonal cycle of temperature above some threshold (figure to the right). Therefore, we define temperature seasonality as the inverse of the cumulative value (shaded region) in the figure to the right. Both definitions, when based on temperature values averaged over a latitudinal belt around the Earth (50N to 65N in the figures above), are a function of Sun-Earth geometry only, i.e. Earth's axial tilt and orbit around the Sun. Thus, we expect no time trends in seasonality during our study period (100+ years) due to changes in sunlight and day length.
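Stated as formulas (this rendering is added for illustration; the daily temperature T_d and the growth threshold T_0 are symbols introduced here, not named in the original):
\[
S_{\mathrm{conventional}} = T_{\max} - T_{\min},
\qquad
S_{\mathrm{proposed}} = \left( \sum_{d=1}^{365} \max\left(T_d - T_0,\, 0\right) \right)^{-1},
\]
where T_d is the belt-averaged temperature on day d, T_0 is the threshold above which plant growth occurs, and the sum corresponds to the shaded cumulative value in the right-hand figure.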
The SN1 reaction is a substitution reaction in organic chemistry. "SN" stands for nucleophilic substitution and the "1" represents the fact that the rate-determining step is unimolecular. It involves a carbocation intermediate and is commonly seen in reactions of secondary or tertiary alkyl halides or, under strongly acidic conditions, of secondary or tertiary alcohols. With primary alkyl halides, the alternative SN2 reaction occurs. Among inorganic chemists, SN1 is referred to, perhaps more accessibly, as a dissociative mechanism.
Diagram: SN1 mechanism for hydrolysis of an alkyl halide
The SN1 reaction between a molecule A and a nucleophile B takes place in three steps:
1. Formation of a carbocation from A by separation of the leaving group from the carbon; this step is slow.
2. Nucleophilic attack: B reacts with the carbocation formed from A. If the nucleophile is a neutral molecule (i.e. a solvent), a third step is required to complete the reaction. When the solvent is water, the intermediate is an oxonium ion.
3. Deprotonation: removal of a proton from the protonated nucleophile by a nearby ion or molecule.
An example reaction: (CH3)3CBr + H2O → (CH3)3COH + HBr
This goes via the three-step reaction mechanism described above:
1. (CH3)3CBr → (CH3)3C(+) + Br(−)
2. (CH3)3C(+) + H2O → (CH3)3C-OH2(+)
3. (CH3)3C-OH2(+) + H2O → (CH3)3COH + H3O(+)
In contrast to SN2, SN1 reactions take place in two steps (excluding any protonation or deprotonation). The rate-determining step is the first step, so the rate of the overall reaction is essentially equal to that of carbocation formation and does not involve the attacking nucleophile. Thus nucleophilicity is irrelevant and the overall reaction rate depends on the concentration of the reactant only:
rate = k[reactant]
In some cases the SN1 reaction will occur at an abnormally high rate due to neighbouring group participation (NGP). NGP often lowers the energy barrier required for the formation of the carbocation intermediate.
The SN2 reaction is a type of nucleophilic substitution in which a nucleophile attacks an electrophilic center and bonds to it, expelling another group called a leaving group. Thus the incoming group replaces the leaving group in one step. Since two reacting species are involved in the slow, rate-determining step of the reaction, this leads to the name bimolecular nucleophilic substitution, or SN2. The somewhat more transparently named analog to SN2 among inorganic chemists is the interchange mechanism. The reaction most often occurs at an aliphatic sp3 carbon center. The breaking of the C-X bond and the formation of the new C-Nu bond occur simultaneously to form a transition state in which the carbon under nucleophilic attack is pentavalent and approximately sp2 hybridised. The nucleophile attacks the carbon at 180° to the leaving group, since this provides the best overlap between the nucleophile's lone pair and the C-X σ* antibonding orbital. The leaving group is then pushed off the opposite side and the product is formed. If the substrate under nucleophilic attack is chiral, this leads to an inversion of stereochemistry, called the Walden inversion.
Diagram: SN2 reaction of bromoethane with hydroxide ion
In an example of the SN2 reaction, the attack of OH− (the nucleophile) on bromoethane (the electrophile) results in ethanol, with bromide ejected as the leaving group. SN2 attack occurs if the backside route of attack is not sterically hindered by substituents on the substrate.
Therefore this mechanism usually occurs at an unhindered primary carbon centre. If there is steric crowding on the substrate near the leaving group, such as at a tertiary carbon centre, the substitution will involve an SN1 rather than an SN2 mechanism (an SN1 reaction would also be more likely in this case because a sufficiently stable carbocation intermediate could be formed). The rate of an SN2 reaction is second order, as the rate-determining step depends on the nucleophile concentration, [Nu−], as well as the concentration of the substrate, [RX]:
rate = k[RX][Nu−]
This is a key difference between the SN1 and SN2 mechanisms. In the SN1 reaction the nucleophile attacks after the rate-limiting step is over, whereas in SN2 the nucleophile forces off the leaving group in the limiting step. In cases where both mechanisms are possible (for example at a secondary carbon centre), the mechanism depends on the solvent, the temperature, the concentration of the nucleophile, or the leaving group. SN2 reactions are generally favoured in primary alkyl halides or secondary alkyl halides with an aprotic solvent. They occur at a negligible rate in tertiary alkyl halides due to steric hindrance. It is important to understand that SN2 and SN1 are two extremes of a sliding scale of reactions; it is possible to find many reactions which exhibit both SN2 and SN1 character in their mechanisms. For instance, it is possible to get contact ion pairs formed from an alkyl halide in which the ions are not fully separated. When these undergo substitution, the stereochemistry will be inverted (as in SN2) for many of the reacting molecules, but a few may show retention of configuration.
An elimination reaction is a type of organic reaction in which two substituents are removed from a molecule in either a one- or two-step mechanism. Either the unsaturation of the molecule increases (as in most organic elimination reactions) or the valence of an atom in the molecule decreases by two, a process known as reductive elimination. An important class of elimination reactions are those involving alkyl halides, or alkanes in general, with good leaving groups, reacting with a Lewis base to form an alkene in the reverse of an addition reaction. When the substrate is asymmetric, regioselectivity is determined by Zaitsev's rule. The one- and two-step mechanisms are named and known as the E2 reaction and the E1 reaction, respectively.
In the 1920s, Sir Christopher Ingold proposed a model to explain a peculiar type of chemical reaction: the E2 mechanism. E2 stands for bimolecular elimination and has the following specificities.
* It is a one-step process of elimination with a single transition state.
* Typical of secondary or tertiary substituted alkyl halides. It is also observable with primary alkyl halides if a hindered base is used.
* The reaction rate is second order, influenced by both the alkyl halide and the base.
* Because the E2 mechanism results in the formation of a pi bond, the two leaving groups (often a hydrogen and a halogen) need to be coplanar. An antiperiplanar transition state has a staggered conformation with lower energy, while a synperiplanar transition state has an eclipsed conformation with higher energy. The reaction mechanism involving the staggered conformation is more favourable for E2 reactions.
* The reaction typically requires a strong base.
* In order for the pi bond to be created, the hybridization of the carbons needs to change from sp3 to sp2.
* The C-H bond is weakened in the rate-determining step, and therefore the deuterium isotope effect is larger than 1.
* This reaction type has similarities with the SN2 reaction mechanism. Saturated (sp3-hybridized) carbons will not react as readily by E2 as they will by E1, due to steric hindrance. If SN1 and E1 are competing for the reaction, elimination can be favoured by increasing the heat.
The fundamental elements of the reaction are:
* Breaking of the carbon-hydrogen and carbon-halogen bonds in one step.
* Formation of a carbon-carbon pi bond.
Scheme 1. E2 reaction mechanism
An example of this type of reaction, shown in scheme 1, is the reaction of isobutyl bromide with potassium ethoxide in ethanol. The reaction products are isobutylene, ethanol and potassium bromide.
E1 is a model to explain a peculiar type of chemical elimination reaction. E1 stands for unimolecular elimination and has the following specificities.
* It is a two-step process of elimination: ionization and deprotonation.
  o Ionization: the carbon-halogen bond breaks to give a carbocation intermediate.
  o Deprotonation of the carbocation.
* Typical of tertiary and some secondary substituted alkyl halides.
* The reaction rate is influenced only by the concentration of the alkyl halide, because carbocation formation is the slowest, rate-determining step. Therefore first-order kinetics apply.
* The reaction mostly occurs in the complete absence of base or in the presence of only a weak base.
* E1 reactions are in competition with SN1 reactions because they share a common carbocationic intermediate.
* The deuterium isotope effect is absent.
* Often accompanied by carbocation rearrangement reactions.
Scheme 2. E1 reaction mechanism
An example in scheme 2 is the reaction of tert-butyl bromide with potassium ethoxide in ethanol. E1 eliminations happen with highly substituted alkyl halides for two main reasons:
* Highly substituted alkyl halides are bulky, limiting the room for the one-step E2 mechanism; therefore, the two-step E1 mechanism is favored.
* Highly substituted carbocations are more stable than methyl or primary carbocations. Such stability gives time for the two-step E1 mechanism to occur.
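To make the kinetics quoted above concrete, here is a minimal sketch (added for illustration; the rate constants and concentrations are arbitrary example values, not data from the text). It shows how the stated rate laws respond when the base or nucleophile concentration is doubled: the first-order SN1/E1 rate, which depends only on the substrate, is unchanged, while the second-order SN2/E2 rate doubles.

```python
# Minimal sketch of the rate laws quoted in the text (illustrative values only).

def first_order(k, rx):
    """SN1 / E1: rate = k[RX]; the nucleophile or base does not appear in the law."""
    return k * rx

def second_order(k, rx, nu):
    """SN2 / E2: rate = k[RX][Nu-] (or k[RX][base]); second order overall."""
    return k * rx * nu

if __name__ == "__main__":
    rx = 0.10                   # substrate concentration, mol/L (illustrative)
    k_uni, k_bi = 1e-4, 5e-3    # illustrative rate constants
    for nu in (0.20, 0.40):     # doubling the nucleophile/base concentration
        print(f"[Nu-] = {nu:.2f}  SN1/E1 rate = {first_order(k_uni, rx):.2e}  "
              f"SN2/E2 rate = {second_order(k_bi, rx, nu):.2e}")
    # Output: the unimolecular rate is identical in both lines; the bimolecular rate doubles.
```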
In Depth: Vertebrae of the Neck
The cervical spine consists of seven vertebrae, and they are the smallest of the spinal column. Together, the vertebrae support the skull, move the spine, and protect the spinal cord, a bundle of nerves connected to the brain. All seven cervical vertebrae are numbered. The C1, the first vertebra in the column closest to the skull, is also known as the atlas, and the C2, the vertebra below it, is also known as the axis. The “C” stands for “cervical.” Many ligaments, or bands of connective tissue, wrap around the spinal column and connect its vertebrae. These ligaments also prevent excessive movement that could damage the spinal column. Each vertebra has a protrusion on its backside called the spinous process. It extends backward and slightly downward. This is where ligaments and muscles attach to the vertebra. Several muscles support the vertebrae of the spine. The spinalis moves the spine and helps maintain correct posture. It is divided into three parts:
- Spinalis cervicis: This muscle begins in the middle region of the spine and travels up to the axis. It may begin at the lower cervical vertebrae or the upper thoracic vertebrae. It helps extend the neck.
- Spinalis dorsi: This muscle begins at the upper thoracic vertebrae and extends down to the lower back.
- Spinalis capitis: This muscle is inseparably connected with another muscle in the neck, the semispinalis capitis.
The longus colli muscle begins at the anterior tubercle of the atlas at the top of the vertebral column and extends past the cervical spine to the third thoracic vertebra. This muscle is broad in the middle but narrow where it connects to the vertebrae. It helps move and stabilize the neck. The longus colli is the most commonly injured muscle in car accidents, when whiplash—the sudden jerking of the head at impact—occurs.
Supernova observations may explain cosmic rays
Move over CERN: a pattern of X-ray 'stripes' in the remains of a supernova may provide the first direct evidence that supernovae can accelerate particles to energies a hundred times higher than those achieved by the Large Hadron Collider. The discovery comes from observations of the Tycho supernova remnant with NASA's Chandra X-ray Observatory - and could explain how cosmic rays are produced.
"We've seen lots of intriguing structures in supernova remnants, but we've never seen stripes before," said Kristoffer Eriksen, a postdoctoral researcher at Rutgers University who led the study. "This made us think very hard about what's happening in the blast wave of this powerful explosion."
The team believes that magnetic fields become highly tangled, and the motions of the particles very turbulent, near the front edge of the expanding supernova shock wave. High-energy charged particles can bounce back and forth across the shock wave repeatedly, gaining energy with each crossing. Theoretical models of the motion of the most energetic particles – which are mostly protons – predict that they leave behind a messy network of holes and dense walls corresponding to weak and strong regions of magnetic fields, respectively.
The X-ray stripes discovered by the Chandra researchers are thought to be regions where the turbulence is greater and the magnetic fields more tangled than in surrounding areas, and may be the walls predicted by the theory. Electrons become trapped in these regions and emit X-rays as they spiral around the magnetic field lines. However, the regular and almost periodic pattern of the X-ray stripes was not predicted by the theory.
"It was a big surprise to find such a neatly arranged set of stripes," said co-author Jack Hughes, professor of physics and astronomy at Rutgers. "We were not expecting so much order to appear in so much chaos. It could mean that the theory is incomplete, or that there's something else we don't understand."
Supernova remnants have long been considered a good candidate for producing the most energetic cosmic rays in our Galaxy. The protons can reach energies that are hundreds of times higher than the highest energy electrons, but since they don't radiate efficiently like the electrons, there's been no direct evidence until now for the acceleration of cosmic ray protons in supernova remnants.
Earth Observations from Space: The First 50 Years of Scientific Achievements (November 2007)
Report in Brief
Over the past 50 years, thousands of satellites have been sent into space on missions to collect data about the Earth. Today, the ability to forecast weather, climate, and natural hazards depends critically on these satellite-based observations. At the request of the National Aeronautics and Space Administration, the National Research Council convened a committee to examine the scientific accomplishments that have resulted from space-based observations. This report describes how the ability to view the entire globe at once, uniquely available from satellite observations, has revolutionized Earth studies and ushered in a new era of multidisciplinary Earth sciences. In particular, the ability to gather satellite images frequently enough to create "movies" of the changing planet is improving the understanding of Earth's dynamic processes and helping society to manage limited resources and environmental challenges. The report concludes that continued Earth observations from space will be required to address scientific and societal challenges of the future.
A Brief Ecological History of Hawaiʻi Beyond being a nice place to park one’s beach chair and umbrella, what is the ecological reality behind postcard Hawaiʻi? Gazing out at the varied landscapes around us, what are we looking at? How did the islands form? How did things become as they are? In terms of ecological health, are the islands thriving, or suffering? In a place that has been so transformed by the hand of humanity in recent centuries, how does one begin to get oriented ecologically, and what does “natural” even mean here? The answer to “What makes Hawaiʻi, Hawaiʻi?” depends on who you ask, for Hawaiʻi, like beauty, lies in the eye and the mind of the beholder. On a quest for local ecological literacy we turn first to the original authorities – the indigenous Hawaiians. From Polynesian roots a unique Hawaiian culture grew and blossomed for over a millennium, with its own language, cosmology, mythology, and arts. Native planters, fisherfolk, and healers developed a rich knowledge of the island ecosystems, along with values and skills that allowed them to thrive here. They named and recognized the landforms and waterways, the thousands of native and cultivated plants and animals, stars, breezes, and rains. Everything had a name and a meaning, and all was part of a larger, cohesive story that seamlessly wove together human life with land and sea. As a complement to indigenous knowledge we can also turn to the diverse specialists of today, the geologists, marine biologists, botanists, ornithologists, archaeologists, and various others who delve into the details. And not to be overlooked are the contributions of everyday experts who pay close attention to local nature: kupuna, fishermen, hunters, snorkelers and divers, nature photographers and writers, shell collectors, lei-makers. From all these sources a new story emerges of how the islands came to be, how they have changed over the eons and decades, how they fit into the larger Epic of Evolution, and where things stand today. If there is one thing that distinguishes Hawaiʻi in the planetary context it is its role as one of Earth’s most remote evolutionary outposts. The islands’ isolation bred originality – not the most species overall, compared to somewhere like the Amazon basin, but the highest percentage of unique species, packed into a tiny area. In Hawaiʻi, the alchemical workings of evolution were amplified and compressed in both time and space. The resulting mosaic of species and ecosystems begs comparisons, all inadequate: an intricate symphony, an artistic masterpiece, a living art museum. Yet the complex ecological mosaic that evolved over millions of years has changed in the blink of an eye. The first wave of changes came a millennium ago with the arrival of Polynesian voyagers, and the plants and animals they brought with them. The second wave, with more sweeping changes, began just over 200 years ago when the Western world “discovered” Hawaiʻi. Since then the ecological transformation of the islands has accelerated. Large areas of native forests have been replaced by monocrop agriculture and cattle pasture. Common plant and animal species from around the world have been imported and have displaced many of the island rarities, gaining Hawaiʻi the title of “extinction capital of the world”. While much of the islands’ biodiversity has been lost, and much of the rest has been pushed to the margins, Hawaiʻi remains a precious yet imperiled ecological treasure. 
As we sit perched on the threshold of unprecedented planetary change, big questions arise: How can we live in this place without further damaging it? How can we reinhabit the islands, defending and restoring native ecosystems, while also working with the ʻaina, the living land, to sustain us? A crucial first step is to open our eyes and develop a deeper understanding of where on Earth we are.
Origins in the dark depths
Starting at the foundation, there are two keys to understanding the Hawaiian Islands’ formation: plate tectonics, and the Hawaiian Hot Spot.
First, plate tectonics. The Earth’s rocky crust, both dry land and the seafloor, is divided into a number of irregularly shaped sections or plates – like a spherical jigsaw puzzle. For billions of years these plates have ‘drifted’ in slow motion on the planet’s semi-fluid mantle. Along some of their margins the plates diverge, and magma seeps upward to form new areas of rocky crust. Elsewhere plates converge and collide, forming buckled and folded mountain ranges. Or one plate may slide beneath an adjacent plate, consumed as it descends into Earth’s fiery interior.
Hawaiʻi lies at the center of the Pacific plate. This plate is continually being created along its southeastern edge at the East Pacific Rise. At the plate’s northwestern margin it descends into the Aleutian Trench, where its crust is recycled into the Earth’s hot mantle layer. As a result, the Pacific plate, and Hawaiʻi along with it, are moving to the northwest at around 3 inches per year (or roughly 50 miles every million years).
The second key is the Hawaiian Hot Spot, a plume of magma that emerges from deep within the Earth’s interior. For over 80 million years lava has oozed from the magma plume onto the seafloor, where it piles up to form volcanoes. The name of Hawaiʻi’s native creation chant, the Kumulipo, translates as “beginning in the depths of darkness”. And so began the history of every Hawaiian Island past and present, in the darkest depths of the ocean, some 15,000 feet beneath the surface.
As the Pacific plate slowly creeps over the Hawaiian Hot Spot new volcanoes and islands continually form, and the older ones are carried off to the northwest. The result is the Hawaiian-Emperor volcano chain, a 3600-mile ridge of seamounts, coral atolls, banks, reefs, and tall volcanic islands that extends halfway across the northwestern Pacific. New volcanoes and islands are created at the southeastern end of the chain, at the hot spot’s multiple vents. The youngest island in the chain, Hawaiʻi (the Big Island), is still being formed as the Kilauea Volcano spills fresh lava day and night. Younger still is the Lōʻihi seamount 22 miles off the southeastern coast of the Big Island, which is not projected to rise above the sea surface for another 50,000 years or more.
The rise and fall of Hawaiʻi’s volcanoes
The Hawaiian archipelago has been shaped by opposing forces, construction and destruction. The islands are built up papier-mâché style, one thin lava flow at a time. The result of thousands of overlapping lava flows is a massive broad dome, a shield volcano. Yet simultaneous with a shield volcano’s growth and ascent are the downward forces of landslides, subsidence, and erosion. Enormous landslides – among the largest documented on Earth – have resulted in the steep slopes of Molokaʻi’s northeastern coast and the eastern face of Oʻahu’s Koʻolau Range. Beneath the ocean surface, fans of rock debris from submarine avalanches extend for over 100 miles away from the islands.
In some cases the catastrophic landslides are believed to have caused massive tsunami waves of up to 1,000 feet! Subsidence occurs when the ocean floor actually sags under the weight of a volcanic island. This is understandable, since the Hawaiian shield volcanoes are some of the most massive structures on Earth. In fact, the Big Island’s Mauna Kea and Mauna Loa, in addition to Maui’s Haleakalā, are actually taller than Mt. Everest when they are measured from the seafloor!
Erosion begins as soon as a volcanic island rises above the sea surface. Waves pound the island’s shores, eventually forming features that include sea stacks and arches. Flowing water carves gullies that will eventually become stream valleys. Today the islands have a fantastic array of landforms, from the ‘young’ shield volcanoes of the Big Island, to the heavily eroded mountains, canyons, and knife-edge ridges of the older islands.
On a larger scale, erosion and subsidence have changed the islands’ overall configuration. For example, Maui is the second-youngest and currently the second-largest of the Hawaiian Islands. Yet for most of its existence Maui was part of a much larger land mass referred to as Maui Nui (Big Maui). At its largest extent, 1.2 million years ago, ancient Maui Nui was 40% larger than today’s Big Island, and also encompassed the islands of Lanaʻi, Molokaʻi, and Kahoʻolawe. For a time Oʻahu was even connected to Molokaʻi. In the intervening millennia erosion and subsidence have led to the breakup of Maui Nui. During the alternating glacial and interglacial periods of the Pleistocene, Maui, Lanaʻi, and Molokaʻi were repeatedly unified and separated as sea levels fell and rose. The most recent Maui Nui land bridge is estimated to have occurred 18,000 years ago at the height of the last Ice Age.
Eventually the mountainous islands of today will be worn down to sea level, just as the northwestern Hawaiian Islands already have been. And for as long as plate tectonics and the Hawaiian Hot Spot are functioning, new islands will be forming all the time to replace the older ones.
The arrival of life
Had you been a giant pterosaur cruising over just the right spot in the central Pacific some 80 million years ago or more, you might have noticed a faint smell of hydrogen sulfide or some other gaseous cocktail on the breeze. Investigating more closely, you might have discovered a plume of steam and ash rising thousands of feet from the ocean’s sputtering and explosive surface, marking the emergence of the first Hawaiian Island from the blue depths.
Having already risen from the seafloor for a million years or more, the slopes of the first Hawaiian seamount were likely discovered by marine life long before any hot lava rock breached the surface. For marine currents carry tiny creatures to even the most remote parts of the oceans, things like larval crustaceans, bits of coral, or seaweeds. As for the original life on dry land, no one knows which species arrived first in Hawaiʻi. Some insight into the islands’ colonization by terrestrial lifeforms can be gleaned from the planet’s recently formed islands, like Surtsey, which formed in 1963 off the southern coast of Iceland. By 1965 the first vascular plant had colonized the bare lava, followed in subsequent years by mosses, lichens, and fungi. Early visits by birds brought seeds and valuable nutrients for soil development, and by 2008, 30 plant species had become established on the island.
Marine species like starfish, urchins, and seaweed abound, as well as larger species like seals. While this sheds some light on island colonization, it is important to keep in mind that Surtsey lies only 20 miles or so from the Icelandic coast, compared to Hawaiʻi’s 2500 miles from the nearest continent or high island group. The odds of any species reaching Hawaiʻi are therefore much smaller.
The original plant and animal species that arrived in Hawaiʻi (or the earlier Emperor chain) did so via wind, water, or on the wing. The lucky arrivals came from all directions, with some arriving from Asia and North America, and 85% or more from the Indo-West Pacific region. Wind brought fungal and algal spores, and helped insects and even birds reach the islands. Birds brought plant seeds, either in their gut or embedded in muddy feathers or feet. Marine species were carried by ocean currents, while others likely hitched a ride on mats of floating vegetation.
Successful plant and animal colonizations in Hawaiʻi were extremely rare. But what was challenging for colonization – vast travel distances and small target islands – was good for evolution. Each new island that emerged above the waves provided a blank canvas for life’s creative expression. For the few species that were able to establish a foothold in the islands, the opportunities for populating new niches and evolving into new species were vast. It is estimated, for example, that Hawaiʻi has around 1,000 species of native flowering plants, and 90% of those are endemic, having evolved from only a handful of founder species.
In some cases plant evolution truly went wild in the islands, the most impressive being that of the lobeliad plant family. With at least 126 species, the Hawaiian lobelias are the largest plant species group to inhabit any archipelago in the world, and all are derived from a single founder species that arrived in the islands around 13 million years ago. Inhabiting moist and wet forests on volcanic slopes, the largest lobeliad genus, Cyanea, has its maximum species diversity on Maui. Hawaiian lobeliads are a prime example of adaptive radiation. Under this type of evolution, the single founder species spread out into varied habitat types at different elevations, and gradually developed new growth forms – including trees, succulents, vines, and epiphytes – with a variety of specialized flowers and other features. As the Hawaiian lobeliads filled new niches they also became “keystone mutualist species”, instigating further adaptive radiation in their pollinator species, including birds, hawk moths, and flies.
Other exceptional examples of evolution in Hawaiʻi include:
The Hawaiian Honeycreepers. With well over 50 species emerging from a single ancestor over the span of 5 million years, the honeycreepers represent the most spectacular example of rapid bird species evolution anywhere in the known universe. Darwin’s famous Galapagos finches were a group of only 15 species and were comparatively similar to one another. The honeycreepers’ explosive creativity resulted in an array of beak shapes and functions: small thin beaks for hunting insects among the foliage; heavy beaks for cracking hard seeds; specialized beaks for peeling back tree bark; the twig-snapping grub-grasping beak of the Maui parrotbill (kiwikiu); the scarlet iʻiwi’s sickle-shaped bill, perfect for sucking nectar from tubular Lobelia flowers.
Haleakalā’s round-waisted predatory beetles.
Thought to be descended from a single Australian species, these Mecyclothorax beetles have radiated into 239 Hawaiian species in less than 2 million years – a riotous rate of evolution. Their center of diversity is found in the rain forests, ravines, and bogs of windward Haleakalā on Maui, and 116 species are found on that one volcano alone. Twenty species are found in an area just over one square mile, making them one of the most geographically dense collections of similar species in the world. They can be found in clumps of moss on branches of 'ōhi'a trees, underneath bark, or in leaf litter on the forest floor. Unlike their continental cousins, these beetles do not fly, having only vestigial wings.
Drosophila flies. Famous as the go-to ‘fruit flies’ of laboratory genetics, the drosophilids number over 800 species in Hawaiʻi, and more than 90 percent of those are single-island species. Among these are the Hawaiian picture-wing flies, known for their ‘stained glass’ wing patterns and elaborate courtship displays involving dancing, singing, and head-butting. A number of Hawaiian drosophilids are documented “recolonizers” – lineages that colonized Hawaiʻi, evolved into new species, and then reversed the trend by colonizing other island groups and continents.
Hawaiian Hyposmocoma moths. Here is another remarkable example of insect evolution, with over 350 species found only in the islands. Also known as “fancy-cased caterpillars”, the moth larvae are known for the diverse protective cases (like cocoons) they spin from silk. These colorful cases have forms that have been categorized as cigars, cones, bugles, and burritos. On Maui and Molokaʻi, certain Hyposmocoma caterpillars have developed the ability to entrap snails in a silky web. From there the caterpillars crawl into the snail’s shell where they eat their prey alive. Even more jaw-dropping are the handful of species whose larvae have acquired amphibious abilities. They spend up to a month underwater in rushing streams, absorbing oxygen directly into their soft bodies – true amphibious caterpillars!
Terrestrial Snails. Known as “jewels of the forest” for their dazzling variety of shell colors, patterns, and shapes, Hawaiian land and tree snails once numbered over 750 species. Weekend shell collecting expeditions were a common pastime in the latter half of the 19th century. Sadly, an estimated 90% have already been driven extinct by habitat loss, overcollection, and invasive carnivorous snail species.
Reef fishes. While shallow reefs have around 20% endemic species (found only in Hawaiʻi), deeper reefs such as those of the northwestern islands’ Papahānaumokuākea Reserve have 50% or more endemic species – a higher rate than anywhere else in the world. Hawaiian endemics include such rock stars as the bandit angelfish, scarface blenny, fantail filefish, and psychedelic wrasse.
Hawaiʻi’s mosaic of natural landscapes
Hawaiʻi’s diversity of species is mirrored by its diversity of ecosystems, with over 175 distinct natural community types identified. From snow-capped alpine summits, to cloud forests, to unique coastal pools, the islands’ natural landscapes are a patterned mosaic, shaped by climate, elevation, and topography. Prevailing northeasterly trade winds bring copious rainfall to the windward slopes of Maui and the Big Island, and the summits of Molokaʻi, Oʻahu, and Kauaʻi. Here are found some of the wettest places on Earth, supporting a variety of rainforest communities and boggy areas.
Trees such as koa and ʻōhiʻa lehua thrive on the moist and wet mountain slopes, with moss-covered tree trunks, tree ferns, and rare plants in the understory. The leeward slopes and lower islands such as Lanaʻi and Kahoʻolawe were originally cloaked with a tropical dry forest. This forest type once supported the highest tree species diversity in the islands, but is now among the most threatened of Hawaiian landscapes. Extensive dry forests were once widespread on leeward “rain shadow” slopes of the high islands. Forests with forty or more tree species, including wiliwili, lama, and naio, once supported Hawaiian honeycreepers and extinct flightless birds such as the goose-like moa-nalo. With increasing elevation on the volcanic slopes, closed forest grades into open parkland or subalpine shrubland. From 6000 feet to the treeline at 9000 feet, the mixed shrublands occupy a drier zone than the wet windward forests. Shrubs of this zone include mamane, pukiawe, ‘ohelo, and aʻaliʻi. In some areas shrubs are replaced by native Deschampsia grasslands. Other natural communities of the islands include montane bogs and coastal wetlands, sand dune plant communities, lava tube caves, and wet and dry cliff ecosystems. Intact freshwater streams support a number of native fish and crustacean species that spend the juvenile portion of their life in the open ocean, before returning upstream for adulthood. Intertidal and marine environments provide a multitude of habitats: tide pools on lava benches, wave surge zones, seagrass beds, sandy beaches, reef flats, and deepwater rubble slopes. With so many types of climate and ecological zones packed into such a small area, Maui and the other large Hawaiian islands truly resemble miniature continents, microcosms of the living planet. Enter the Polynesians Imagine being the very first Polynesian seafarers to reach Hawaiʻi over a thousand years ago. It must have been like arriving in a mythical enchanted place. Seas teeming with life, flocks of noisy seabirds, breaching whales. Beaches that had never seen a human footprint. Pristine forests and rivers in their wild splendor. Large flightless birds without fear of humans. Soon enough the landscape would begin to show the imprint of humankind – and the various other species that arrived on the voyaging canoes. The discoverers came prepared, bringing over 20 so-called ‘canoe plants’ that would provide food, fiber, medicine, containers, and building materials. Among the food plants brought to the islands were taro, yam, sweet potato, sugar cane, banana, and mountain apple, as well as multi-use plants like coconut, ti leaf, and noni, gourds and bamboo. The candlenut or kukui tree provided oil-rich seeds that burned like candles, and wauke (paper mulberry) was cultivated as a source of tapa cloth. A number of these plants became naturalized in the lowland forests, where they are still common today. The early Polynesian navigators also brought select animal species like dogs, chickens, and pigs, plus some that likely arrived as stowaways including Polynesian rats and a few lizard species. The ecological impacts of pigs, chickens, and dogs in the early Polynesian period are thought to have been relatively minor compared with later waves of introduced animals. The Polynesian rat, however, may have had a significant impact on native ecosystems by eating plant seeds and seedlings, as well as preying on native ground birds and their eggs. 
Many of these birds -- flightless ducks, geese, ibis, and others – went extinct after humans arrived, likely due to the combined impacts of hunting, rats, and the destruction of lowland forests. Early Hawaiian agriculture is thought to have used swidden (slash and burn) methods, where small plots were cultivated for a time, and then left to go fallow. In this period, sustenance would have largely been provided by the abundant marine life, including reef fish, shellfish, and seaweeds. With time, Hawaiʻi’s native population increased, and agricultural methods and land use were intensified. In wet valleys, streamflow was diverted into a complex system of irrigation works and flooded plots (loʻi) for wetland taro cultivation. Prime upland areas were also planted with sweet potatoes, dryland taro, and other crops such as banana and ʻulu (breadfruit). In addition to the irrigation works, Hawaiians’ engineering skills also extended to coastal wetlands and nearshore shallows, where fishponds were built. These ponds were enclosed by hand-built stone walls and had sluice gates that regulated the passage of fish between the ponds and ocean. Beyond their cultivated areas, it is also thought that Hawaiians changed the islands’ native landscapes through their intentional use of fire. Early European visitors noted that Hawaiians would set fires to encourage the regrowth of pili grass, which was used as a roof thatch material. This activity, plus cutting trees for house construction, canoe building, and especially firewood, appears to have gradually had a substantial impact on lowland forests and coastal vegetation. Higher elevation ecosystems, however, were likely minimally impacted prior to European contact, and even in many lowland areas, at least some examples of native ecosystems remained. The extent to which native Hawaiian ecosystems and species continued to thrive is all the more impressive when one considers that the islands’ pre-European population has been estimated at 300,000-800,000 or more people. This was made possible in part by the Hawaiians’ lack of grazing livestock, but also by their specialized agricultural knowledge and techniques. For crops such as taro – traditionally referred to by Hawaiians as their ancestor or elder brother – native farmers were said to have grown up to 300 varieties, many of which were developed in Hawaiʻi. For taro, sweet potato and other crops, there were crop varieties, cultivation techniques, and planting calendars specific to the islands’ varied microclimates. This level of horticultural mastery was made possible by the Hawaiians’ highly refined observation, taxonomy, and plant breeding skills, as well as their knowledge of island soil and weather conditions. Native Hawaiians’ food and resource self-sufficiency was rooted in an overall approach to landscape management called the ahupuaʻa system. The ahupuaʻa were land areas that usually corresponded to the watershed of a given stream, often resulting in land units shaped like slices of a pie. In this way each ahupuaʻa encompassed a spectrum of ecological resource zones including alpine, montane, lowland, coastal and even offshore areas, and provided the subsistence base for an extended family or community. Knowing that the survival of one’s family depended on the ecological health of a particular area certainly promoted soil and water conservation, as well as an overall sense of responsibility, reverence, and caring for that place. 
This combination of knowledge, skills, and values allowed the Hawaiians to thrive in island ecosystems for an entire millennium.
The Euroamerican Transformation of Hawaiʻi
Captain Cook arrived in the islands in 1778, and for both the Hawaiians and their land, changes rapidly followed. In addition to introducing goats to the islands, Cook’s expedition also brought venereal diseases to the natives. Cook was followed in turn by increasing numbers of Europeans who brought smallpox, measles, whooping cough, and influenza. These diseases led to the decimation of the Hawaiian population over the next century, and the decline of cultural practices that had kept native ecosystems relatively intact.
In the first half of the 19th century the biggest impacts to Hawaiian ecosystems came from introduced livestock. The native forests evolved without hoofed mammals, the largest indigenous herbivore being the moa-nalo – a 3-foot-high goose with a jagged beak that went extinct centuries before the Europeans arrived. This changed suddenly when George Vancouver brought Hawaiʻi’s first cattle to the Big Island in 1793. The long-horned cattle were allowed to roam and reproduce freely for their first 10 years in Hawaiʻi, and before long, large herds of feral cattle were free-ranging on the main Hawaiian Islands. The cattle degraded montane forests that had been scarcely impacted by native Hawaiians, compacting soil, trampling roots, and damaging native plants in the understory. Goats were equally harmful, from lowland dry forests all the way up to alpine environments. They were capable of denuding native vegetation, causing extensive erosion and siltation of reefs and fishponds. In addition to goats and cattle, sheep were also introduced to Hawaiʻi in the 1790s. Raised extensively on Lanaʻi, Niʻihau, the Big Island, and Kahoʻolawe, sheep removed woody vegetation, leading to erosion of topsoil and converting soils to a sun-baked red hardpan.
As the native Hawaiian population plummeted during the 19th century, the ahupuaʻa system of land stewardship faded, and extractive land uses took over. The early part of the century was marked by logging for the sandalwood trade, in which shiploads of the fragrant wood were exported to China. Additional pressure was put on the islands’ dry and moist forests during Hawaiʻi’s whaling era (1820-70). The whalers needed large quantities of firewood to render whale oil, and as a result the hillsides around Lahaina and other port towns were deforested.
As harmful as the effects of livestock and logging were, the wholesale destruction of Hawaiian forests only accelerated with the arrival of commercial export agriculture. The catalyst came in 1848, when the “Great Mahele” land reform enabled non-Hawaiians to acquire and own land in Hawaiʻi. The cultivation of sugar accelerated in the 1870s, and pineapple around the turn of the century. Large-scale irrigation works allowed vast areas to be cultivated as monocrop systems. Forests were extensively cleared for cane and pineapple fields, and additional areas were deforested to provide fuel for the wood-fired boilers in sugar mills. Plantation workers were imported, mostly from Asia. Also around the turn of the century several large cattle ranches were established in the islands. Native forests were cleared, replaced by alien grasses and legumes such as broomsedge and kikuyu grass. In response to feral ungulates destroying forests, fence-building and reforestation efforts were underway as early as the late 19th and early 20th centuries.
After 1904, forest reserves were created to protect watersheds, but the species planted were often fast-growing invasive tree species.
The last hundred years in Hawaiʻi have been marked by the widespread introduction of alien species into the islands:
- Alien plants. Several hundred species of non-native plants have become naturalized, including dozens of problem invasives. Plants including banana poka, miconia, gingers, strawberry guava, grasses, koa haole, and lantana have been able to outcompete native plants for water, sunlight, and nutrients.
- Black rats. Arriving in the islands in the late 1800s, they cause major harm to native plants, girdling branches and eating seeds, as well as killing native birds or eating their eggs.
- Asian mongoose. Imported in an attempt to control rats in the sugarcane fields, these predators instead had a major impact on ground-nesting native birds.
- Axis deer. Originally from India, they were introduced to Molokaʻi and Oʻahu in the late 1800s. Introduced to Maui in the 1950s, their population continues to swell.
- Alien bird species. Over 50 non-native bird species reproduce in the wild in Hawaiʻi, and compete with the native bird population for food and habitat. Non-native birds also serve as a disease vector for avian malaria, which is fatal to native birds. While non-native birds spread weed seeds, there is also evidence that they help distribute seeds of native plant species.
- Alien insects. Problem varieties include moths, weevils, bees, flies, and ants. Some compete with or prey on native insects, and others can spread fungus and other plant diseases.
Habitat degradation and the arrival of alien species can often have domino effects. For example, the spread of invasive species like strawberry guava and non-native earthworms provides additional food sources for feral pigs, allowing them to expand their range deeper into native forests. The pigs, in turn, eat native plant shoots, transport alien plant seeds, and create muddy trails that serve as seedbeds for the alien plants. Pools of standing water created by rooting pigs also create habitat for the mosquitoes that carry avian malaria.
On a positive note, many of these issues are now well understood, and conservationists are hard at work to address them. Large areas of native forests are being managed to exclude invasive species, and collaborative watershed protection has made great strides in recent years. Efforts are underway to restore Hawaiian ecosystems, and bring native plants, birds, and other species back from the brink of extinction.
Meanwhile new challenges continue to arise. A recent concern that has so far evaded solution is rapid ʻōhiʻa death, a disease affecting the islands’ most important native tree species, the ʻōhiʻa (or ʻōhiʻa lehua) tree. First identified in 2014, the fungal disease has been spreading rapidly on the Big Island, where tens of thousands of acres have been affected. The ʻōhiʻa is a vital source of food and habitat for several native birds and insects. The loss of ʻōhiʻa and replacement with non-native trees would have widespread impacts on island biodiversity. Efforts are underway to contain the disease, but there is no known remedy.
A War on the ʻAina
Hawaiʻi’s population has steadily grown over the past hundred years, and has doubled over the past 50 years to 1.4 million.
This increase has been coupled with urban and suburban sprawl, especially on Oʻahu, where forests and agricultural lands have been converted to housing subdivisions, military installations, and commercial areas. From the 1960s onward, the tourism industry expanded on all the main islands, bringing additional development to coastal areas, and replacing natural habitats with hotels, restaurants, parking lots, and golf courses.
The decades after World War II saw the industrialization of commercial agriculture in Hawaiʻi, with increased use of chemical fertilizers, herbicides, and pesticides. Direct effects included contaminating groundwater, harming pollinators, and killing soil microorganisms. Monocrop agriculture also impacted ecosystems and human health beyond field boundaries via herbicide overspray, soil erosion, and reef sedimentation. Decades of burning sugar cane fields on Maui – complete with plastic irrigation lines – polluted the island’s air. Barren and contaminated soil was repeatedly exposed and removed as red clouds of dust that settled on neighboring communities and nearshore waters. Soil food webs are the basis for terrestrial ecosystems, where nutrients are recycled and made available for plants and ultimately animals, including humans. Commercial agriculture has effectively conducted a war on Hawaiʻi’s soils, leaving them compacted, contaminated, and stripped of microorganisms and nutrients. With the decline of pineapple and sugarcane cultivation, large areas of degraded lands remain. No longer irrigated, these lands are colonized by drought-resistant invasive grasses and shrubs, and are highly fire-prone.
In recent decades Hawaiʻi has seen a growing presence of major transnational seed and agrochemical companies, including Monsanto, Dow, Syngenta, BASF, and DuPont-Pioneer. These companies are using Hawaiʻi as an open-air laboratory for genetically modified crops, largely corn and soy. According to a 2015 report by the Center for Food Safety, after sugarcane and pineapple, seed crops take up 72% of the remaining planted area in the islands. In addition to the potential dangers of genetically modified crops for human health and the environment are the dangers posed by using a wide variety of toxic agrochemicals at high volumes. Most of the seed crop research is aimed at creating herbicide-resistant crops, and the experiments require exposing crops to an intense regimen of pesticide spraying. Along with glyphosate (the active ingredient of Monsanto’s Roundup), several restricted-use chemicals are used, including alachlor, atrazine, chlorpyrifos, methomyl, metolachlor, paraquat, and permethrin. Atrazine and paraquat are both banned in Europe, yet both are still being applied in Hawaiʻi each year. These chemicals end up in the islands’ soils, streams, groundwater, and oceans – as well as drifting into nearby neighborhoods and schoolyards. According to an American Academy of Pediatrics study cited in the Center for Food Safety report, pesticides have been linked to “childhood cancers, neurobehavioral and cognitive deficits, adverse birth outcomes, and asthma”. On Maui there is currently concern that the end of the island’s sugarcane industry will open up additional acreage for the expansion of the seed crop industry, as happened on Kauaʻi following sugar’s decline.
Hawaiian ecosystems under a rapidly changing climate
Beyond the immediate threats to Hawaiʻi’s natural environment are those looming impacts associated with global climate change.
The potential changes will affect all areas from high elevation forests to beaches, and from coral reefs to the open ocean. On land, climate models predict warmer and drier conditions – at least in the already dry parts of the islands. Wet areas may become wetter, and episodes of more intense rainfall are expected, making flooding more common. Warmer and drier conditions will pose a challenge to native plants and animals that are unable to adapt quickly enough or relocate. Warmer temperatures will allow the spread of avian malaria higher up the volcanic slopes, impacting endangered forest birds that have no immunity to the imported disease. This is especially a concern on Kauaʻi, where there are no high elevation mountains for birds to take refuge in. Endangered forest birds are also vulnerable to the increasingly intense storms that are likely to reach the islands. Tropical storm and hurricane-force winds impact forests and make them more susceptible to invasive plants, further degrading bird habitat. Kauaʻi’s native thrush, the kamaʻo, has not been seen since Hurricane Iniki damaged the island’s forests in 1992.
Rising sea levels threaten Hawaiʻi’s beaches and varied coastal habitats. Recent measurements of polar melting have led to a doubling of predicted sea level rise, up to 3-6 feet by 2100. If greenhouse gas emissions continue at current rates, several dozen feet of sea level rise can be expected in the next few centuries. In places where coastal ecosystems are squeezed between the sea and developed areas, rising sea levels will leave nowhere for plants and animals to go. Contamination from the inundation of coastal cities and infrastructure – roads, gas stations, power plants, etc. – is likely to become an increasing concern. Waikiki and much of downtown Honolulu, for example, may well be under water by century’s end. Debates are already underway over the logic of trying to temporarily protect highways, condos, and homes with expensive seawalls that degrade coastal habitats.
Up to this point most of the heat that has been trapped in the Earth system by global warming – around 93% – has been stored in the oceans. Rising ocean temperatures threaten the survival of coral reefs globally and in Hawaiʻi. When waters become too warm, the relationship between coral polyps and symbiotic algae breaks down, resulting in “coral bleaching”, and leaving reefs in a vulnerable state. Record heat in 2015-16 led to a major global coral bleaching event that impacted Hawaiʻi and the western Pacific, where over 90% of Australia’s Great Barrier Reef was affected. Future warming is predicted to make bleaching events more common, and globally corals and the many reef species that depend on them are at risk of extinction in this century.
By burning fossil fuels and dumping greenhouse gases into the atmosphere, industrial civilization is altering not only the global climate, but also the chemical composition of the planet’s atmosphere and oceans. Atmospheric carbon has been increasing more rapidly than at any other time for millions of years, and much of this carbon is being dissolved into the oceans. This results in ocean acidification, a process that is damaging to shellfish, marine plants, and plankton that are vital to marine ecosystems. Ocean acidification also reduces coral’s ability to regrow after bleaching events. Ocean acidification has been identified as a likely cause of a number of previous mass extinction events in Earth’s history.
Among these events was “the Great Dying” at the end of the Permian geological period, 252 million years ago. In that episode an estimated 96% of all marine species went extinct, along with 70% of terrestrial vertebrate species. If we continue burning fossil fuels and clearing forests, humanity runs the risk of causing a Permian-style extinction event in the years ahead. In addition to ocean acidification, pumping more carbon into the oceans and raising ocean temperatures is also predicted to alter the metabolic activities of key ocean bacteria responsible for maintaining the ocean’s nitrogen cycle and atmospheric sulphur cycles. Warming oceans are also less able to retain dissolved oxygen, further impacting marine food webs, and promoting the growth of bacteria that produce deadly hydrogen sulfide gas. Thus despite its isolation, when it comes to climate change Hawaiʻi is in the same boat as the rest of the planet. Industrial civilization is pushing Earth toward a number of ecological thresholds or tipping points, and is threatening to render the planetary environment inhospitable to higher life forms, humans included. After a long journey, a crossroads This tale started 80 million years ago in the dark depths of the seafloor, where red hot magma rose from Earth’s interior. From that fiery foundation the first Hawaiian island grew until it emerged above the Pacific swells, and soon afterward, the pioneering seeds, spores, and winged visitors arrived. So began one of Earth’s most spectacular expressions of evolution, where a few founder species would become a dazzling variety of plants and animals that filled every niche on land and sea. The first Polynesian seafarers settled the archipelago a thousand years or more ago, and gradually refined and adapted their culture to the island ecosystems. Then a mere 200 years ago the islands were incorporated into the global economy, and their ecological transformation continues today. In Hawaiʻi and around the world, humanity now stands at a crossroads, faced with the biggest challenge and the toughest choices in our history as a species. How do we want the next chapters in this story to unfold? One path is that of business as usual. Continue treating the planet as a storehouse of goods to be plundered. Continue burning fossil fuels and placing unrestrained consumption and corporate profit above all else. Continue telling ourselves that we will deal with the problem later, and that some technological breakthrough will absolve us. Yet none of this will stop the trends or change the facts. Humanity is conducting a radical experiment with the planetary ecology, and anything resembling business as usual will result in unimaginable ecocide and human suffering. Another path is that of reinhabiting the Earth, place by place. This means learning to live with restraint, in ways that do not degrade either local or distant ecosystems. This will entail joining with others to localize our food and energy production, protect watersheds, restore native ecosystems, and defend the full diversity of species. This path is rooted in ecological literacy, love for the living world, and a willingness to get real about the implications of our actions for our children, grandchildren and all future generations. The challenge before us is to find our own ways to contribute, to step up as volunteers, as voters, and as leaders in our lives. This is the Great Work of our times, and will take our commitment both locally, and as a planetary species, united. 
For nature lovers, Hawaiʻi’s story can be disheartening. Much harm has already been done, and much has been lost that can never be regained. Yet there is so much left to celebrate. Whale songs still fill the winter waters. Spiny lobsters ply the reefs. Flashes of red and yellow feathers still dart among the trees of mountain forests. In the depths of rarely visited valleys, obscure little creatures continue to evolve, finding new and better ways to do what they do. May we do the same – and with aloha.
References
AAP (2012) Pesticide Exposure in Children. Policy Statement, Council on Environmental Health of the American Academy of Pediatrics. Pediatrics 130(6): e1757-e1763. http://pediatrics.aappublications.org/content/130/6/e1757.full.html
Carson HL. Feb 1982. Evolution of Drosophila on the newer Hawaiian volcanoes. Heredity (Edinb). 48(Pt 1):3-25.
Chinen, Jon J. 1966. The Great Mahele: Hawaii's Land Division of 1848. Honolulu: University of Hawaii Press.
Cox, Barry, Richard Ladle, Peter D. Moore. 2016. Biogeography: An Ecological and Evolutionary Approach.
Cowie, Robert H, Brenden S Holland. 27 October 2008. Molecular biogeography and diversification of the endemic terrestrial fauna of the Hawaiian Islands. Phil. Trans. R. Soc. B 363: 3363-3376.
Cuddihy, Linda and Charles Stone. 1990. Alteration of Native Hawaiian Vegetation: Effects of Humans, their Activities, and Introductions.
Freese, Bill, et al. 2015. Pesticides in Paradise: Hawaiʻi's Health and Environment at Risk. Center for Food Safety. http://www.centerforfoodsafety.org/reports/3901/pesticides-in-paradise-hawaiis-health-and-environment-at-risk
Fritz, Angela. March 30, 2016. Scientists say Antarctic melting could double sea level rise. Here’s what that looks like. https://www.washingtonpost.com/news/capital-weather-gang/wp/2016/03/30/what-6-feet-of-sea-level-rise-looks-like-for-our-vulnerable-coastal-cities/?tid=a_inl
Givnish, Thomas J et al. 7 February 2009. Origin, adaptive radiation and diversification of the Hawaiian lobeliads (Asterales: Campanulaceae). Proc. R. Soc. B 276: 407-416.
Givnish T. October 2010. Ecology of plant speciation. TAXON 59: 1326–1366.
Goldfarb, Audrey. July 18, 2016. Rapid 'Ohi'a Death Threatens Habitat for Hawaiʻi's Forest Birds. https://abcbirds.org/rapid-ohia-death-threatens-habitat-for-hawaiis-forest-birds/
Grigg, Richard. 2014. Archipelago, The Origin and Discovery of the Hawaiian Islands.
Handy, Handy and Pukui. Native Planters in Old Hawaii: Their Life, Lore, and Environment.
Hawaii Snail Extinction Prevention Program: http://dlnr.hawaii.gov/ecosystems/hip/sep/
Kanahele, George Hu'eu Sanford. 1986. Ku Kanaka, Standing Tall, A Search for Hawaiian Values. UH Press.
Kaua'i Forest Bird Recovery Project. 2016. Threats to Native Forest Birds.
Koberstein, Paul (June 16, 2014). GMO companies are dousing Hawaiian island with toxic pesticides. http://grist.org/business-technology/gmo-companies-are-dousing-hawaiian-island-with-toxic-pesticides/
Liebherr JK (2015) The Mecyclothorax beetles (Coleoptera, Carabidae, Moriomorphini) of Haleakalā, Maui: Keystone of a hyperdiverse Hawaiian radiation. ZooKeys 544: 1-407.
Moore, J.G., et al. (1989) Prodigious Submarine Landslides on the Hawaiian Ridge. Journal of Geophysical Research 94: 17,465-17,484.
NOAA. "Fish species unique to Hawaii dominate deep coral reefs of the Northwestern Hawaiian Islands." 3/13/14. http://sanctuaries.noaa.gov/news/press/2014/pr031314.html
O’Grady, Patrick, and Rob DeSalle. “Out of Hawaii: The Origin and Biogeography of the Genus Scaptomyza (Diptera: Drosophilidae).” Biology Letters 4.2 (2008): 195–199. PMC. Web. 1 Sept. 2016.
Olson, Steve. (2004) Evolution in Hawaii: A Supplement to 'Teaching About Evolution and the Nature of Science'. National Academy of Sciences. Chapter: An Adaptive Radiation Has Led to a Dramatic Diversification of the Drosophilids in Hawaii. https://www.nap.edu/read/10865/chapter/7
Pala, Christopher. 2015, August 23. Pesticides in paradise: Hawaii's spike in birth defects puts focus on GM crops. The Guardian. https://www.theguardian.com/us-news/2015/aug/23/hawaii-birth-defects-pesticides-gmo
Price, Jonathan P. "Floristic Biogeography of the Hawaiian Islands: Influences of Area, Environment and Paleogeography." Journal of Biogeography 31.3 (2004): 487-500. Web.
Schmitz, P. and Rubinoff, D. (2011), The Hawaiian amphibious caterpillar guild: new species of Hyposmocoma (Lepidoptera: Cosmopterigidae) confirm distinct aquatic invasions and complex speciation patterns. Zoological Journal of the Linnean Society, 162: 15–42.
USGS Hawaiian Volcano Observatory. 2003. Once a big island, Maui County now four small islands. http://hvo.wr.usgs.gov/volcanowatch/archive/2003/03_04_10.html
Wikipedia: Surtsey. https://en.wikipedia.org/wiki/Surtsey
Wilmshurst, Janet M. et al. 26 October 2015. "High-precision radiocarbon dating shows recent and rapid initial human colonization of East Polynesia", PNAS, vol. 108 no. 5, doi: 10.1073/pnas.1015876108
For those of us who started our relationships with birds tentatively and with the slightest tinge of fear accompanying awe, noting the beak on a macaw is an automatic first instinct. And why wouldn't we? They're truly amazing. The most conservative estimate would have macaw beaks exerting a pressure of over 500 pounds per square inch, easily crushing a Brazil nut. If that doesn't intrinsically reinforce being cautious, nothing will.
There are a lot of uses for beaks that come quickly to mind, in addition to crushing nuts or lunging at an overeager acquaintance to protect their boundaries. Fighting, foraging, killing prey, feeding their young, using tools, luring potential mates -- things we have come to expect. But perhaps we are missing a more delicate feature of beaks.
When comparing the native sparrows, finches, and jays that visit Pandemonium Aviaries to the enormous-beaked exotic birds that enamor visitors, the idea that birds from warmer climates have larger beaks than those from colder climates is obvious. But why? What accounts for the huge beaks of the toucans, for example, that one can see when taking a trip to South America?
One reason for these beak sizes may be that toucans have shown the ability to regulate their temperatures with their beaks. Considering that birds already operate at a higher metabolic rate than mammals, keeping cool is a very important process and these thermal windows are critical. Luckily enough, toucan beaks are richly lined with blood vessels. By being able to modify the blood flow to their beaks, they can control how they will radiate body heat. When the toucans overheat, blood rushes to their beaks; when the weather is colder, they restrict the flow. In infrared pictures, you can see the toucan's beak light up like an incandescent bulb when they get warmer than they would like. In fact, regulating blood flow in their beaks can account for 30% to 60% of their body's total heat loss, and it is estimated that toucans can lose as much as four times their resting heat production through their beaks.
While the beaks of tropical birds may register with us first and foremost for their power and strength, we must also recognize that they are even more complicated than we may have anticipated. And while the thermoregulation studies haven't quite panned out with macaws to the same extent as they have with toucans, I can't help but marvel at the macaw smiles at Pandemonium Aviaries when they take a break from dancing.
By Iva Petrovchich, Pandemonium Aviaries Intern
Last week, we asked whether astronomers could be wrong about dark matter, the invisible stuff that seems to help hold galaxies together. Is it possible that dark matter doesn’t really exist? This week, we’ll investigate whether there are viable alternatives to the idea of dark energy, the mysterious stuff that astrophysicists believe is pushing our universe apart.
In every direction we look, galaxies are hurtling away from us. That isn’t surprising in itself—after all, the Big Bang sent space and everything in it flying apart. One would expect that the gravitational pull of all the “stuff” in the cosmos would gradually slow down this expansion, bringing it to a dead stop or even collapsing everything back together in a “Big Crunch.” Yet instead, astronomers see that the galaxies in our universe are rushing apart faster and faster. What could be causing this acceleration? Physicists call it dark energy, and it could make up more than 70 percent of the cosmos. But so much remains unknown about dark energy that some scientists are asking whether it exists at all.
What if, instead of a mysterious unseen energy, “there is something wrong with gravity?” asks Sean Carroll, a theoretical physicist at the California Institute of Technology. Einstein’s theory of general relativity represents gravity as the curvature of space and time. Perhaps this idea “is still right, but we’re not solving the equations correctly,” suggests Carroll. “We’re used to thinking of the universe expanding perfectly smoothly, and we know it isn’t, and maybe these deviations are important.” If we accounted for how the universe is clumpy instead of smooth, it might turn out that the gravitational pull of clusters of galaxies and other large agglomerations of matter alters spacetime more than previously appreciated. Distant objects would thus appear to be farther away than they actually are, leading to the false conclusion that the universe’s expansion is accelerating.
The problem with this kind of model, Carroll says, is that while it suggests that these clumped-up astronomical bodies might distort our view of the universe more than suspected, gravity still remains the weakest of the known fundamental forces of nature. Also, these astronomical clumps would evolve in size and gravitational strength over time. In contrast, the mysteries that dark energy was invoked to solve require something with a lot of energy that changes less over time.
Another approach is to modify the laws of gravity to do away with dark energy. This tack suggests that “the laws of gravity as we know them work better on relatively small scales such as our solar system,” says Carroll, but perhaps they need “tweaks” to work on cosmic scales. Carroll and other theorists have developed alternative descriptions of gravity that could explain why the universe evolved as it did. One set of scenarios suggests that the strength of gravity increases over time and has different values depending on the distances involved. But critics argue that, to avoid contradicting well-established features of general relativity, these models are unacceptably contrived. Another family of alternative gravity models analyzes how gravity behaves if there are extra dimensions of reality, as suggested by string theory. But this approach has problems of its own: It leads to empty space “decaying” into particles in potentially detectable ways, Carroll says.
To avoid modifying gravity, some theorists have suggested that our galaxy and its neighborhood might lie within a giant void, an emptier-than-average region of space roughly 8 billion light years across. With so little matter to slow down its expansion, the void would expand faster than the rest of the universe. If we lived near the heart of this void, our observations of accelerating cosmic expansion would be an illusion. “The advantage of giant void models is that they don’t require any new physics to explain the apparent acceleration of the universe, like the existence of some weird dark energy or a modified theory of gravity,” says theoretical cosmologist Phil Bull at the University of Oxford. Still, “there are lots and lots of problems with void scenarios,” says theoretical physicist Malcolm Fairbairn at King’s College London. “It’s very difficult to get them to fit existing data—for instance, the cosmic microwave background (CMB) radiation usually gets distorted in these models compared to what we actually see.” For the void model to match observations of CMB radiation, we would need to be very close to the center of the void, to within one part in 100 million. That “seems like an unacceptable ‘fine-tuning’ to some people,” says Bull. “Why should we find ourselves so close to the center?” In addition, astronomers using NASA’s Hubble Space Telescope recently found evidence against the existence of such a void. After refining their measurements of the rate at which the universe is expanding, they all but ruled out the possibility that the accelerating expansion is an illusion created by a void. In addition, if we are living inside a void, Bull and his colleagues argue, we should see very strong fluctuations of cosmic microwave background radiation reflected off hot gas in the clusters of galaxies surrounding the void. Yet we do not see any reflections that strong. “This was pretty much the final nail in the coffin for void models,” Bull says. To support the existence of dark energy—or vindicate one of these alternatives—we need giant sky surveys which will clock the speeds of even more galaxies, Fairbairn says. The colorful scenarios that theorists are dreaming up “ultimately show what an interesting and weird universe we live in,” Carroll says. “It’s one where we must keep an open mind as to what the answers may be.” Editor’s picks for further reading COSMOS: Doubts Over Dark Energy Reexamining the evidence for dark energy. New York Times Magazine: Out There In this article, Richard Panek explores the evidence for dark energy. NPR’s 13.7: Dark Energy and the Joy of Being Wrong In this blog post, Adam Frank recounts the history of the discovery of dark energy.
The coulomb is a measure of electrical charge and is named after Charles-Augustin de Coulomb. Because every electron carries the same elementary charge, one coulomb corresponds to a definite number of electrons, but the coulomb itself is a unit of electric charge, not a dimensionless count. The coulomb was treated as the base unit of electrical measure until the International System of Units (SI), adopted in 1960, made the ampere the base electrical unit. The coulomb may easily be calculated from the current in an electrical circuit and the time that the circuit is closed. Define the coulomb as the amount of electrical charge that 1 ampere transports in a second. This may be expressed as 1 C = 1 A x 1 s and makes the coulomb equal to the charge of approximately 6.24 x 10^18 electrons. Examine an equivalent definition of the coulomb as the charge stored by a one farad capacitance at an electrical potential of one volt. This can be shown mathematically as 1 C = 1 F x 1 V. Use the definition of the coulomb to calculate coulombs from current and time: the charge Q in coulombs equals the current I in amperes multiplied by the time t in seconds, or Q = I x t. Express Coulomb's law. This is given as F = kq1q2/r^2 where F is the force between charges q1 and q2, k is Coulomb's constant (8.987 x 10^9 newton square meters per coulomb squared) and r is the distance separating q1 and q2. Solve for coulombs using Coulomb's law from the previous step with q1 and q2 equal. We have F = kq^2/r^2 => F/k = q^2/r^2 => q^2 = r^2 (F/k) => q = r (F/k)^(1/2).
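To make the two calculations described above concrete, here is a short Python sketch (an addition, not part of the original article) that implements Q = I x t and the rearranged Coulomb's-law formula q = r * sqrt(F / k). The example numbers are made up purely for illustration.

```python
import math

K = 8.987e9  # Coulomb's constant, N*m^2/C^2 (value quoted in the article)

def charge_from_current(current_amperes, time_seconds):
    """Q = I * t: charge in coulombs transported by a steady current."""
    return current_amperes * time_seconds

def equal_charge_from_force(force_newtons, separation_meters):
    """Invert F = k*q^2/r^2 for two equal charges: q = r * sqrt(F / k)."""
    return separation_meters * math.sqrt(force_newtons / K)

print(charge_from_current(2.0, 3.0))        # 6.0 C
print(equal_charge_from_force(1.0, 0.01))   # roughly 1.05e-7 C
```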
This page is a brief introduction to eigenvalue/eigenvector problems (don't worry if you haven't heard of the latter). Before reading this you should feel comfortable with basic matrix operations. If you are confident in your ability with this material, feel free to skip it. Note that there is no description of how the operations are done -- it is assumed that you are using a calculator that can handle matrices, or a program like MatLab. Also, this page typically only deals with the most general cases; there are likely to be special cases (for example, non-unique eigenvalues) that aren't covered at all. Many problems present themselves in terms of an eigenvalue problem:

A*v = λ*v

In this equation A is an n-by-n matrix, v is a non-zero n-by-1 vector and λ is a scalar (which may be either real or complex). Any value of λ for which this equation has a solution is known as an eigenvalue of the matrix A. It is sometimes also called the characteristic value. The vector, v, which corresponds to this value is called an eigenvector. The eigenvalue problem can be rewritten as

(A - λ*I)*v = 0

If v is non-zero, this equation will only have a solution if

det(A - λ*I) = 0

This equation is called the characteristic equation of A, and is an nth order polynomial in λ with n roots. These roots are called the eigenvalues of A. We will only deal with the case of n distinct roots, though they may be complex. For each eigenvalue there will be an eigenvector for which the eigenvalue equation is true. This is most easily demonstrated by example. If

A = [0 1; -2 -3]

then the characteristic equation is

det(A - λ*I) = λ² + 3λ + 2 = (λ + 1)(λ + 2) = 0

and the two eigenvalues are

λ1 = -1, λ2 = -2

All that's left is to find the two eigenvectors. Let's find the eigenvector, v1, associated with the eigenvalue, λ1=-1, first.

(A - λ1*I)*v1 = [1 1; -2 -2]*[v11; v12] = 0

so clearly from the top row of the equations we get

v11 + v12 = 0, or v12 = -v11

Note that if we took the second row we would get

-2*v11 - 2*v12 = 0, which gives the same relationship.

In either case we find that the first eigenvector is any 2 element column vector in which the two elements have equal magnitude and opposite sign:

v1 = k1*[1; -1]

where k1 is an arbitrary constant. Note that we didn't have to use +1 and -1, we could have used any two quantities of equal magnitude and opposite sign. Going through the same procedure for the second eigenvalue, λ2=-2:

(A - λ2*I)*v2 = [2 1; -2 -1]*[v21; v22] = 0, so 2*v21 + v22 = 0 and v2 = k2*[1; -2]

Again, the choice of +1 and -2 for the eigenvector was arbitrary; only their ratio is important. This is demonstrated in the MatLab code below.

>> A=[0 1;-2 -3]
A =
     0     1
    -2    -3
>> [v,d]=eig(A)
v =
    0.7071   -0.4472
   -0.7071    0.8944
d =
    -1     0
     0    -2

The eigenvalues are the diagonal of the "d" matrix. The eigenvectors are the columns of the "v" matrix. Note that MatLab chose different values for the eigenvectors than the ones we chose. However, the ratio of v1,1 to v1,2 and the ratio of v2,1 to v2,2 are the same as our solution; the chosen eigenvectors of a system are not unique, but the ratio of their elements is. (MatLab chooses the values such that the sum of the squares of the elements of each eigenvector equals unity).
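For readers working in Python rather than MatLab, the same example can be cross-checked with NumPy. This snippet is an addition to the original page; the scaling and sign NumPy picks for each eigenvector will generally differ from the hand calculation, but the ratio of the elements matches.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)            # e.g. [-1. -2.]
for i in range(eigenvalues.size):
    v = eigenvectors[:, i]
    print(v / v[0])           # normalized: [1. -1.] and [1. -2.], as derived above
```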
When Virginia was established as one of America’s original 13 colonies, few white people had ever seen the territory immediately to the west. It was difficult to get to, with the overland route cut off by mountains that included a nearly unbroken 125-mile-long ridge now known as Pine Mountain. But in 1750, an exploring party led by Thomas Walker found the Cumberland Gap through the mountains at Virginia’s southern border. Walker drew maps that helped guide a younger, more famous frontiersman—Daniel Boone. Soon he and other explorers who had gone through the gap or ventured down the Ohio River were returning with reports of a beautiful land with an incredible abundance of wild game, running water, fine timber, and fertile rolling hills. As the American colonies got more crowded, many people looked toward this new Kentucky territory for land and opportunity. French explorers had navigated and mapped the Mississippi River in the 17th century. At the time of Walker’s expedition, France claimed all the land between that river and the British colonies, along with a vast territory west of the Mississippi. But with no French settlements in the area, the main human activity in Kentucky at the time was hunting by Native Americans. The Cherokees, based in what is now Tennessee and North Carolina, had long known about and used the Cumberland Gap; the trail through it, which whites called the Wilderness Road, followed a game trail blazed by buffalo. From the north, the Shawnees, the Chickasaws, and other tribes made forays into Kentucky from villages in the present-day states of Ohio, Indiana, and Illinois. Native Americans typically had no conception of “owning” land in the European sense. Instead, rights to make use of resources were established by agreement, by custom, or by warfare. And rather than a European-style hierarchy, tribes tended to be organized into autonomous villages governed by councils of elders, no one of whom could speak for the whole village—much less the tribe. Some European pioneers attempted to “buy” land by dealing with a person they perceived as a tribal chief. But they soon learned that nothing had actually changed because the concept of buying and selling land and the pioneer’s assumptions about the authority of the chief were both meaningless to the tribe. Disputes between France and Great Britain over their territorial claims in America led to the Seven Years’ War of 1754-1763. (This war was fought in several theaters; the North American component is also known in America as the French and Indian War.) With the white men at war, many Native Americans allied themselves with the French in hopes of driving the more numerous British settlers from their homeland. At the war’s end, France ceded all its territory east of the Mississippi River to England, thus settling the dispute as far as those two nations were concerned—but not, of course, in the eyes of the Natives, who continued to attack forts and settlements. In 1763, the British government issued a proclamation forbidding its colonists from crossing the Alleghenies. But the lure of the new territories was too strong, and pioneers continued to move west, establishing their first permanent settlement in Kentucky in 1774. As the natives fought back, violence and atrocities mounted on both sides. These “Indian Wars” in the Ohio Valley would last for several more decades, even as America won its independence and Kentucky became the first state west of the Alleghenies in 1792. 
In the early 1800s, the Shawnee leader Tecumseh inspired many Native American warriors to join a united effort against the Americans, and they fought alongside the British in the War of 1812. After Tecumseh’s death at the Battle of the Thames in 1813, the alliance fell apart, and many of the remaining Natives left the Ohio Valley and the Southeast and moved farther west seeking new homes. Beginning in the 1830s, the U.S. government removed others via forced marches, including the Cherokees’ Trail of Tears.
Cherokee Census of 1835 A census of the Eastern Cherokees, sometimes called the Henderson Roll, was taken by the Federal Government in 1835. In total, it enumerated 16,542 Cherokees living in the states of North Carolina, Georgia, Alabama and Tennessee. The following table summarizes the results of that census: Source: The Cherokees - A Population History, by Russell Thornton (published 1990) The census of 1835 also enumerated the Indians as to whether each was a fullblood, halfblood or quarterblood Cherokee. The Federal Government regarded a person as an Indian only if he/she was of one-quarter Indian blood or more. Most of the Cherokees were listed as being fullbloods, yet most of the Cherokees who held tribal leadership positions were of mixed blood. This fact would cause much stress and lead to sporadic outbreaks of internal violence throughout the time period covered by this website (1819-1880). The family groups selected for analysis at this website were all mixed blood groups and, although above average in socio-economic success, they were also more likely to suffer acts of violence from other Cherokees. The breakdown of the Cherokee portion of the census by blood category is as follows: This page is under construction!
There are many types of friction welding methods that can optimize your manufacturing process. In this article, we will review several different types. Understanding these different types will help you decide which can increase precision and reduce total cost and cycle time for your application. Talk to Pierce about designing and manufacturing more effective industrial rollers. What Is Friction Welding? Before we break down the different types, let’s define the solid state welding process known as friction welding. Solid state welding refers to welding processes that don’t use external heat. Instead, external pressure is applied while the materials remain in a solid state to form the weld. In friction welding, the workpieces to be joined rotate relative to each other. This movement creates friction, which heats the materials at the contact surfaces. A high pressure force is applied until the welding cycle is complete. Friction welding can be used to join a variety of metal (such as steel and aluminum) bars and tubes exceeding 100 mm in diameter. How Friction Welding Works Friction welding works by following the fundamentals of friction. The process uses friction to create a plastic-forming heat at the weld interface. For example, the friction heat created on steel is usually around 900–1,300 degrees Celsius. After the appropriate temperature is achieved, an external pressure force is increasingly applied until the workpieces form a permanent weld joint. While there are several different friction welding types, they all follow a common working principle. First, one workpiece is placed in a rotor-driven chuck, while the other is held stationary. The rotor allows the mounted workpiece to rotate at high speeds. A pressure force is applied to the stationary workpiece, bringing it into contact with the rotating workpiece. When the workpieces touch, a high friction force is created and generates significant heat on the surfaces in contact until the two materials soften, also referred to as plasticizing. Once the materials reach a plasticized state, a higher forging pressure is applied to the static piece, forcing the two materials to meld together. After the parts meld together and the interface begins to cool, the rotor stops once the temperature reduces and the materials re-solidify. The forging pressure is maintained for a few seconds and then released, at which time the weld is completed. 5 Friction Welding Types 1. Inertia Friction Welding What is inertia friction welding? Inertia friction welding features different sized flywheels that are attached to the chuck and spindle shaft. A motor is connected to the spindle shaft to rotate the part. At the start of the welding cycle, the motor is connected to the spindle shaft and rotates the part to the desired rotational speed. Once the desired speed is achieved, the motor is disconnected from the spindle shaft. Based on the weight of the part, spindle shaft, chuck, and flywheels, a rotational inertia is created by the free spinning components. At this point the friction welding process described above takes place, utilizing the rotational inertia to create frictional heat when the parts are brought together. Learn why you should combine inertia welding with CNC machining. 2. Direct Drive Friction Welding In this process, the spindle drive motor is permanently attached to the spindle shaft. The motor continues to drive the rotating part as the two pieces are brought together, thus creating the frictional heat. 
Based on a defined CNC program, the spindle is continuously slowed as the welding process takes place, stopping the spindle at a pre-determined point. This type of friction welding is beneficial when a specific orientation between the welded components is desired. 3. Linear Friction Welding This process is similar to inertia friction welding; however, the moving chuck doesn’t spin. Instead, it oscillates in a lateral motion. The two workpieces are held under pressure throughout the entire process. This process requires the workpieces to feature a high shear strength and involves more complicated machinery than inertia welding. One benefit of this method is that it offers the capability to join parts of any shape (instead of just circular interfaces). 4. Friction Stir Welding (FSW) FSW is a solid-state joining process that uses a non-consumable tool to join two facing workpieces. Heat is generated by friction between the rotating tool and the workpiece material, which leads to a softened region at the interface. While the tool is traversed along the joint line, it mechanically intermixes the softened material of the two pieces of metal, and forges the weld interface through mechanical pressure applied by the tool. FSW is used in modern shipbuilding, trains, and aerospace applications. 5. Orbital Friction Welding Orbital friction welding is similar to rotary friction welding, except that both of the welded parts are rotated in the same direction and at the same speed, with their axes offset by up to 1/8”. As the weld cycle is completed and the rotation is slowed, the parts are returned to the same axis, and the forging pressure is maintained while the materials re-solidify. Friction welding can be used to build better industrial rollers, tubes, and shafts. The process is often used to manufacture these subassemblies for industrial printers, material handling equipment, as well as automotive, aerospace, marine, and oil applications. Other examples of components include gears, axle tubes, drivelines, valves, hydraulic piston rods, truck roller bushes, pump shafts, drill bits, connection rods, etc. Friction welding is an eco-friendly process that doesn’t create smoke or release other harmful toxins into the atmosphere. Next, it offers a lot of control over the heat-affected zone, which reduces change to material properties. It also doesn’t require filler metal (which saves cost on raw material). Last, friction welding offers simple automation, fast speeds, efficient welds, and the ability to combine a variety of metals. Ready to Start Friction Welding? If friction welding sounds like it could benefit your application, please contact us. During our conversation we can discuss which type of friction welding might be your best option and if we are the best-fit provider to inertia weld your subassemblies.
HearandPlay.com February 2006 Newsletter Serving 205,000+ Musicians III. Online Classroom: "How to correctly identify intervals! Part 1" Welcome to my February newsletter! In this month's classroom lesson, we're going to study intervals and how to correctly identify them. I will follow up with this series next month (March) and perhaps the following. Believe it or not, musical intervals are commonly mispronounced and misidentified among all musicians, even the advanced. For example, if you hear someone incorrectly say, "play a major third... that is C# to F," they may not fluently understand intervals and how to correctly name tones. Now, don't get me wrong. Sometimes, when playing by ear, the temptation to just say 'Db in the place of C#' or 'Bb in the place of Ab' is great. I can even admit to not paying close attention to intervals at one point or another. Now back to the lesson. As you'll soon learn, C# to F would not be a major third even though it creates the same sound as a major third. Yes, two notes played harmonically (together) can create the same sound as an interval you're used to hearing, but depending on how you name them, they can be a TOTALLY DIFFERENT INTERVAL! C# to F is a fourth (generically) and a diminished fourth (specifically) as you'll learn. If you don't understand what I'm talking about right now, that's great because it means that you'll learn a lot below. Since this lesson may seem like it's re-teaching you the way you name chords and intervals, I understand that many questions may result. Simply visit my message board and I'll be sure to answer your question right away! ------------------------------------------------------------------------ Online Classroom: "How to correctly identify intervals! Part 1" ------------------------------------------------------------------------ Note: You might want to print this lesson out for easier reading... I've seen this subject taught by many people. Sometimes, it gets confusing for the starter. Sometimes, it makes perfect sense. As always, it is my goal to break down this concept so clearly that EVERYONE will be able to understand it with minimal questions. First, let's define the term "interval." What is an interval in music? It's simple. A music interval is the relationship between two notes (...basically, the distance between notes). There are two main types of intervals. Melodic intervals (also known as "linear intervals") and harmonic intervals (also known as "vertical intervals"). A melodic interval is the distance between two notes played separately, one after the other. If I play a C, then an E, then an F, these would be melodic intervals because I'm playing each note separately, one after the other. If melodic intervals describe the relationship between two notes played successively, then harmonic intervals must describe the relationship between two notes played simultaneously, or at the same time. So, to recap: Melodic = the distance between notes played separately Harmonic = the distance between notes played at the same time The rules I'm going to show you apply BOTH to melodic and harmonic intervals. I just thought it'd be beneficial to cover the "basics" before teaching you the rules of the game. Moving on... You already know that the musical alphabet borrows from the first seven letters of the English alphabet - A, B, C, D, E, F, G Regardless of the type (melodic or harmonic), there are two ways to name intervals: generic and specific. We will cover generic now and specific next month. 
When you think in terms of generic intervals, you are not concerned with sharps and flats. In fact, when counting generic intervals, you totally ignore sharps and flats and simply use the alphabet (the note names). REMEMBER: The correct name of an interval depends on the names given by its two notes. This will be important later, as you'll learn. It's simple. Starting with any letter of the alphabet (which will be considered the "lower" note of the interval), simply count up each letter until you reach the "higher" note. Now, you'll need to include the first letter in your count as well as the last letter. Also keep in mind that after "G", you start back over with "A" as you'd normally see on a regular piano. So, if I wanted to figure out the interval between A and C, I'd simply count the letters of the alphabet from A to C, including both the starting letter and the ending letter in my count. A is 1 B is 2 C is 3 This means that the interval from "A" to "C" is a third. (Now, if you already understand a little bit about intervals, don't be confused. I haven't specified whether it is a major third or a minor third. When talking generic intervals, we are not concerned with major, minor, perfect, augmented, or any of that right now. We are simply concerned with what type of interval it is. This is the key to CORRECTLY identifying intervals). Now, since it takes 3 alphabet letters to make up this A-C interval, it would be incorrect to label this a second... or to label this a fourth. Believe it or not, many people do this EVERY DAY! Real-life examples may not be as simple as the demonstration above (from A to C) but if you've ever called F# to Bb a major third or even the beginning of a major chord, you've incorrectly labeled intervals and chords before! Don't worry, I'm the first to admit I have! Now, let's go with my example above (F# to Bb). First of all, because we're currently dealing with the GENERIC interval, we'd totally drop any sharps or flats. We don't need them. If we can't determine the UNDERLYING interval, how can we correctly label the specific interval (which you'll learn later)? So, let's count the alphabet letters: F is 1 G is 2 A is 3 B is 4. So from F# to Bb is certainly a fourth. Later on, we'll determine specifically what kind of fourth it is. If you're familiar with major chords, you know that FOURTHS don't make up major chords. A major chord is built on a major third interval and a perfect fifth interval. In other words, from C to E is a major third and from C to G is a perfect fifth. Get rid of the duplicate C and you have: C + E + G. This is the C major chord, of course. Basically, what I'm saying is that it would be impossible to form a major chord with F# and Bb because as we've just determined, this interval is a FOURTH. Just based on generic intervals, how then can we correct this problem? How can we make F# to Bb a major third, which can then be correctly used in forming the famous "major chord?" It's simple. Just change one of the notes. Either conform the bottom note to the top note or the top note to the bottom. Right now, there can't be any KIND of F and any KIND of B together or you'll always get a fourth. So, let's transform F#-Bb into a third interval. OPTION #1: Keep the F# and change Bb to A#. Now we have F# and A#. This creates the same exact sound we're looking for in the major chord and is now labeled correctly. But let's count it to make sure this is a generic third interval. 
Remember, in counting generic intervals, it is not necessary to worry about sharps and flats. You are ONLY dealing with alphabet letters. F is 1 G is 2 A is 3 So F# to A# is now confirmed as a third interval. Later on, we'll determine whether this is a major third, a minor third, or otherwise. This is what we call specific intervals. Right now, we're still in the generic! OPTION #2: Keep the Bb and change the F#. Now we have Gb instead of F# (remember, Gb and F# both make the same sound so nothing is changed about what you hear). They are enharmonic. Uh ohh... new term. Enharmonic just simply means two notes that are equivalent to each other but have different names. C# and Db are enharmonic. To make it even simpler... you'd say "four" and "for" and even "fore" the same way, right? But you spell them differently. They are NOT the same. If you use one for the other, even though they sound the same, you may steer a conversation in a whole different direction. What if I wrote a note to someone saying, "I'll need you for today." That means, I will be needing your assistance today. What if I wrote to the same person, "I'll need you four today," that means something totally different. The person will say, "what four... I don't have three other people to help, just myself." The point is: In music, these things are important. If you use a Gb when you're supposed to say F#, then you could be calling a chord or interval something that it's not. Back to work: If you change F# to Gb and keep the Bb, you have: Gb and Bb Let's confirm that this is, in fact, a third interval: Drop the flats and sharps. Not needed. G is 1 A is 2 B is 3 It confirms. So F# to A# is a third and Gb to Bb is a third. Do you see where I'm going with this? All this stuff is vital. Let's do one more and I'll give you a chart that'll summarize all generic intervals. What is the name of the interval that describes E to D? ___________________________ Answer: Let's count. E is 1 F is 2 G is 3 A is 4 B is 5 C is 6 D is 7 E to D is a seventh. What specific kind of seventh? We'll find out later. But for now, just know that understanding GENERIC INTERVALS is the key to correctly identifying specific intervals. Since the generic name of an interval is not concerned with flats and sharps, you can pretty much say: From some kind of E to some kind of D is a seventh interval. It could be E to D. It could be Eb to D. It could be E to Db. It could be Eb to Db. These are all sevenths, generically. Later on, we'll learn how to actually count the number of half steps in between the interval. This will tell us SPECIFICALLY what kind of interval (like major seventh, minor seventh, augmented seventh, etc). Here's a chart that'll make your understanding of this a whole lot easier: Explore these chord types to prepare for future newsletters: Well, I hope you enjoyed this newsletter and I'll be back soon! Take care! This concludes your Online Classroom Lesson. If you were intrigued by the online classroom lesson above, then you would definitely benefit from my course! *** “The Secrets to Playing Piano By Ear” 300-pg Course *** With 20 chapters and over 300 pages, the home piano course provides several resources, techniques, tips, principles, and theories for playing the piano by ear. Along with hundreds of chords and scales, you'll also learn how to turn them into gospel, jazz and blues chord progressions and better yet, how to use them to play ABSOLUTELY any song you want ... IN VIRTUALLY MINUTES! Again, don't miss this opportunity. 
I've even added an additional bonus if you purchase the course this week --- You can read more about the course at: http://www.homepianocourse.com Enjoy this edition? Visit our message board and let us know! https://www.hearandplay.com/board Please let a friend know about HearandPlay.com! PLEASE FORWARD THIS NEWSLETTER TO YOUR ENTIRE E-MAIL ADDRESS BOOK. Yours Truly, Jermaine Griggs www.HearandPlay.com www.GospelKeys.com
Dogs and other animals use their sense of smell to guide them toward desirable places and to steer them away from the places they need to avoid. Professor Jay Gottfried, a neurobiologist at the University of Pennsylvania, set out to investigate whether humans also rely on odor information to help them navigate through the world. “Each of our five senses plays a unique role, but smell seems to be treated like the black sheep of the family,” said Professor Gottfried. “Obviously, you don’t need your sense of smell to take a test or drive a car, but it has a major impact on our quality of life.” After studying the science of smell for more than 15 years, Professor Gottfried’s research is now focused on olfactory spatial navigation. He designed an experiment using various combinations of pine and banana scents to build a two-dimensional grid, or a “smellscape.” Participants moved between grid points that represented a “start” and an “end” based on the odor mixtures they smelled. The study revealed that, as the individuals navigated, their brain activity created a pattern with hexagonal symmetry that was acting as an actual olfactory map. This pattern resembled grid-like mapping structures that have been previously shown to assist animals in other types of spatial navigation. “Several exciting papers have revealed that using functional imaging techniques, you can find proxies of this grid-like architecture in the human brain,” said Professor Gottfried. “What we did in this study is bring together conceptual ideas about odor navigation with grid-cell models, then used a set of smells to define a two-dimensional space.” Unlike in a real-world setting, the volunteers in this experiment were not actively, physically moving through space. Movement involved mental navigation between two odor coordinates in the smellscape. However, the study design turned out to have something else in common with real-world scenarios. “Odor intensity increases with proximity to the smell’s source,” said Professor Gottfried. “For example, as you get closer to your favorite donut shop, the stronger the donut smell becomes. In this way, the odor space we created, where banana and pine smells go from strong to weak, suggests that this design roughly captures what a person might naturally encounter.” In an effort to gain a greater understanding of smell, Professor Gottfried plans to alter his experimental set-up and use a virtual reality computer game in which participants will locate a specific scent as they move through an odor-filled arena. “Imagine everyone gathering around a Thanksgiving table and one person can’t smell the food. They can’t really engage in that conversation or feel connected to the shared experience of the meal. There are a lot of examples like that. The sense of smell serves a very unique purpose and confers one-of-a-kind behavioral advantages that other senses can’t provide.” The study is published in the journal Neuron.
Some scientific phenomena in the ocean or atmosphere, such as wind and current, are normally measured with a pair of variables. The pair can be magnitude (speed) and direction, where magnitude measures how fast the wind or water flows and direction gives the direction of the flow. Alternatively, the pair can be measured as U and V components, where U is the velocity toward the east and V is the velocity toward the north. This blog describes the capabilities for visualizing wind and ocean current data in ArcGIS, as well as the workflow for preparing layers for visualization. To visualize wind or water flow, it is common to use vector symbols, with symbol size or color representing speed and symbol angle representing flow direction. So how can you visualize the two rasters, either magnitude and direction variables, or U and V components? ArcGIS has a vector field data type and special renderers for this purpose. Visualize wind data in ArcGIS Pro The map displays wind data on top of a temperature layer for the continental United States. The layer is composed of U and V components of wind, and displayed using the wind barb symbol of the Vector Field renderer. More vector symbol types are supported; see this help topic for more information on how to work with the Vector Field renderer in ArcGIS Pro. Visualize an imagery layer of ocean current data using Map Viewer This imagery layer in ArcGIS Image for ArcGIS Online contains magnitude and direction variables of ocean currents. The map displays using the Vector Symbol renderer in Map Viewer, in which arrows point to the flow direction and color indicates the speed. Since the imagery layer is multidimensional, the time slider can be used to animate wind flow through time. Visualize an imagery layer of wind using Animated Flow Renderer How to prepare data for visualization Wind and ocean current data is often stored in netCDF, GRIB, or HDF format, either as a single file or multiple files. There are a few ways to prepare a layer with the vector field from the data, depending on which product you use. The key is the Vector Field raster function, which composes the two variables that describe speed and direction into a two-band raster with a vector field type. The type is Vector-MagDir if the two variables represent magnitude and direction, or Vector-UV if the two variables represent U and V components. The recommended workflows for preparing data in ArcGIS Pro, ArcGIS Image for ArcGIS Online and ArcGIS Enterprise are outlined below. Prepare data using ArcGIS Pro You can create a raster layer of the vector field and visualize it in ArcGIS Pro, or prepare a dataset with the vector field, and publish a vector field-ready imagery layer from it in ArcGIS Online and ArcGIS Enterprise. Workflow 1: Create a vector field layer directly from the data You can use the following workflow if the two variables are in one multidimensional raster file, such as a netCDF, HDF, GRIB, or CRF file: - Click the Add Data button and choose the Add Multidimensional Raster Layer tool. - Check the check boxes for two variables that represent magnitude and direction (or U and V components). - Choose an option for Output Configuration. In this example, it is Vector Field (Magnitude-Direction), as indicated above. Define the corresponding variable type, as shown below, and click OK to create the layer. A layer with Vector-MagDir type will be added to the map and it will be displayed using the vector field renderer by default. 
You can also save the data to a vector field-ready Cloud Raster Format (CRF) using the Copy Raster tool. Note: This layer is a function raster layer containing a Vector Field raster function. You can also add the two variables to the map and create a layer using the Vector Field raster function. If the two variables are stored in two files, you can also use this workflow, adding the two variables as two separate raster layers to create a layer using the Vector Field raster function. Workflow 2: Create a dataset of vector field type from multiple files When your data is stored in multiple files, you can use a mosaic dataset and a vector field processing template to create a vector field-ready dataset. Here, we use NetCDF files as an example to outline the workflow: - Create a mosaic dataset using the Create Mosaic Dataset geoprocessing tool. - Open the Add Rasters to Mosaic Dataset tool and for Raster Type, choose NetCDF. If your data is in other formats, you can choose an appropriate raster type here. - Set the input path containing your data and click the Raster Type button. The Raster Type Properties page appears. - On the page, click Variables in the left panel and select two variables in the right panel, then define the type for each variable; in this example, it is U and V. Next, click Processing in the left panel and choose Vector Field. - Click OK to add data, and a mosaic dataset will be created with a Vector-UV type. - For better display performance, you can convert the mosaic dataset to CRF format using the Copy Raster tool, and this output CRF will have a Vector-UV raster type. Create a vector field imagery layer in ArcGIS Online There are two ways to create a vector field imagery layer in ArcGIS Online: One is to first create an imagery layer from the source files containing the two variables, then create a vector field imagery layer by composing a function template and running raster analytics. The created imagery layer will have either a Vector-UV or Vector-MagDir type. This workflow is entirely web based and requires that you have image analysis privileges. The other method is to use ArcGIS Pro to create a vector field dataset, such as CRF, from which to create a vector field imagery layer directly in ArcGIS Online. I recommend you use this method if you have ArcGIS Pro, because it will upload only the two required variables to ArcGIS Online, which saves storage, and you can publish it directly without having to run raster analytics, which requires image analysis privileges. Workflow 3: Create a vector field imagery layer from source files You can create a vector field imagery layer directly from the source files, such as data in netCDF, HDF, GRIB, or CRF format. Before you start, make sure your account has the ArcGIS Image for ArcGIS Online user type extension license with hosted imagery publishing and image analysis privileges. Refer to this help topic for information about privileges to publish a hosted imagery layer, as well as the choice of tiled or dynamic imagery layers. To create a multidimensional imagery layer, complete the following steps: - Click the My Content tab and click the New item button. - Choose the Imagery layer option to open the Create Imagery Layers wizard. - Choose the Tiled Imagery Layer option. Both tiled and dynamic imagery layers are supported for visualizing vector field data. - For the layer configuration option, choose One Image if your data is stored in a single file, or choose One Mosaicked Image if your data is stored in multiple files. 
- In the Select input imagery box, click Browse or drag your files or the whole CRF folder into the box. - Click Next to define your layer name, tags, summary, and folder to save. - Choose Create to start layer creation. It will create a single imagery layer with all the variables available, including the two variables needed for the Vector Field raster function in the next step. Note: If the data is stored in two separate raster files, you can create two layers by going through the workflow steps above twice. To create a vector field imagery layer, complete the following steps: This will take the two variables in the imagery layer in the previous step, and create a vector field imagery layer using the Vector Field raster function. Use Map Viewer Classic if the Raster Function Editor is not yet available in the standard Map Viewer. 1. Open the multidimensional imagery layer created above in Map Viewer Classic and click the Raster Function Editor button. 2. In the Raster Function Editor, search for the Multidimensional Filter function and Vector Field function in the System category and create a function chain in which the two Multidimensional Filter functions feed into the Vector Field function. You can save the raster function template, or directly bring it to the Raster Analysis pane without saving. Click OK to continue and prepare for raster analysis. Note: If U and V (or Magnitude and Direction) are created in two separate imagery layers, the Match Variables option in Multidimensional Rules needs to be unchecked. Multidimensional Rules are located in Edit Properties, the first button at the top-right corner of the Raster Function Editor. 3. Define the input parameters for the template. a. In the first Multidimensional Filter function, select the first variable, for U or magnitude. b. In the second Multidimensional Filter function, select the second variable, for V or direction. c. Choose an option for Input Data Type from the combo box. It is either U-V or Magnitude-Direction; make sure the type matches the selected variables. Then define the Output Data Type to be Magnitude-Direction. d. Provide an output name and click Run Analysis to create the imagery layer. The output will be a vector field-ready imagery layer. Workflow 4: Create an imagery layer from a vector field-ready CRF Sometimes you may have a CRF with a vector field type created in ArcGIS Pro. You can create an imagery layer from this CRF using the first portion of workflow 3 above, described in the ArcGIS Online workflow section. The layer created will be vector field ready. Publish an imagery layer for ArcGIS Enterprise To publish a vector field imagery layer for ArcGIS Enterprise, you can either publish from your portal or publish using ArcGIS Pro. Workflow 5: Publish from your portal To publish from your portal, first make sure your ArcGIS Enterprise has an image server configured as a hosting server, then create an imagery layer from your data in your local folder or in a user-managed data store. The workflow is the same as the workflows described in the ArcGIS Online workflow section. You can also learn more about publishing hosted imagery layers here. Workflow 6: Publish an imagery layer from ArcGIS Pro To publish an imagery layer from ArcGIS Pro, you will need to create a mosaic dataset or CRF file from the data using the same workflow 2 described in the ArcGIS Pro workflow section. When the mosaic dataset is created, right-click the dataset in the Catalog pane and choose Share as Web Layer to publish. 
See Publish an image service from ArcGIS Pro for more information. With these workflows, you can create informative maps to visualize your vector field data by working in ArcGIS Pro, publishing a service in your organization’s portal, or creating an imagery layer in ArcGIS Online. Have fun creating your maps!
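As a small, non-ArcGIS illustration of what the Vector Field raster function does conceptually, the NumPy sketch below combines U and V component grids into the magnitude and direction pair that a Vector-MagDir layer stores. This snippet is an addition to the post; the variable names, the tiny example grid, and the "clockwise from north" direction convention are illustrative assumptions, not the exact implementation ArcGIS uses, so check the product documentation for its conventions.

```python
import numpy as np

def uv_to_magdir(u, v):
    """Combine eastward (U) and northward (V) components into speed and direction."""
    magnitude = np.hypot(u, v)                         # flow speed
    direction = np.degrees(np.arctan2(u, v)) % 360     # 0 deg = north, clockwise
    return magnitude, direction

# Tiny 2x2 example grids of U and V velocities (m/s), purely for illustration.
u = np.array([[1.0, 0.0], [-1.0, 1.0]])
v = np.array([[0.0, 1.0], [0.0, 1.0]])

mag, direc = uv_to_magdir(u, v)
print(mag)    # [[1.    1.   ] [1.    1.414]]
print(direc)  # [[ 90.   0.] [270.  45.]]  (east, north, west, northeast)
```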
How to Create a Breaker Box Diagram Template The breaker box diagram template is an essential tool for any home or business that requires the use of breakers. It allows users to create an easy-to-follow diagram of the electrical system, ensuring that all connections are properly set up and maintained. In order to understand the importance of using a breaker box diagram template, it is first important to understand what a breaker box is and how it works. A breaker box is an electrical control panel located in the interior of a house or commercial building. It houses the circuit breakers, which act as miniature switches that control the amount of electricity flowing through each branch of the electrical system. The circuit breakers cut off power when the current exceeds a safe level, protecting people and property from potential electrical hazards. When constructing a breaker box diagram, it is important to ensure that all necessary components are included and placed in their correct positions. Understanding the Components of a Breaker Box Before beginning to construct a Breaker Box Diagram Template, it is important to have an understanding of the components that can be found in a typical breaker box. A breaker box typically contains several parts, including: - Main circuit breakers, which protect the entire electrical system from power surges. - Sub-circuit breakers, which protect specific circuits in the system. - Ground fault circuit interrupter (GFCI), which is a type of circuit breaker designed to protect people from shock. - Arc fault circuit interrupter (AFCI), which is a type of circuit breaker designed to detect and reduce the risk of fire. - Switchgear, which acts as a switch to turn the power on or off in the system. - Circuit identification labels, which clearly identify which circuit breaker is associated with each part of the electrical system. Creating a Breaker Box Diagram Template Once you have a full understanding of the components of a breaker box, it is important to begin designing a breaker box diagram template. The best way to do this is to have the electrical system in front of you and draw the schematic by hand. You can then use the diagram as a reference or modify it for your own unique needs. To make the process easier, several tips and techniques can be employed. - Start with a blank sheet of paper or graph paper and draw a simple rectangle representing the breaker box. Label each corner with the correct components. - Draw each breaker as a small circle, and label it with the appropriate information. - Draw in the wiring diagram, marking each wire and connection point. - Label the main circuit breaker, sub-circuit breakers, GFCI, AFCI, and switchgear. - Include all applicable circuit identification labels. Creating a breaker box diagram template can be a complex process, but following these steps can help simplify it. With a clear and organised template, you will have a comprehensive overview of your electrical system, ensuring that all components are correctly wired and functioning properly. 
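If you prefer to start the labeling step from a printed form rather than a freehand sketch, here is a tiny Python helper (an addition to this article, with a made-up layout) that prints a blank breaker-directory table you can fill in while walking the panel.

```python
# Illustrative only: print a blank breaker-directory template to fill in by hand.
def print_panel_template(num_circuits: int = 20) -> None:
    print(f"{'Circuit':<8}{'Breaker (A)':<12}{'Serves / Label':<30}")
    print("-" * 50)
    for circuit in range(1, num_circuits + 1):
        print(f"{circuit:<8}{'____':<12}{'_' * 28:<30}")

print_panel_template(8)  # adjust to the number of breakers in your panel
```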
What Does Being Colorblind Mean? Just because someone is colorblind, it doesn’t mean they see the world like a black-and-white movie. That type of color blindness does exist and is called monochromacy, but it is by far the rarest form of color blindness. There are different ways it can happen, just like there are different ways the more common types of color blindness happen. How Our Genes Affect the Colors We See The main cause of color blindness is a recessive gene on the X chromosome. This is why men are far more likely to be colorblind than women. As long as a woman has one copy of the gene for normal color vision, she’ll have normal color vision even if she carries the color blindness gene, but her kids could end up colorblind or as carriers themselves. Men, on the other hand, either have the gene on their one X chromosome and are colorblind or they don’t have it and have normal color vision. The daughters of colorblind men will always be at least carriers of the gene, unless they get the colorblind gene from their mothers, in which case they will also be colorblind. How Color Vision Works Our vision comes from specialized cells in our retinas called rods and cones, and the cones are the ones responsible for seeing in sharp detail and color. In someone with normal color vision, the cones absorb light at three different wavelength ranges: short (blue), medium (green), and long (red), and they work together so that we can see all the colors in between. It’s a bit like the way old TVs worked, with pixels divided into red, green, and blue stripes. Color Blindness Comes in Many Forms There are a lot of different ways color vision can go wrong. The most common is red-green color blindness, which could be because the red cones aren’t working properly (protanomaly) or the green cones aren’t working properly (deuteranomaly). Either way, the outcome is a landscape of dull, brownish-yellow colors. 8% of all men and 0.5% of all women have red-green color blindness. A rarer form of color blindness is blue-yellow color blindness (tritanopia), which happens when the blue cones aren’t working. The result is a palette of teals, pinks, and browns. Only 5% of colorblind people are blue-yellow colorblind. Even rarer, as we mentioned above, is monochromacy. It could be because none of the cones work, only one type of cone works, or there’s a problem with the way the visual cortex processes images. Beyond seeing in black and white, monochromacy often comes with symptoms like severe light sensitivity, involuntary eye movements, and weak central vision. Some Cases of Color Blindness are Treatable Even when it has the same result, color blindness doesn’t always work the same way. Being dichromatic means you are completely missing one of the three cone types, which can’t be treated. However, being an anomalous trichromat means that you have all the cones, but some of them respond to a wider range of wavelengths than they should. If the overlap isn’t too great, this kind is actually treatable. If you’ve ever seen videos of colorblind people trying on special glasses and becoming very emotional as they take in colors they’ve never been able to see before, you’ve seen the treatment for anomalous trichromacy. These glasses work by blocking wavelengths between what two different types of cones are supposed to see, which increases contrast and allows them to see these new colors. When Was Your Last Eye Appointment? Being colorblind can make some everyday tasks much harder, but there are many resources available to help. 
An important early step is to diagnose the type and severity of the color blindness. If you think you might be color blind, schedule an appointment with us!
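To make the X-linked inheritance pattern described in the genetics section above concrete, here is a small Python sketch (an addition, not from the original article) that enumerates the possible children of a carrier mother and a father with normal color vision under the simple recessive model; the genotype labels are illustrative.

```python
# Simple X-linked recessive model: "Xc" = X chromosome carrying the
# color-blindness gene, "X" = typical X chromosome, "Y" = Y chromosome.
from itertools import product

mother = ["X", "Xc"]   # carrier mother
father = ["X", "Y"]    # father with normal color vision

for child in product(mother, father):
    genotype = "".join(sorted(child))
    if "Y" in child:
        status = "colorblind son" if "Xc" in child else "unaffected son"
    else:
        status = "carrier daughter" if "Xc" in child else "non-carrier daughter"
    print(genotype, "->", status)

# Under this model, half of the sons are expected to be colorblind and
# half of the daughters are expected to be carriers.
```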
In today’s complex and rapidly changing world, critical thinking skills are essential for students to navigate challenges and make informed decisions. As educators, it is our responsibility to equip students with the ability to think critically and analytically. In this post, we will explore effective strategies that educators can employ to teach and develop critical thinking skills in the classroom. - Encourage Questions and Inquiry: Create a classroom environment that values curiosity and encourages students to ask questions. Encourage students to explore multiple perspectives, challenge assumptions, and seek evidence to support their ideas. By fostering a climate of inquiry, educators can stimulate critical thinking and empower students to think beyond the surface level. - Teach Problem-Solving Techniques: Teach students problem-solving techniques, such as identifying the problem, brainstorming solutions, evaluating potential outcomes, and selecting the best course of action. Provide opportunities for students to engage in real-life problem-solving scenarios that require them to analyze information, think creatively, and make reasoned decisions. Scaffold the process initially and gradually allow students the independence to solve problems on their own. - Practice Analysis and Evaluation: Teach students how to analyze and evaluate information critically. Facilitate discussions and debates that require students to examine evidence, identify biases, assess credibility, and draw logical conclusions. Provide examples of flawed arguments or misleading information, and guide students in deconstructing them. Encourage students to justify their reasoning and support their ideas with evidence. - Promote Metacognition: Introduce metacognitive strategies that enable students to think about their thinking. Teach students how to reflect on their own reasoning processes, identify any biases or assumptions they may have, and consider alternative viewpoints. Encourage the use of self-questioning techniques like “How did I come to this conclusion?” or “What evidence supports my thinking?” This metacognitive awareness enhances students’ ability to think critically and evaluate their own thoughts and actions. - Incorporate Real-World Examples: Connect classroom content to real-world examples that demonstrate the relevance of critical thinking skills. Engage students in analyzing and discussing current events, case studies, or ethical dilemmas. Encourage open-ended discussions and debates that require students to apply their critical thinking skills to real-life contexts. This cultivates their ability to think critically in various situations beyond the classroom. - Foster Collaboration and Communication: Encourage collaborative learning experiences that provide opportunities for students to engage in critical thinking together. Group projects, debates, and problem-solving activities can foster collaborative critical thinking. Provide guidance on effective communication and active listening techniques, emphasizing the importance of respectfully challenging ideas, considering different perspectives, and working together to reach logical conclusions. Teaching critical thinking skills is essential in preparing students for success in their academic and professional lives. By incorporating strategies that foster questioning, problem-solving, analysis, evaluation, metacognition, and collaboration, educators can empower students to become adept critical thinkers. 
As educators, let us embrace these strategies and support the development of critical thinking skills in our students, enabling them to face challenges with confidence, make informed decisions, and thrive in a complex world.
There are hundreds of millions of asteroids in our solar system, which means new asteroids are discovered quite frequently. It also means close encounters between asteroids and Earth are fairly common. Some of these close encounters end up with the asteroid impacting Earth, occasionally with severe consequences. A recently discovered asteroid, named 2023 BU, has made the news because today it passed very close to Earth. Discovered on January 21 by amateur astronomer Gennadiy Borisov in Crimea, 2023 BU passed only about 3,600 km from the surface of Earth (near the southern tip of South America) six days later on January 27. That distance is just slightly farther than the distance between Perth and Sydney and is only about 1 percent of the distance between Earth and our Moon. The asteroid also passed through the region of space that contains a significant proportion of the human-made satellites orbiting Earth. All this makes 2023 BU the fourth-closest known asteroid encounter with Earth, ignoring those that have impacted the planet or our atmosphere. How does 2023 BU rate as an asteroid and a threat? 2023 BU is unremarkable, other than that it passed so close to Earth. The diameter of the asteroid is estimated to be just 4–8m, which is on the small end of the range of asteroid sizes. There are likely hundreds of millions of such objects in our solar system, and it is possible 2023 BU has come close to Earth many times before over the millennia. Until now, we have been oblivious to the fact. In context, on average a 4-metre-diameter asteroid will impact Earth every year and an 8-metre-diameter asteroid every five years or so. Asteroids of this size pose little risk to life on Earth when they hit because they largely break up in the atmosphere. They produce spectacular fireballs, and some of the asteroids may make it to the ground as meteorites. Now that 2023 BU has been discovered, its orbit around the Sun can be estimated and future visits to Earth predicted. It is estimated there is a 1 in 10,000 chance 2023 BU will impact Earth sometime between 2077 and 2123. So, we have little to fear from 2023 BU or any of the many millions of similar objects in the Solar System. Asteroids need to be greater than 25m in diameter to pose any significant risk to life in a collision with Earth; to challenge the existence of civilisation, they’d need to be at least a kilometre in diameter. It is estimated there are fewer than 1,000 such asteroids in the Solar System, and one could be expected to impact Earth roughly every 500,000 years. We know about more than 95 per cent of these objects. Will there be more close asteroid passes? 2023 BU was the fourth closest pass by an asteroid ever recorded. The three closer passes were by very small asteroids discovered in 2020 and 2021 (2021 UA, 2020 QG and 2020 VT). Asteroid 2023 BU and countless other asteroids have passed very close to Earth during the nearly five billion years of the Solar System’s existence, and this situation will continue into the future. What has changed in recent years is our ability to detect asteroids of this size, such that any threats can be characterised. That an object roughly 5m in size can be detected many thousands of kilometres away by a very dedicated amateur astronomer shows that the technology for making significant astronomical discoveries is within reach of the general public. This is very exciting. Amateurs and professionals can together continue to discover and categorise objects, so threat analyses can be done. 
Another very exciting recent development came last year with the Double Asteroid Redirection Test (DART) mission, which successfully crashed a spacecraft into an asteroid and changed its trajectory. DART makes plausible the concept of redirecting an asteroid away from a collision course with Earth if a threat analysis identifies a serious risk with enough warning.
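As a quick arithmetic check of the "about 1 percent of the Earth–Moon distance" figure quoted above, the snippet below (an addition to the article) divides the approximate pass distance by the standard mean Earth–Moon distance of about 384,400 km, a value assumed here rather than taken from the article.

```python
# Check the "about 1 percent of the Earth-Moon distance" figure for 2023 BU.
closest_approach_km = 3_600     # approximate pass distance above Earth's surface
earth_moon_km = 384_400         # mean Earth-Moon distance (standard value, assumed)

fraction = closest_approach_km / earth_moon_km
print(f"{fraction:.2%}")        # about 0.94%, i.e. roughly 1 percent
```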
The bit is the most basic unit of information in computing and digital communications. The name is a portmanteau of binary digit. The bit represents a logical state with one of two possible values. These values are most commonly represented as either "1" or "0", but other representations such as true/false, yes/no, on/off, or +/− are also widely used. The relation between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program. It may be physically implemented with a two-state device. A contiguous group of binary digits is commonly called a bit string, a bit vector, or a single-dimensional (or multi-dimensional) bit array. A group of eight bits is called one byte, but historically the size of the byte is not strictly defined. Frequently, half, full, double and quadruple words consist of a number of bytes which is a low power of two. A string of four bits is a nibble. In information theory, one bit is the information entropy of a random binary variable that is 0 or 1 with equal probability, or the information that is gained when the value of such a variable becomes known. As a unit of information, the bit is also known as a shannon, named after Claude E. Shannon. The symbol for the binary digit is either "bit" as per the IEC 80000-13:2008 standard, or the lowercase character "b", as per the IEEE 1541-2002 standard. Use of the latter may create confusion with the capital "B" which is the international standard symbol for the byte. The encoding of data by discrete bits was used in the punched cards invented by Basile Bouchon and Jean-Baptiste Falcon (1732), developed by Joseph Marie Jacquard (1804), and later adopted by Semyon Korsakov, Charles Babbage, Herman Hollerith, and early computer manufacturers like IBM. A variant of that idea was the perforated paper tape. In all those systems, the medium (card or tape) conceptually carried an array of hole positions; each position could be either punched through or not, thus carrying one bit of information. The encoding of text by bits was also used in Morse code (1844) and early digital communications machines such as teletypes and stock ticker machines (1870). Ralph Hartley suggested the use of a logarithmic measure of information in 1928. Claude E. Shannon first used the word "bit" in his seminal 1948 paper "A Mathematical Theory of Communication". He attributed its origin to John W. Tukey, who had written a Bell Labs memo on 9 January 1947 in which he contracted "binary information digit" to simply "bit". Vannevar Bush had written in 1936 of "bits of information" that could be stored on the punched cards used in the mechanical computers of that time. The first programmable computer, built by Konrad Zuse, used binary notation for numbers. A bit can be stored by a digital device or other physical system that exists in either of two possible distinct states. These may be the two stable states of a flip-flop, two positions of an electrical switch, two distinct voltage or current levels allowed by a circuit, two distinct levels of light intensity, two directions of magnetization or polarization, the orientation of reversible double stranded DNA, etc. For devices using positive logic, a digit value of 1 (or a logical value of true) is represented by a more positive voltage relative to the representation of 0. 
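To make the byte and nibble terminology above concrete, here is a minimal Python sketch (not from the source article) that splits an arbitrary example byte into its two nibbles and its individual bits.

```python
# Illustrative only: split a single byte into nibbles and bits.
value = 0b1011_0110                # one byte (8 bits); an arbitrary example value

high_nibble = (value >> 4) & 0xF   # upper 4 bits -> one nibble
low_nibble = value & 0xF           # lower 4 bits -> the other nibble
bits = [(value >> i) & 1 for i in range(7, -1, -1)]  # most significant bit first

print(f"byte = {value:08b}")
print(f"high nibble = {high_nibble:04b}, low nibble = {low_nibble:04b}")
print(f"bits = {bits}")
```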
Different logic families require different voltages, and variations are allowed to account for component aging and noise immunity. For example, in transistor–transistor logic (TTL) and compatible circuits, digit values 0 and 1 at the output of a device are represented by no higher than 0.4 volts and no lower than 2.6 volts, respectively; while TTL inputs are specified to recognize 0.8 volts or below as 0 and 2.2 volts or above as 1. Transmission and processing Bits are transmitted one at a time in serial transmission, and by a multiple number of bits in parallel transmission. A bitwise operation optionally processes bits one at a time. Data transfer rates are usually measured in decimal SI multiples of the unit bit per second (bit/s), such as kbit/s. In the earliest non-electronic information processing devices, such as Jacquard's loom or Babbage's Analytical Engine, a bit was often stored as the position of a mechanical lever or gear, or the presence or absence of a hole at a specific point of a paper card or tape. The first electrical devices for discrete logic (such as elevator and traffic light control circuits, telephone switches, and Konrad Zuse's computer) represented bits as the states of electrical relays which could be either "open" or "closed". When relays were replaced by vacuum tubes, starting in the 1940s, computer builders experimented with a variety of storage methods, such as pressure pulses traveling down a mercury delay line, charges stored on the inside surface of a cathode-ray tube, or opaque spots printed on glass discs by photolithographic techniques. In the 1950s and 1960s, these methods were largely supplanted by magnetic storage devices such as magnetic-core memory, magnetic tapes, drums, and disks, where a bit was represented by the polarity of magnetization of a certain area of a ferromagnetic film, or by a change in polarity from one direction to the other. The same principle was later used in the magnetic bubble memory developed in the 1980s, and is still found in various magnetic strip items such as metro tickets and some credit cards. In modern semiconductor memory, such as dynamic random-access memory, the two values of a bit may be represented by two levels of electric charge stored in a capacitor. In certain types of programmable logic arrays and read-only memory, a bit may be represented by the presence or absence of a conducting path at a certain point of a circuit. In optical discs, a bit is encoded as the presence or absence of a microscopic pit on a reflective surface. In one-dimensional bar codes, bits are encoded as the thickness of alternating black and white lines. Unit and symbol The bit is not defined in the International System of Units (SI). However, the International Electrotechnical Commission issued standard IEC 60027, which specifies that the symbol for binary digit should be 'bit', and this should be used in all multiples, such as 'kbit', for kilobit. However, the lower-case letter 'b' is widely used as well and was recommended by the IEEE 1541 Standard (2002). In contrast, the upper case letter 'B' is the standard and customary symbol for byte. Multiples of bits Multiple bits may be expressed and represented in several ways. For convenience of representing commonly reoccurring groups of bits in information technology, several units of information have traditionally been used. 
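As an illustration of the TTL input thresholds quoted above (0.8 volts or below read as 0, 2.2 volts or above read as 1), the following is a small sketch; the function name and test voltages are invented for illustration, and real parts leave behaviour between the thresholds undefined.

```python
from typing import Optional

# Classify an input voltage using the TTL input thresholds quoted in the text:
# 0.8 V or below reads as 0, 2.2 V or above reads as 1; in between is undefined.
def ttl_input_level(voltage: float) -> Optional[int]:
    if voltage <= 0.8:
        return 0
    if voltage >= 2.2:
        return 1
    return None  # between thresholds: not guaranteed to read as either level

for v in (0.2, 0.8, 1.5, 2.2, 3.3):
    print(f"{v:.1f} V -> {ttl_input_level(v)}")
```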
The most common is the unit byte, coined by Werner Buchholz in June 1956, which historically was used to represent the group of bits used to encode a single character of text (until UTF-8 multibyte encoding took over) in a computer and for this reason it was used as the basic addressable element in many computer architectures. The trend in hardware design converged on the most common implementation of using eight bits per byte, as it is widely used today. However, because of the ambiguity of relying on the underlying hardware design, the unit octet was defined to explicitly denote a sequence of eight bits. Computers usually manipulate bits in groups of a fixed size, conventionally named "words". Like the byte, the number of bits in a word also varies with the hardware design, and is typically between 8 and 80 bits, or even more in some specialized computers. In the 21st century, retail personal or server computers have a word size of 32 or 64 bits. The International System of Units defines a series of decimal prefixes for multiples of standardized units which are commonly also used with the bit and the byte. The prefixes kilo (10³) through yotta (10²⁴) increment by multiples of one thousand, and the corresponding units are the kilobit (kbit) through the yottabit (Ybit).
Information capacity and information compression
When the information capacity of a storage system or a communication channel is presented in bits or bits per second, this often refers to binary digits, which is a computer hardware capacity to store binary data (0 or 1, up or down, current or not, etc.). Information capacity of a storage system is only an upper bound to the quantity of information stored therein. If the two possible values of one bit of storage are not equally likely, that bit of storage contains less than one bit of information. If the value is completely predictable, then the reading of that value provides no information at all (zero entropic bits, because no resolution of uncertainty occurs and therefore no information is available). If a computer file that uses n bits of storage contains only m < n bits of information, then that information can in principle be encoded in about m bits, at least on the average. This principle is the basis of data compression technology. Using an analogy, the hardware binary digits refer to the amount of storage space available (like the number of buckets available to store things), and the information content the filling, which comes in different levels of granularity (fine or coarse, that is, compressed or uncompressed information). When the granularity is finer (when information is more compressed), the same bucket can hold more. For example, it is estimated that the combined technological capacity of the world to store information provides 1,300 exabytes of hardware digits. However, when this storage space is filled and the corresponding content is optimally compressed, this only represents 295 exabytes of information. When optimally compressed, the resulting carrying capacity approaches Shannon information or information entropy. In the 1980s, when bitmapped computer displays became popular, some computers provided specialized bit block transfer instructions to set or copy the bits that corresponded to a given rectangular area on the screen. In most computers and programming languages, when a bit within a group of bits, such as a byte or word, is referred to, it is usually specified by a number from 0 upwards corresponding to its position within the byte or word.
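The claim above that an unequally likely bit of storage carries less than one bit of information can be made concrete with the binary entropy function H(p) = -p·log₂(p) - (1-p)·log₂(1-p). The sketch below is illustrative and not part of the source article; the probabilities are arbitrary examples.

```python
import math

def binary_entropy(p: float) -> float:
    """Information, in bits (shannons), carried by a bit that is 1 with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # completely predictable: no information gained on reading it
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.5, 0.9, 0.99, 1.0):
    print(f"p = {p:>4}: {binary_entropy(p):.3f} bits of information per stored bit")
# A fair bit (p = 0.5) carries exactly 1 bit; skewed bits carry less,
# which is why such data can in principle be compressed.
```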
However, 0 can refer to either the most or least significant bit depending on the context. Other information units Similar to torque and energy in physics; information-theoretic information and data storage size have the same dimensionality of units of measurement, but there is in general no meaning to adding, subtracting or otherwise combining the units mathematically, although one may act as a bound on the other. Units of information used in information theory include the shannon (Sh), the natural unit of information (nat) and the hartley (Hart). One shannon is the maximum amount of information needed to specify the state of one bit of storage. These are related by 1 Sh ≈ 0.693 nat ≈ 0.301 Hart. Some authors also define a binit as an arbitrary information unit equivalent to some fixed but unspecified number of bits. - Integer (computer science) - Primitive data type - Trit (Trinary digit) - Qubit (quantum bit) - Entropy (information theory) - Bit rate and baud rate - Binary numeral system - Ternary numeral system - Shannon (unit) - Coded Character Sets, History and Development (1 ed.). Addison-Wesley Publishing Company, Inc.. 1980. p. x. ISBN 978-0-201-14460-4. https://books.google.com/books?id=6-tQAAAAMAAJ. Retrieved 2016-05-22. - "Why is a byte 8 bits? Or is it?". Computer History Vignettes. 2000-08-08. http://www.bobbemer.com/BYTE.HTM. "[…] With IBM's STRETCH computer as background, handling 64-character words divisible into groups of 8 (I designed the character set for it, under the guidance of Dr. Werner Buchholz, the man who DID coin the term "byte" for an 8-bit grouping). […] The IBM 360 used 8-bit characters, although not ASCII directly. Thus Buchholz's "byte" caught on everywhere. I myself did not like the name for many reasons. […]" - Understanding Information Transmission, 2006 - Digital Communications, 2006 - IEEE Std 260.1-2004 - "Units: B". https://www.unc.edu/~rowlett/units/dictB.html#bit. - Information theory and coding. McGraw-Hill. 1963. - "A Mathematical Theory of Communication". Bell System Technical Journal 27 (3): 379–423. July 1948. doi:10.1002/j.1538-7305.1948.tb01338.x. http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf. "The choice of a logarithmic base corresponds to the choice of a unit for measuring information. If the base 2 is used the resulting units may be called binary digits, or more briefly bits, a word suggested by J. W. Tukey.". - "A Mathematical Theory of Communication". Bell System Technical Journal 27 (4): 623–666. October 1948. doi:10.1002/j.1538-7305.1948.tb00917.x. - A Mathematical Theory of Communication. University of Illinois Press. 1949. ISBN 0-252-72548-4. http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf. - "Instrumental analysis". Bulletin of the American Mathematical Society 42 (10): 649–669. 1936. doi:10.1090/S0002-9904-1936-06390-1. http://projecteuclid.org/euclid.bams/1183499313. - National Institute of Standards and Technology (2008), Guide for the Use of the International System of Units. Online version. - "7. The Shift Matrix". The Link System. IBM. 1956-06-11. pp. 5–6. Stretch Memo No. 39G. http://archive.computerhistory.org/resources/text/IBM/Stretch/pdfs/06-07/102632284.pdf. Retrieved 2016-04-04. "[…] Most important, from the point of view of editing, will be the ability to handle any characters or digits, from 1 to 6 bits long […] the Shift Matrix to be used to convert a 60-bit word, coming from Memory in parallel, into characters, or "bytes" as we have called them, to be sent to the Adder serially. 
The 60 bits are dumped into magnetic cores on six different levels. Thus, if a 1 comes out of position 9, it appears in all six cores underneath. […] The Adder may accept all or only some of the bits. […] Assume that it is desired to operate on 4 bit decimal digits, starting at the right. The 0-diagonal is pulsed first, sending out the six bits 0 to 5, of which the Adder accepts only the first four (0-3). Bits 4 and 5 are ignored. Next, the 4 diagonal is pulsed. This sends out bits 4 to 9, of which the last two are again ignored, and so on. […] It is just as easy to use all six bits in alphanumeric work, or to handle bytes of only one bit for logical analysis, or to offset the bytes by any number of bits. […]" - "The Word "Byte" Comes of Age...". Byte Magazine 2 (2): 144. February 1977. https://archive.org/stream/byte-magazine-1977-02/1977_02_BYTE_02-02_Usable_Systems#page/n145/mode/2up. "[…] The first reference found in the files was contained in an internal memo written in June 1956 during the early days of developing Stretch. A byte was described as consisting of any number of parallel bits from one to six. Thus a byte was assumed to have a length appropriate for the occasion. Its first use was in the context of the input-output equipment of the 1950s, which handled six bits at a time. The possibility of going to 8 bit bytes was considered in August 1956 and incorporated in the design of Stretch shortly thereafter. The first published reference to the term occurred in 1959 in a paper "Processing Data in Bits and Pieces" by G A Blaauw, F P Brooks Jr and W Buchholz in the IRE Transactions on Electronic Computers, June 1959, page 121. The notions of that paper were elaborated in Chapter 4 of Planning a Computer System (Project Stretch), edited by W Buchholz, McGraw-Hill Book Company (1962). The rationale for coining the term was explained there on page 40 as follows: Byte denotes a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units. A term other than character is used here because a given character may be represented in different applications by more than one code, and different codes may use different numbers of bits (ie, different byte sizes). In input-output transmission the grouping of bits may be completely arbitrary and have no relation to actual characters. (The term is coined from bite, but respelled to avoid accidental mutation to bit.) System/360 took over many of the Stretch concepts, including the basic byte and word sizes, which are powers of 2. For economy, however, the byte size was fixed at the 8 bit maximum, and addressing at the bit level was replaced by byte addressing. […]". - Buchholz, Werner, ed. (1962), "Chapter 4: Natural Data Units", Planning a Computer System – Project Stretch, McGraw-Hill Book Company, Inc. / The Maple Press Company, York, PA., pp. 39–40, http://archive.computerhistory.org/resources/text/IBM/Stretch/pdfs/Buchholz_102636426.pdf, retrieved 2017-04-03 - "A proposal for a generalized card code of 256 characters". Communications of the ACM 2 (9): 19–23. 1959. doi:10.1145/368424.368435. - Information in small bits Information in Small Bits is a book produced as part of a non-profit outreach project of the IEEE Information Theory Society. The book introduces Claude Shannon and basic concepts of Information Theory to children 8 and older using relatable cartoon stories and problem-solving activities. 
- "The World's Technological Capacity to Store, Communicate, and Compute Information", especially Supporting online material, Martin Hilbert and Priscila López (2011), Science, 332(6025), 60-65; free access to the article through martinhilbert.net/WorldInfoCapacity.html
- Digital Communication. Tata McGraw-Hill Education. 2005. ISBN 978-0-07059117-2. https://books.google.com/books?id=0CI8bd0upS4C&pg=PR20.
- Bit Calculator – a tool providing conversions between bit, byte, kilobit, kilobyte, megabit, megabyte, gigabit, gigabyte
- BitXByteConverter – a tool for computing file sizes, storage capacity, and digital information in various units
Original source: https://en.wikipedia.org/wiki/Bit.
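As a brief aside on the information units mentioned in the article above, the stated relationship 1 Sh ≈ 0.693 nat ≈ 0.301 Hart follows directly from changing the base of the logarithm. A minimal Python sketch, not taken from the source:

```python
import math

# Convert an amount of information in bits (shannons) into other units.
def bits_to_nats(bits: float) -> float:
    return bits * math.log(2)        # ln 2 ≈ 0.693 nat per shannon

def bits_to_hartleys(bits: float) -> float:
    return bits * math.log10(2)      # log10 2 ≈ 0.301 Hart per shannon

print(f"1 Sh = {bits_to_nats(1):.3f} nat = {bits_to_hartleys(1):.3f} Hart")
```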
ITHACA, N.Y. Aug. 14, 2019 – As methane concentrations increase in the Earth's atmosphere, chemical fingerprints point to a probable source: shale oil and gas, according to new Cornell University research published in Biogeosciences, a journal of the European Geosciences Union. The research suggests that this methane has less carbon-13 relative to carbon-12 (denoting the weight of the carbon atom at the center of the methane molecule) than does methane from conventional natural gas and other fossil fuels such as coal. This carbon-13 signature means that since the advent of high-volume hydraulic fracturing – commonly called fracking – shale gas has increased its share of global natural gas production and has released more methane into the atmosphere, according to the paper's author, Robert Howarth, the David R. Atkinson Professor of Ecology and Environmental Biology at Cornell. About two-thirds of all new gas production over the last decade has been shale gas produced in the United States and Canada, he said. While atmospheric methane concentrations have been rising since 2008, the carbon composition of the methane has also changed. Methane from biological sources such as cows and wetlands has a low carbon-13 content compared to methane from most fossil fuels. Previous studies erroneously concluded that biological sources are the cause of the rising methane, Howarth said. Carbon dioxide and methane are critical greenhouse gases, but they behave quite differently in the atmosphere. Carbon dioxide emitted today will influence the climate for centuries to come, as the climate responds slowly to decreasing amounts of the gas. Unlike its slow response to carbon dioxide, the atmosphere responds quickly to changes in methane emissions. "Reducing methane now can provide an instant way to slow global warming and meet the United Nations' target of keeping the planet well below a 2-degree Celsius average rise," Howarth said, referring to the 2015 Paris Agreement that boosts the global response to climate change threats. Atmospheric methane levels had previously risen during the last two decades of the 20th century but leveled off in the first decade of the 21st century. Then, global human-caused methane emissions increased dramatically from 2008-14, from about 570 teragrams (570 billion kilograms) annually to about 595 teragrams. "This recent increase in methane is massive," Howarth said. "It's globally significant. It's contributed to some of the increase in global warming we've seen and shale gas is a major player." "If we can stop pouring methane into the atmosphere, it will dissipate," he said. "It goes away pretty quickly, compared to carbon dioxide. It's the low-hanging fruit to slow global warming." This research was funded by the Park Foundation and the Atkinson Center. For additional information, see this Cornell Chronicle story.
Light Shed on How the Brain Forms and Stores Long-Term Memory
Wanting to better understand how the brain forms and stores long-term memory, an international team of scientists undertook a study of the brain's circuits. Their work sheds a new, updated light on the way the circuits in the brain work, providing fresh insights into the brain's long-term memory formation and storage. Their work was published in the journal Cell Reports on January 20, 2023. "In order to understand how we form memory, store memory, and recall memory, it is essential to unravel the complicated wiring of the memory circuits composed by the hippocampus and the entorhinal cortex," said Shinya Ohara, an assistant professor in the Graduate School of Life Sciences at Tohoku University. The hippocampus is the region of the brain that is primarily related to memory. The entorhinal cortex is the area of the brain that serves as a kind of network hub for navigation, memory, and perception of time. It is part of the brain's hippocampal memory system and serves as a gateway between the hippocampal formation and the neocortex, that part of the brain that controls higher brain function. Scientists have long understood the general organization of this hippocampal and entorhinal circuit. In the early 1990s, scientists identified the basic wiring of this brain circuit. With these earlier studies, scientists thought that the hippocampus and the entorhinal cortex were connected by parallel identical circuits. However, the research team's findings bring new understanding of the memory circuits in the hippocampus and entorhinal cortex. The team conducted their study using anterograde tracing and in vitro electrophysiology in rodents. They discovered that the ventral hippocampus efficiently sends out information to the neocortex via the medial entorhinal cortex. Their study revealed that the ventral hippocampus - that part of the hippocampus related to stress and emotion - sends information to the medial entorhinal cortex layer Va neurons. The entorhinal cortex consists of six layers - this Va layer is one of the deep layers. When this information is received in the entorhinal cortex, it passes the information on to the neocortex. This connectivity indicates that the ventral hippocampus controls the signal flow from the hippocampus to the neocortex, which supports long-term memory formation and storage. Since the ventral hippocampus is well known for processing emotional information, the research team hypothesizes that this circuit may play an important role in memorizing emotional events. "For example, in our daily lives, we remember happy events or sad events very well. The neural mechanism of how the emotional events are memorized is largely unknown. The circuit which we identified in this study may play an important role in processing such emotional memories," said Ohara. Looking ahead, the team's next step is to test their hypothesis. They plan to selectively inactivate this pathway of the ventral hippocampal to medial entorhinal cortex layer Va while the animal performs a memory task. Generally, animals will approach a place where they experienced happy events while avoiding the places that caused bad memories.
"We think that the animal will not be able to form such emotional memory when the ventral hippocampal-medial entorhinal cortex circuit is inactivated," said Ohara. Reference: Ohara S, Rannap M, Tsutsui KI, Draguhn A, Egorov AV, Witter MP. Hippocampal-medial entorhinal circuit is differently organized along the dorsoventral axis in rodents. Cell Rep. 2023;42(1):112001. doi: 10.1016/j.celrep.2023.112001 This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.
The attractive or repulsive interaction between any two charged objects is an electric force. … The magnitudes of the forces are then added as vectors in order to determine the resultant sum, also known as the net force. The net force can then be used to determine the acceleration of the object.
What is net force? Explain with an example
The total sum of forces acting on a body is known as the net force. For example, if the wheels of a car push it forward with a force of 5 newtons while friction drags it back with a force of 3 newtons, the net force is 2 newtons forward.
What is the formula for force of attraction?
The mathematical formula for gravitational force is F = GMm/r², where G is the gravitational constant.
What is net force (Class 9)?
The net force is the force which is the sum of all the forces acting on an object simultaneously. A net force can accelerate a mass. Some force or other acts on any body, whether at rest or in motion.
What is a real-life example of net force?
You are walking: you exert a force on the road with your feet and, as a result of friction, the road exerts an equal and opposite force which helps you to walk.
What is g in F = mg?
F = mg – F is force, m is mass, g is acceleration due to gravity. It would be mass (m) times gravity (g) = force (F). According to Newton's Law of Universal Gravitation, gravitation is the force that attracts objects toward each other.
What is the value of g in physics?
In the first equation above, g is referred to as the acceleration of gravity. Its value is 9.8 m/s² on Earth. That is to say, the acceleration of gravity on the surface of the Earth at sea level is 9.8 m/s².
What is the use of Coulomb's law?
The Coulomb's law equation provides an accurate description of the force between two objects whenever the objects act as point charges. A charged conducting sphere interacts with other charged objects as though all of its charge were located at its center.
What is net force (8th grade)?
The net force acting on an object is the combination of all of the individual forces acting on it. If two forces act on an object in opposite directions, the net force is the difference between the two forces. … If two forces act on an object in the same direction, the net force is the sum of the two forces.
What is an example of friction, speed and net force in your life?
The driver plants his foot on the accelerator and, with the force applied by the engine, the car starts accelerating. But the faster it goes, the more friction there is. The wind pushes harder against it, and the friction in the drivetrain increases. So the net force acting on the car decreases.
What are 10 examples of balanced forces?
Examples of balanced forces:
- The weight of an object and the normal force acting on a body are balanced. …
- A car that is pushed from opposite sides with equal force. …
- A lizard on a wall in a vertical position. …
- A ball hanging by a rope. …
- A weighing balance where the weight in both of the pans is exactly equal.
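The formulas quoted in this Q&A (net force as a sum of forces, F = GMm/r², and F = mg) can be tied together in a few lines. This is a minimal Python sketch, not part of the source; the constant G, the value of g, and the Earth mass and radius used below are standard reference values supplied here purely for illustration.

```python
# Minimal sketch of the relations quoted above; input values are arbitrary examples.
G = 6.674e-11   # gravitational constant, N·m²/kg² (standard reference value)
g = 9.8         # acceleration due to gravity at Earth's surface, m/s²

def net_force(push: float, drag: float) -> float:
    """Net force when a push and an opposing drag act along the same line."""
    return push - drag

def gravitational_force(M: float, m: float, r: float) -> float:
    """F = G*M*m / r**2 between two point masses a distance r apart."""
    return G * M * m / r**2

def weight(m: float) -> float:
    """F = m*g, the weight of a mass near Earth's surface."""
    return m * g

print(net_force(push=5.0, drag=3.0))   # 2.0 N, as in the car example above
print(weight(10.0))                    # 98 N for a 10 kg mass
# Same ~98 N from the full law, using assumed values for Earth's mass and radius:
print(gravitational_force(5.97e24, 10.0, 6.37e6))
```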
Sanskrit, meaning 'perfected' or 'refined', is one of the oldest, if not the oldest, of all attested human languages. It belongs to the Indo-Aryan branch of the Indo-European family. The oldest form of Sanskrit is Vedic Sanskrit, which dates back to the 2nd millennium BCE. Known as 'the mother of all languages,' Sanskrit is the dominant classical language of the Indian subcontinent and one of the 22 official languages of India. It is also the liturgical language of Hinduism, Buddhism, and Jainism. Scholars distinguish between Vedic Sanskrit and its descendant, Classical Sanskrit; however, the two varieties are very similar and differ mostly in some points of phonology, grammar, and vocabulary. Originally, Sanskrit was considered not to be a separate language, but a refined way of speaking, a marker of status and education, studied and used by Brahmins. It existed alongside spoken vernaculars, called Prakrits, which later evolved into the modern Indo-Aryan languages. Sanskrit continued to be used as a literary and liturgical language long after it ceased to be spoken as a first language.
The book is intended to impart to the young student the easier principles of the French language, and to give him a good knowledge of the regular verbs, and of those irregular verbs which may be classified; in short, to form an Introduction to the New Method or Larger Course. The aim of the author, in the whole course of the work, has been to give simple precepts, such as children may easily understand, and to illustrate the same by copious examples, easy to be imitated. Repetition in the rules has not been avoided where such repetition would render the meaning more intelligible. The frequent repetitions in the vocabularies are also intentional; and after the nouns, in these, the gender is indicated. This method has been preferred to that of placing the article before such nouns.
The frequency inverter is an electronic device used to control the speed of a three-phase motor. The frequency that arrives at the motor input determines the speed at which it will operate. Three-phase motors have a principle of operation based on the rotating electric field: the field that appears when an alternating power supply, with its phases 120° apart, is applied to the poles of the motor. The speed at which the motor works is set by this rotating electric field; it is called the synchronous speed. It is determined as a function of the number of poles of the motor (a constructive characteristic) and of the frequency supplied to the motor. Mathematically speaking, the synchronous speed (Ns) is 120 times the frequency (f) in Hz, divided by the number of poles (p) of the motor, that is, Ns = (120 × f) / p. From this formula, it is clear that the higher the frequency that reaches the motor, the higher its working speed; conversely, a lower frequency gives a lower speed. It is this change that the frequency inverter makes, and it performs this intervention at the motor input. Frequency is a quantity measured in hertz (Hz). It corresponds to the number of oscillations or cycles per second in the alternating current. Using a frequency inverter has a number of advantages, such as: controlling the motor speed without large torque losses; smooth acceleration through programming; braking the motor directly, without the need for mechanical brakes; programming the speed according to need; automation; flexibility; safety; simple installation; greater precision; etc. In order to understand how this change is made to the frequency supplied by the mains to the motor input, it is first necessary to know the parts of a frequency inverter.
- Input circuit (bridge rectifier): This block rectifies the alternating supply that feeds the inverter. The most common configuration is a full-wave diode bridge with a capacitor at the output that filters the voltage obtained.
- Power inverter: This part turns the DC voltage of the previous block into a three-phase voltage to supply the motor. Transistors (IGBTs) are used, switching the voltage according to the signals from the PWM (Pulse Width Modulation) generator. When these signals drive an inductive load such as the three-phase motor, the output takes an almost sinusoidal shape, despite being generated as pulse trains. In this circuit, the waves that determine the speed and power applied to the motor are formed. The control block generates the pulses that act on the switching transistors.
- Surge protection: The voltage of the power grid is not perfect and may contain surges and transients; to protect the circuit, elements such as varistors, TVS devices and similar components are used in frequency inverters.
- Internal protection: This block analyzes the voltages present at the inverter output, so that if they show any disturbance, the control block is activated to take the necessary measures, such as interrupting the process.
- Driver board (IGBT firing, power supplies, etc.): This block generates the signals that excite the output power transistors. It analyzes the conditions of the load, determining what voltage should be applied to it to generate the necessary torque.
- Panel: The panel presents general information and is also where the inverter is programmed.
- Interface (I/O): Through this block, the inverter communicates with external devices, such as computers.
- Control block: In this block, decisions are made according to the programmed settings and to internal or external signals.
The frequency inverter is connected to the mains, and at its output there is a load that will receive the frequency modified by the inverter. In the first stage, the inverter uses the rectifier circuit to transform the alternating voltage into direct voltage. After that, the second stage does the reverse: it transforms the DC voltage back into AC voltage (inversion), at the desired frequency. In the mains, the frequency is fixed, usually 60 Hz, and the voltage is transformed by the input rectifier into pulsed DC (full-wave rectification). The capacitor (filter) turns this into a smooth direct voltage. This direct voltage is connected to the output terminals by the inverter's semiconductor devices, the transistors, which act as static switches. The control system controls the action of these semiconductors to achieve a pulsed output voltage with fundamental frequencies 120° out of phase. The voltage is chosen so that the voltage/frequency ratio is constant, resulting in constant-flux operation and maintaining the motor's maximum overload capacity.
Scalar Inverter X Vector Inverter
Scalar inverters are used in simpler tasks such as controlling start and stop and maintaining speed at a constant value (regulation). The control logic used is the constant voltage/frequency ratio. The vector inverter is more complex compared to the scalar inverter. Basically, it promotes the decoupling between flux control and speed control through the transformation of variables. Because of this control technique, these inverters are used in more complex tasks, which require great precision. The biggest difference between these inverters lies in how each one controls the motor quantities: the scalar inverter changes the frequency according to the constant voltage/frequency ratio, while the vector inverter does this in a more complex way, adjusting the parameters that influence these quantities.
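The synchronous-speed relation quoted earlier, Ns = 120 × f / p, is easy to tabulate, which shows directly how changing the supply frequency changes motor speed. A minimal Python sketch; the frequencies and pole counts below are arbitrary example values.

```python
# Minimal sketch of the synchronous-speed formula quoted above: Ns = 120 * f / p,
# with Ns in rpm, f in Hz and p the number of poles. Example values only.
def synchronous_speed_rpm(frequency_hz: float, poles: int) -> float:
    return 120.0 * frequency_hz / poles

for poles in (2, 4, 6):
    for f in (30.0, 60.0):   # e.g. an inverter lowering a 60 Hz mains supply to 30 Hz
        print(f"{poles} poles at {f:5.1f} Hz -> {synchronous_speed_rpm(f, poles):7.1f} rpm")
# Halving the frequency halves the synchronous speed, which is exactly the
# lever the frequency inverter uses to control the motor.
```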
Types of Capacitor
Capacitors are one of the primary electrical components, acting as stores of charge in circuits. Capacitance is measured in farads or, more usefully, microfarads. Three common types of capacitor are described below.
Variable capacitors are commonly used in radio tuning sets. They consist of semicircular aluminium or brass plates separated by air. One set of plates is fixed and the other is rotated by a knob to change the overlap and hence the capacitance.
Mansbridge capacitors consist of two long strips of tinfoil separated by thin waxed paper or polyester film. These are rolled up and sealed inside a metal box to prevent entry of moisture.
Electrolytic capacitors take the form of two sheets of aluminium foil separated by muslin soaked in a type of ammonium borate, rolled up and sealed in an insulating container. Wires attached to the foil strips are then connected to a battery. Electrolysis takes place and a thin film of aluminium oxide forms on the positive foil. This is an insulator and serves as the dielectric. Very large and compact capacitances may be made this way.
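As a small numerical aside on capacitors as "stores of charge", the defining relation Q = C × V links capacitance to the charge stored at a given voltage. This sketch is not from the source text; the component value and voltage are arbitrary examples.

```python
# Illustration of capacitance as charge storage: Q = C * V.
# Not from the source text; the component value and voltage are arbitrary examples.
def stored_charge_coulombs(capacitance_farads: float, voltage_volts: float) -> float:
    return capacitance_farads * voltage_volts

c_microfarads = 470.0                    # a hypothetical electrolytic capacitor value
c_farads = c_microfarads * 1e-6          # 1 microfarad = 1e-6 farad
print(f"{stored_charge_coulombs(c_farads, 12.0):.4f} C stored at 12 V")  # ~0.0056 C
```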
Much of moral philosophy construes morality as a search for truths and moral knowledge, and much of everyday moralizing consists of handwringing over “justice,” “human rights,” etc. This article disabuses morality of these delusions by examining differences between descriptive and normative mental models, leading to the implication that truth and falsity apply to the former, but not the latter. This in turn implies practices for dealing with moral sentiments, including the rejection of moral constructs as fictitious. Table of Contents - Descriptive versus Normative Mental Models - Separation of Descriptive and Normative Elements - Moral Constructs - Conflict between Moral Skepticism and Moral Realism - Practices of Moral Skepticism - Further Reading Descriptive versus Normative Mental Models Truth or Falsity The words “true” and “false” are symbols, and symbols can be used in a variety of ways. The definition of “truth” used here is different than that used by those who say “speak your truth” to denote things that are subjectively important to someone. Instead, “truth” is used here to describe a relationship between a mental model and the external world.1 A mind might have a belief, which is a representation of how the mind perceives the external world. When a belief accurately models the external world, it is a true belief. When a belief and the external world are not in agreement, this is a false belief.2 Beliefs are, in practice, rarely true or false in isolation. In order to examine truth or falsity, many interrelated beliefs are examined at once. For instance, if someone believes that a certain backpack weighs approximately 10 kilograms, other beliefs must also be held in order to examine whether this belief about the backpack is true or false. These other beliefs include, for instance, the belief that heavier objects tip a balance lower when placed opposite lighter objects. Because of this, instead of truth or falsity being discussed as properties of individual beliefs, this inquiry examines truth and falsity as properties of a mental model. A mental model includes a particular belief and all the other related beliefs necessary in order to test this belief. This kind of mental model is descriptive, in that it attempts to model how things actually are. A common definition of knowledge is justified, true belief. Thus, if a mental model is true, and if the mind that has the mental model has a justification for it in the form of evidence, then the mind knows something about the external world. No mind ever has complete and perfect knowledge of the external world. Therefore, it is prudent for a sound mind to allow for revision of its mental models. Descriptive mental models try to explain and predict phenomena in the external world. However, the explanations and predictions of false mental models are flawed, and a sound mind will always try to amend false mental models in order to make them true. For instance, mental models illustrated in Figure 2 are flawed. In the first case, an individual might abstain from attempting to carry a backpack that was within the individual’s ability to carry. In the second case, an individual might attempt to carry a backpack that is greater than the individual’s capacity. On the other hand, the individual in either of the cases illustrated in Figure 1 can make correctly informed decisions about the backpack. The totality of the external world cannot, in practice, be observed by any given mind. 
Some specific circumstances of the external world can, however, be partially observed. These observations constitute empirical evidence. When a mind encounters empirical evidence that is consistent with a descriptive mental model, then it is prudent for the mental model to persist. When a mind encounters evidence that contradicts a descriptive mental model, then it is prudent for the mental model to be amended in order to better agree with the external world, such as in Figure 3. Thus, empirical evidence prompts the revision of false beliefs to better approximate true beliefs. Morality can also be construed as a kind of mental model. However, moral sentiments do something other than simply try to represent the external world. Mere modeling of how things are does not constitute morality. For instance, merely believing that there is murder in the world as a matter of fact as in Figure 4 does not constitute a moral sentiment. Morality models how things should be. Moral sentiments occur when there is a judgment or prescription about the external world. For instance, a moral mental model does not merely believe that there is murder in the world as in Figure 4, but judges murder to be wrong as in Figure 5. Moral mental models are normative, rather than descriptive. They do not have implications that explain or predict phenomena, but instead judge the worth of things (e.g., “murder is bad” or “charity is good”), prescribe certain behavior (e.g., “be kind to one another”), or proscribe behavior (e.g., “do not steal”). While both true-or-false beliefs and moral sentiments constitute kinds of mental models, an important difference occurs in how these kinds of mental model react to contradiction with observations of the external world. Earlier it was seen that when a descriptive mental model and the external world disagree, it is prudent to amend the mental model to fit the external world. However, with moral sentiments, when a normative mental model and the external world disagree, the mind tries to amend the external world, not the mental model. It is contrary to the purpose of morality to have the moral sentiment that murder is wrong, observe an incident of murder, and then conclude, “I guess I was mistaken. Murder is righteous,” as in Figure 6. Rather, someone who has the moral sentiment that murder is wrong and who observes an incident of murder is expected to try to prevent the murder, as in Figure 7. One might also censure the act, call the authorities, report the act, or, with a longer view, investigate the root causes of murder in society and attempt to remedy them. In other words, the purpose of a normative mental model is to amend the external world. Thus, descriptive mental models and normative mental models are diametrically opposed in how they handle contradictory empirical evidence. Such contradictions lead to a change of the mental model in the descriptive case, but lead to attempts to change the external world in the normative case. No Moral Truth For a worldview that accepts these definitions, it must be concluded that inquiring as to the truth or falsity of moral sentiments is a mistake, in the same way it is a mistake even to ask about the color of a taste or the odor of a touch. One is simply asking the wrong kind of question in these cases. There are no conditions under which a moral sentiment is false, because any disagreement between the external world and a normative mental model is judged to be a fault of the external world, not the mental model. 
Indeed, this is the very point of moral judgment. Where there is not even an opportunity for falsity, there is neither the possibility of truth. Thus, there are no moral truths. No Moral Knowledge A direct consequence of this and the earlier definition of knowledge as justified, true belief is that there is no such thing as moral knowledge. Moral knowledge is impossible, not due to any limitation of the human condition, but because it is nonsensical to attempt to apply truth and falsity to moral sentiments. In philosophical jargon, a position that denies the possibility of moral knowledge is termed “moral skepticism.” Separation of Descriptive and Normative Elements Since true-or-false beliefs and moral sentiments are different and are even in a certain sense opposite, and because true-or-false beliefs and moral sentiments are evaluated in different ways, it is best to distinguish between the two in order to maintain clear thinking. Unfortunately, human beings speak, hear, write, and read natural languages, and natural languages obfuscate attempts to achieve such clarity by often mixing the expression of the two. Two individuals can relate the same fact, but one individual might label a faction “terrorists,” while the other individual might use the phrase “freedom fighters.” Indeed, the two individuals might have the exact same descriptive mental models about what has actually happened. However, the connotation of the language used differs between the two individuals. The connotation of the first includes a disapproving moral sentiment, while the connotation of the second includes an approving moral sentiment. Such connotations lead to a mixing of true-or-false beliefs and moral sentiments at the level of language. Because of this, and because human beings encounter a lot of language usage in their daily lives, often the first encounter one has with a new true-or-false belief also includes moral sentiments mixed with it, and the first encounter one has with a new moral sentiment also includes true-of-false beliefs mixed with it. Thus, true-or-false beliefs and moral sentiments do not come partitioned in a neat package, properly separated and labeled. Instead, a sound mind must consciously and deliberately do such partitioning itself in order to maintain clarity in its thinking. The Method of Attitudinal Propositions Doing such a mental partitioning can be facilitated by the realization that moral sentiments can readily be translated into descriptive beliefs about the normative mental models themselves. Such a belief can be termed an “attitudinal proposition” because it is a belief about a mind’s attitude toward something. For instance, if an individual named Zhaohui says, “stealing is wrong,” then the corresponding attitudinal proposition is “Zhaohui disapproves of stealing” or “Zhaohui is morally opposed to stealing” or “Zhaohui feels that stealing is wrong.” While it makes no sense to attempt to evaluate the truth or falsity of the moral sentiment “stealing is wrong” itself, descriptive mental models can be correct or incorrect about Zhaohui’s normative mental model. From the perspective of the more than seven billion human minds on the Earth, Zhaohui’s mind is part of their external world. Such attitudinal propositions make a different kind of claim than the objective statement that “murder is wrong.” They make claims about subjective mental states, claims that are true or false, as illustrated by Figure 8 and Figure 9. 
Thus, while there are no moral truths and no moral knowledge, there can be truths and knowledge about moral sentiments. Mixed statements with both descriptive information and normative connotations can be interpreted as a conjunction between two propositions: one about a state of affairs and another about a mind’s attitude toward the state of affairs. For instance, if someone named Chidi says, “the Liberation Front terrorists were defeated in the battle today,” this can be interpreted as two propositions: “the Liberation Front faction lost the battle today” and “Chidi disapproves of the Liberation Front.” Similarly, if someone named Astrid says, “during the battle today, the Liberation Front was set back in their struggle against oppression,” this can be interpreted as “the Liberation Front faction lost the battle today” and “Astrid approves of the Liberation Front.” If social decorum permits, one can make these translations explicitly as part of one’s conversation. This has the potential to inform one’s interlocutor that one has a worldview that includes moral skepticism. Of course, there are social settings where it may not be prudent to make such translations overtly. In these cases, one can quietly make this sort of translation from moral sentiments to attitudinal propositions in one’s own mind. Regardless of whether done overtly or tacitly, making such translations as a matter of habit rescues a truth from a sort of dialogue that consists of untruths. These habitual translations thus create opportunities to gain new knowledge and make learning experiences out of what, taken at face value, would not be opportunities to learn. Sometimes moral sentiments are expressed in very literal, self-aware terms, e.g., “I am a vegetarian because I am ethically opposed to raising and harvesting animals for food.” However, sometimes moral sentiments are expressed in terms of abstract concepts, such as “justice,” “virtue,” etc. When they appear in the predicates of statements, these normative moral constructs can readily be translated to descriptive beliefs by the method of attitudinal propositions. “This war is a just war” can be translated to the belief that the speaker morally approves of the war in question, and “this war is unjust” can be translated to the belief that the speaker morally disapproves of the war. “Practicing birth control is virtuous” can be translated to the belief that the speaker morally approves of birth control, and “using abortion as a method of family limitation is vicious” can be translated to the belief that the speaker morally disapproves of the use of abortion as a birth control method. However, sometimes these moral constructs are used not just in the predicates of statements, but assume an existence of their own, being used as the subject of statements. For instance, the word “good” is understood to mean something that is desirable when predicated of a physically existing subject, but in ancient Greek philosophy it was fashionable to speak about “the Good,” as if it were a thing in and of itself, in statements such as “the Good is One.” Likewise, today some individuals create theories of justice. For a time, it was fashionable for thinkers to speak of “Nature and Nature’s laws” with an uppercase “N.” There are at least three major ways morally skeptical worldviews can interpret moral constructs. One interpretation notes that not all grammatically correct sentences of natural languages form a meaningful thought. 
For instance, the sentence “purple ideas sleep furiously” is a grammatically correct sentence of the English language, but it is nonsense and meaningless. Puzzling over such a sentence is a waste of time. Under this interpretation, statements about moral constructs are viewed in a similar light, judging them to be meaningless, in a literal sense. This is ethical noncognitivism. (Ayer 1946) This is perhaps the most obvious extension of the earlier interpretation of moral sentiments. Much as it was interpreted to be a mistake to attempt to judge the truth or falsity of moral sentiments, this interpretation believes it a mistake to attempt to judge the truth or falsity of statements about moral constructs. Statements about moral constructs might still have connotations, even if their denotation is empty because they refer to nothing. When these connotations can be translated using the method of attitudinal propositions, there may be a true-or-false belief to be discovered. For instance, if someone named Thomas were to say, “Nature and Nature’s Law demands the overthrow of the Republic of Freelandia,” then this can be translated to “Thomas approves of the overthrow of Freelandia.” This can be done without engaging in speculative philosophy about what Nature wants or what Nature’s Law is, because ethical noncognitivism judges such statements as meaningless. Thus, whereas mixed language can be translated into the conjunction of a factual statement and a statement about someone’s normative mental model, statements about moral constructs are interpreted as missing the former, factual content and consisting of, at most, just the latter attitudinal proposition. Another interpretation is to take statements about moral constructs at face value and to make existence of a subject a criterion of truth. Under this interpretation, statements such as “all unicorns are white” are interpreted as false because there are not, in fact, any unicorns. Similarly, statements in terms of “justice,” “virtue,” etc., are trying to describe some feature of the non-mental world.3 However, there are no such physically existing things such as justice, virtue, etc., to describe, and all of these sort of statements fail. Therefore, all such statements about moral constructs are false. This is an error theory of morality. (Mackie 1977) This kind of an error theory for morality is specific to the subjects of statements. One can have a more general error theory for morality, which would contradict the earlier sections of this article, since a more general error theory would judge moral sentiments to be false, and earlier it was judged a mistake to apply truth or falsity to moral sentiments. However, a more specific error theory in which the existential criterion is applied only to subjects of statements is compatible with the earlier sections of this article. Under such a more specific error theory, statements such as “murder is wrong” are neither false nor true, since the subject of the statement, i.e., murder, describes something that exists, in this case a physical act. Statements such as “the Good is One” or “justice is more important than facts” are false because their subjects – i.e., “the Good” and “justice” – do not denote something that exists in the non-mental world. A third interpretation of statements about moral constructs is that they are not to be taken literally, but are useful fictions, like a fable, intended to remind individuals to act in accord with some moral system. 
However, while it may be the case that moral constructs should be interpreted as useful fictions, many individuals have, in fact, not interpreted them as such, but instead have taken them literally. Pages and pages of philosophical treatises have been devoted to speculating about “the Good,” “virtue,” etc. Even today, in contemporary news outlet there is much handwringing about the nature of “justice” or “human rights.” An issue with the useful fictions interpretation is that, while it may make a mind internally consistent, this is largely hidden from the external world. An individual who ironically speaks in terms of “justice” or “human rights” with the understanding that these are useful fictions is not readily distinguishable from someone who literally speaks in terms of “justice” or “human rights” as if they described actual things. This leaves those with useful fiction interpretations either to call out repeatedly that moral constructs are fictional when using them or to maintain secret worldviews indistinguishable from diametrically opposed worldviews. Regardless if statements about moral constructs are interpreted as neither true nor false, always false, or false but in a way that has utility, the important point that all these morally skeptical interpretations have in common is that statements about moral constructs are not literally true. Thus, such statements about moral constructs are moral fictions. The Fallacy of Externalization Statements about moral constructs are fictional because all truths describing morality are describing normative mental models, however indirectly this may be. Value judgements do this indirectly by ascribing goodness or badness to things in the non-mental world, such as the act of murder. Moral constructs take this indirection even further with the invention of abstractions that do not describe anything in the non-mental world, such as justice, virtue, etc. While all truths about morality describe normative mental models, moral fictions are a tool to create the impression that one is discussing something external to one’s own mind. This lie may be termed the “fallacy of externalization.”4 Moral fictions use the same language used to describe things that have physical existence independent of one’s own mind, such as gravity, oxygen, or tectonic shift. The use of such language is a way to pretend that one is describing something other than one’s own normative mental model. Delusions of Moral Knowledge By using the fallacy of externalization, moral fictions create the illusion of being statements made by a descriptive mental model. Descriptive mental models, as has been seen, can be evaluated as true or false and can constitute knowledge. However, it is a mistake to attempt to apply truth or falsity to moral sentiments, and there is no moral knowledge. Thus, moral fictions are a way for a mind to delude itself into believing it has moral knowledge. Conflict between Moral Skepticism and Moral Realism Worldviews that believe moral sentiments can constitute knowledge of the non-mental world are called “moral realism” in the philosophical jargon. Unfortunately, moral realism has been a common if not the most common meta-ethical5 position in the history of philosophy taught in academic settings, and so one is liable to encounter many moral fictions when perusing published thinking about ethics. While academic philosophy is notoriously irrelevant to the daily lives of the masses, this is not just a recondite disagreement. 
In contemporary life, one need not look very far to find those advancing their own pet ideas of what “justice” is. Morally contentious issues such as abortion are often framed in terms of moral fictions such as “personhood” or “human rights.” Worldviews of moral realism and worldviews of moral skepticism are prone to conflict. For instance, a morally realist worldview might view its invention of moral fictions as giving credence to its own morality. It might view its own morality as superior to others because it is “rational,” substantiated with argumentation, and supported by logic. A morally realist worldview might look at the morality of a morally skeptical worldview as unsubstantiated, inferior because of its lack of moral fictions, and prone to “emotional arguments.” However, a morally skeptical worldview would view the moral fictions of a morally realist worldview as a self-delusion, not an asset. It would view the claims that a moral system is superior to others as merely begging the question. It would view argumentation supporting a moral system as backwards rationalization of a preconceived conclusion. It would view claims that moral sentiments are supported with logic as impossible, since logic is a calculus of truth conditions, and it is mistaken to apply truth or falsity to moral sentiments. It would view the attempts to disparage moral sentiments as “emotional” as entirely misguided, since emotions are a large part of how morality actually works in a human mind, rather than how philosophers speculate it should work in their pointless treatises. This meta-ethical conflict has large implications for how human beings go about their morality and how their worldviews relate to and understand one another. Unfortunately, there are many individuals who have never gone to the trouble of figuring out their own meta-ethical position. Some of these individuals launch right into moralizing, assuming a kind of naive moral realism. It then falls upon those of morally skeptical worldviews to articulate their own meta-ethical positions before a naive moral realist can even understand where they are coming from. Such is a motive for this article.
Practices of Moral Skepticism
The meta-ethical position articulated herein is not just of speculative interest, but has implications for how to go about the doing of morality, implications that lead to certain practices. The first practice is, when expressing one’s own morality, to do so by referring to one’s attitudes directly, such as with statements of the form “I ethically approve of …” or “I disapprove of … morally.” Since the only true things that can be said about morality are descriptions of a normative mental model, this is the most direct and truest way to express moral sentiments. Furthermore, everyone is the world’s foremost expert on the contents of one’s own mind, so this is an approach that leads to authoritative information on a subject. Another practice is, when encountering the moral sentiments of others expressed in a manner not referring directly to their attitudes, to translate these statements into attitudinal propositions. This can be done overtly, when social context allows for it, or silently in one’s own mind, when social context does not. This has the benefit of rooting out the linguistic obfuscation of true-or-false beliefs and moral sentiments so that one can use methods for investigating truth or falsity for the former and not mistakenly attempt to use such methods for the latter.
A final practice is to avoid moral constructs entirely. Because they are fictions, nothing true can be said about them, and it is a waste of time to engage in speculation about them. It is relatively easy to avoid being the origin of moral fictions, but one is liable to encounter the moral fictions of others. Sometimes there is an attitudinal proposition that can be rescued from others’ moral fictions; sometimes not. Either way, this is perhaps the most challenging situation that those of a morally skeptical viewpoint encounter, for the only remedy such a circumstance allows is to explain precisely why one views moral fictions as frivolous and self-delusional. The strains of moral realism popular in the history of philosophy would portray morality as a quest for objective truths and moral knowledge. Moral skepticism as articulated here rejects this approach to morality. While this clarifies what morality is not, it does not identify what morality actually is.
Moral Psychology Instead of Moral Philosophy
The worldview articulated here identifies moral sentiments as normative mental models. Thus, it identifies morality as a psychological construct. Since moral sentiments of any one person are influenced by those around them, morality is a socio-psychological construct. At the same time that the meta-ethical position articulated here implies that traditional moral philosophy is not a worthwhile pursuit, it points to moral psychology as a way to truly understand morality. This may seem to have little relevance for anyone outside of academic pursuits. Most individuals lack the time and resources to do psychology (though many at least have the time and resources to study the work of others).
Self-Awareness Instead of Self-Delusion
There is also an important implication relevant for all individuals, regardless of vocation. It is that moral enlightenment does not come from discovering supposed moral truths, nor does it come from the vain invention of fictions intended to rationalize one’s moral sentiments, but comes from introspection. The best one can do is to know thyself, as the ancient maxim goes. A critical thinker is compelled to believe what is true and disbelieve what is false. What remains after this is just how one feels. The only enlightenment that can be attained about this remainder is to be aware of how one feels and to abstain from trying to rationalize one’s feelings post hoc. Then, one can investigate oneself with the same tools of inquiry – such as empirical evidence, investigation into causation, etc. – that one might use to investigate any other fact about the world.
Ayer, Alfred Jules. 1946. Language, Truth and Logic. 2nd Edition. New York: Dover Publications.
Mackie, J. L. 1977. Ethics: Inventing Right and Wrong. London: Penguin Books.
This article uses the phrase “external world” to denote everything in the cosmos except for the contents of a specific mind.
This is a correspondence definition of truth, which is a kind of truth also called “a posteriori,” “synthetic,” “contingent,” or “matters of fact.” There are also coherence definitions of truth, which in philosophical jargon are called “a priori,” “analytic,” “necessary,” or “relations of ideas.” This latter kind of truth comprises things that are true by definition, such as “1 + 1 = 2” or “bachelors are unmarried men.” These kinds of true beliefs are tautologies, i.e., they refer to the same thing twice.
This kind of tautology is only more interesting than more obvious tautologies like “2 = 2” or “bachelors are bachelors” inasmuch as they refer to the same thing with different symbols. This article regards tautologies as not useful for understanding morality because they merely beg the question. For instance, if one were to be questioned as to why one felt that murder is wrong, and one replied “because I have defined ‘wrong’ that way,” this would not be informative.
This article uses the phrase “non-mental world” to denote everything in the cosmos except for the contents of any mind. This is a more restrictive class than what is referred to by “external world”, by exactly one mind.
This kind of fallacy is different from the fallacies discussed in articles tagged “fallacy” in this blog. The articles tagged “fallacies” discuss mistakes in interpreting evidence, regardless of one’s philosophical position. The fallacy of externalization is a fallacy to moral skepticism, but is not a fallacy to moral realism.
The terms “meta-ethics” or “meta-morality” are philosophical jargon for that branch of moral philosophy that deals with what morality is, as opposed to those branches of moral philosophy that tackle specific ethical problems or that invent moral systems.
When I was a veterinary student, I was taught that vaccinations are a cornerstone of healthy pets. As I went on to practice, however, I saw severe acute vaccine reactions, as well as the clear onset of a variety of health problems soon after vaccination. I also saw several cats with feline fibrosarcoma, a tumor thought to appear at the vaccination site in roughly 1 out of 1,000 cats. Now, after more than 20 years in practice, I believe that pets with fewer or no vaccines do better than those who are vaccinated. I also believe that vaccines can cause serious side effects that are often not noticed or recognized by conventional veterinarians. Here are some thoughts and suggestions for concerned pet owners. First, keep in mind that pathogens mainly affect individuals with a weakened immune system, so healthy food, fresh water, the right amount of exercise, and low stress are the best disease prevention. With the exception of rabies, most bacteria or viruses enter through the mouth or respiratory tract, so an animal with a strong immune system, in most cases, will respond by eradicating the pathogen before it gets a chance to grow and spread. It also is rare to have more than one serious pathogen attack the body at the same time. In contrast, vaccines introduce multiple pathogens by an injection that bypasses the natural gateways. Vaccines can cause symptoms similar to the disease they are supposed to prevent. Combinations of vaccines also can overwhelm the immune system and cause long-term problems. Repeated exposure to vaccines can create toxic buildup and lead to chronic disease, like arthritis or even cancer. Indeed, vaccines often contain carcinogens, such as mercury and formaldehyde. We also should note that vaccines sometimes are made by infecting healthy laboratory animals, including dogs, cats, and horses. When I was a student, I saw these poor souls locked in perpetual isolation, often losing their lives under torturous conditions. The path of “no harm” is to limit vaccinations when possible.
Steps to keep your dog disease-free:
1. Maternal antibodies protect puppies fully until the age of 10 to 16 weeks. Vaccination before 12 weeks of age may neutralize the maternal immunity, leaving your pet more vulnerable. I have confirmed this by running antibody tests on these puppies.
2. When your puppy is 12 weeks old, get an antibody “titer test,” available in most veterinary clinics. The most significant diseases are distemper, parvovirus, and leptospirosis. Most clinics run just the first two tests.
3. If antibodies are present, socialize your puppy on a moderate basis with other dogs. Being exposed to other dogs, while being protected by maternal antibodies, is “nature’s way of vaccination.” Some labs warn of low (insufficient) antibody levels, but I have not seen any dogs with any antibody presence that got sick. Retest your dog for antibodies at the age of five months.
4. If a titer test shows zero antibodies, consider vaccinating against parvovirus at 12 weeks and against distemper four weeks later. Never give more than one antigen at a time.
5. Avoid boosters and unnecessary vaccine exposure by getting a titer test one month after the last vaccine and then two to three months later.
6. Do not use vaccines for kennel cough, Lyme disease, or Giardia. These have the highest rates of side effects. I have seen many dogs that were vaccinated for Lyme disease that have symptoms of arthritis at the age of two to three years. This vaccine has not been approved for people because of safety issues.
Kennel cough is self-limiting, much like a cold. The vaccine causes frequent side effects, including kennel cough itself.
7. If you live in an area with rabies or travel with your dog, the vaccine may be necessary. Give it at least four to eight weeks apart from other vaccinations.
In my experience, healthy puppies may not need any vaccination and will maintain their antibodies (protections) for a lifetime. This is the safest way. Of course, no one can give you a 100-percent guarantee that your puppy will not get infected, with or without vaccines, but I have not seen any dogs with parvo or distemper since starting to use this protocol in the late ’90s. If your health practitioner, day care, or boarding facility demands vaccines, remember that you make the final decision. Be polite, state your request clearly, and stand your ground as much as you can. If they do not respect your wishes, you are free to choose other providers.
Making the Connection: Sterile Connectors Empower Viral Vaccine Production
March 19, 2020
Vaccines prevent disease by introducing the human immune system to a weakened or killed strain of a virus or bacterium, or to a subunit of one. They are formulated to introduce the antigen (an antibody-generating substance) and encourage a response that produces antibodies against the disease, thereby creating immunity.
A Brief Vaccine History
In 1796, when smallpox was a devastating, fast-spreading disease, Edward Jenner – who had trained as a surgeon/apothecary – was intrigued to learn that milkmaids who had contracted cowpox while working on dairy farms seemed to be immune to smallpox, and theorized that these things were connected. To support his theory, he took pus from a milkmaid infected with cowpox and scratched it into the arm of a child. When the child was later exposed to smallpox and did not fall ill, the concept of the vaccine was born. Over time the approach to vaccine production and administration has evolved, with great progress across the 20th century, including the global eradication of smallpox, formally declared in 1980. This progress was not entirely without incident (e.g., the Cutter Incident); however, the lessons learned directly contributed to improvements in safety. Fast-forwarding to the present day, vaccines are a cornerstone of public health, and the market has continued to grow. According to a Fortune Business Insights report, the global vaccines market was worth $41.61 billion in 2018 and is projected to reach $93.08 billion by 2026, exhibiting a CAGR of 10.7% during the forecast period.
The Flu Vaccine
One of the most common viral vaccines, the flu vaccine – whether inactivated influenza vaccine (IIV), recombinant influenza vaccine (RIV) or live attenuated influenza vaccine (LAIV) – is delivered seasonally to protect the population from long-term illness or, in extreme cases involving immature or impaired immune systems, death. The strains in the vaccine are chosen based on research that indicates which flu strains are expected for the upcoming season. The flu vaccine is made via one of three regulatory-approved approaches:
- Egg-based (most common): For inactivated influenza vaccines, the candidate vaccine viruses (CVV) are inactivated using heat or chemicals and the virus hemagglutinin antigen, which triggers the human immune system to create antibodies that target the virus, is purified. For live attenuated influenza vaccine, the CVV are alive but weakened so as not to cause illness and are grown in different chick embryos in series.
- Cell-based (gaining traction due to egg allergies): The flu CVV is inoculated into a cell line (generally mammalian, though various methods exist) to replicate for several days. Just a few weeks ago, Seqirus was awarded the first adjuvanted cell-based flu vaccine approval by the FDA.
- Recombinant (the newest method): The DNA coding for the flu virus antigen is obtained and replicated synthetically by combining it with a viral vector such as baculovirus or a plasmid, and placing it in a qualified cell line to produce the antigen in bulk.
In each case, once grown, the flu virus or related antigen is harvested, purified, inactivated and formulated into an injectable or inhalable format. Depending on the strength of the vaccine, adjuvants will be added to increase effectiveness. Stabilizers may also be added to enhance shelf life and stability.
With the rapid emergence of acute viral diseases such as Ebola and Zika, the development of more potent vaccines is required. RNA-based vaccines, which use the body’s cells to produce a specific antigen that elicits an immune response, represent a promising alternative due to their high potency, rapid development and potential for low-cost manufacturing.
Challenges in Vaccine Manufacturing
Traditional viral vaccine production is laborious, starting with isolating a target virus strain. If the wrong strain(s) is selected, the vaccine will not be efficacious. Even with the right strain, there are challenges based on a virus’ ability to mutate and evolve rapidly. And while adjuvants enhance the immune response, aluminum-based adjuvants cannot be sterile filtered, increasing the need for single-use closed systems to ensure sterility is maintained during production. Existing vaccine development and commercialization methods can take 12 to 14 years, making time a critical constraint – seasonal or epidemic demands amplify this challenge. Pressures to decrease the timeline and cost of vaccine production continue to build, and there is a tremendous global push to make affordable vaccine access a standard. Recent outbreaks, such as various coronavirus strains (SARS, MERS and COVID-19), various flu viruses and Ebola, have highlighted the importance of accelerating the identification, development and commercialization of effective life-saving vaccines. The industry is responding with novel vaccine platforms that accelerate the development timeline to less than one year. Once developed and confirmed as safe and effective, the remaining challenge is one of scaling up the process to supply adequate doses to the market.
The Importance of Fluid Paths in Vaccine Production
From upstream to downstream and into final formulation and fill, there are multiple points of fluid transfer in vaccine production. The points of transfer must be sterile and failsafe to avoid lost batches due to contamination and, ultimately, to ensure patient safety. At Pall, our next-generation Kleenpak® Presto sterile connectors offer a unique solution to fluid path challenges, especially in single-use closed vaccine production systems. Proven to maintain sterility when connecting two fluid paths, they are commonly used when sterile filtration cannot be done prior to vaccine filling of syringes or vials intended for patient use. This sterility defense is provided by a polyethersulfone peel strip material that protects each connector end and is integrated on a gamma-irradiated or autoclaved single-use assembly. With an easy-to-use ‘click-pull-twist’ mechanism, each connector end is joined, the protective peel strip removed and the connectors actuated. Intuitive design features visual error-proofing to verify that the product is securely connected and ready for sterile fluid transfer. While trendier drug products like cell and gene therapies get a lot of media attention, there is still a great deal of innovation in the vaccine space. More than 120 new products are projected to be in development, and manufacturers are looking at ways of improving vaccine manufacturing with a different toolbox for speed and reliability. Pall Biotech supports drug manufacturers globally with ready-to-use solutions that integrate easily into existing systems and processes, or total solutions with equipment and services that cover your full process.
We have also amassed a wealth of data and expertise to optimize the development of a process from research and development through to commercial scale. To learn more about viral vaccines, subscribe now and get the latest blogs, news and offers from Pall Biotech. Claire Jarmey-Swan – Global Product Manager, Pall Biotech
Gravitational-wave astronomy is an emerging branch of observational astronomy which aims to use gravitational waves (minute distortions of spacetime predicted by Einstein’s theory of general relativity) to collect observational data about objects such as neutron stars and black holes, events such as supernovae, and processes including those of the early universe shortly after the Big Bang. Different sources radiate at different frequencies, and these bands are covered by different instruments: ground-based interferometers such as the initial Laser Interferometer Gravitational-wave Observatory (LIGO) and its advanced configuration (Advanced LIGO), the Virgo detector at the European Gravitational Observatory in Cascina, Italy, GEO600 in Sarstedt, Germany, and the Kamioka Gravitational-wave Detector (KAGRA), operated by the University of Tokyo at the Kamioka Observatory, Japan; planned space-based detectors; and pulsar timing arrays. Gravitational-wave astronomy seeks to use direct measurements of gravitational waves to study astrophysical systems and to test Einstein’s theory of gravity. The first gravitational-wave signatures ever observed came from a binary black hole merger. The Laser Interferometer Space Antenna (LISA) is a European Space Agency mission designed to detect and accurately measure gravitational waves – tiny ripples in the fabric of space-time – from astronomical sources such as black holes and neutron stars. In preparation for such observations, researchers have developed computer simulations of the faint signals emitted when two black holes merge. As a young area of research, gravitational-wave astronomy is still in development; however, there is consensus within the astrophysics community that this field will evolve to become an established component of 21st-century multi-messenger astronomy, detecting gravitational waves from binary systems composed of white dwarfs, neutron stars, and black holes. A collaboration of physicists has announced the first ever direct detection of gravitational waves – ripples created by the collision of black holes. NASA and the European Space Agency are planning to launch a space-based detector, while the pulsar timing arrays currently in operation continue to build up the accuracy and sensitivity needed to detect gravitational waves from the early universe. The Advanced LIGO detectors, built as a National Science Foundation (NSF) project to detect gravitational waves here on Earth, are a masterpiece of experimental physics. On 17 August 2017, the Advanced LIGO and Virgo interferometers observed, for the first time, gravitational waves emitted by the inspiral and merger of a binary neutron star system; these detectors also target the gravitational waves produced by other compact binary systems and by mergers of supermassive black holes. The fact that this single gravitational-wave event (GW170817) could be used to estimate the age of the universe is remarkable, and is not possible with every gravitational-wave detection.
Image Credit: LIGO/A. Simonnet
V.2 #2 Recommended Practices - Teaching Content to Middle School and Secondary Students with Learning Disabilities
When I speak to middle and high school teachers, they all express the same trepidation about the start of the school year: how am I going to meet the wide range of academic needs of all my students while teaching the curriculum? Teachers at these grade levels are challenged to meet the increasing academic and cultural diversity of today’s classroom in an atmosphere of high-stakes testing and rigorous achievement standards for all students, including those with learning disabilities (LD). In the United States, there are 2,780,218 students (ages 6-21) being served in the LD category. This is 45% of the total number of students served under the Individuals with Disabilities Education Improvement Act of 2004, and 5.2% of the U.S. resident population of students in that age group. An estimated 48% of students with LD spend nearly all their instructional time in general education and an additional 29% spend approximately half of their instructional time there (U.S. Department of Education, 2005). So how can teachers create supportive academic environments that engage learners and provide successful academic experiences for all students? Furthermore, how do teachers make the content of their curriculum accessible to all learners in heterogeneous classrooms? Two such methods are differentiated instruction and universal design for learning. Differentiated instruction is an instructional method for teaching content to students with diverse learning needs. Differentiation in instruction can occur in the following areas and in the following ways: (1) content—the selection of curriculum, how it is presented and accessed by students; (2) process—how students come to understand the curriculum through the use of tiered activities (e.g., teachers provide choices for assignments that vary in difficulty), pacing of lessons, and strategically designed lessons; (3) product—how the students will demonstrate knowledge, competency or understanding of a topic or skill; (4) affect—how the tone of the classroom community will be structured to ensure respect and examination of perspectives; and (5) learning environment—how the physical setting of the classroom will support learning, availability of materials and procedures for participating (van Garderen & Whittaker, 2006). While differentiation is a method for planning accessibility of content, universal design for learning (UDL) focuses on reducing and eliminating barriers to learning. UDL is a method for considering student diversity while preparing lessons, rather than making accommodations after the fact. UDL principles suggest that teachers consider appropriate goals and use flexible and supportive digital materials, diverse methods, and flexible assessments. To begin the process of purposeful planning of curriculum units driven by both the principles of UDL and methods of differentiated instruction, teachers must consider the following questions.
What is the critical content all students must leave this unit understanding?
What are the academic strengths and weaknesses of the students in my class?
What types of course artifacts—print, electronic, multi-media, etc.—can be incorporated into my curriculum to increase access to the information the students need to comprehend?
How are course instruction and assessment activities designed to make the curriculum content accessible to all learners?
What strategies do students need to have in order to learn the content?
The challenge of making curriculum accessible to all learners can be daunting even for the most experienced teacher. The principles of UDL and differentiated instruction come together with research-proven instructional strategies in a very practical and helpful resource on how to design and implement accessible curriculum entitled Teaching Content to All: Evidence-Based Inclusive Practices in Middle and Secondary Schools by Keith Lenz and Don Deshler with Beth Kissam. The authors developed a practical method for planning instruction to meet the needs of academically diverse learners called SMARTER Planning. SMARTER is an acronym for the following:
Shape the Critical Questions: identifying the critical, essential topics or ideas in the course curriculum by formulating, in 10 questions or fewer, what you would like your students to learn
Map the Critical Content: organizing the essential topics or ideas to elucidate relationships between them
Analyze for Learning Difficulties: identifying content that might be difficult for students to learn because of the academic diversity of your students
Reach Enhancement Decisions: creating an overall plan for addressing learning opportunities and learning challenges within the context of the class’s diversity, including graphic organizers, note-taking systems, strategy use, participation options, accommodations, instructional activities and materials
Teach Strategically: informing students of how instruction is delivered and explicitly teaching students strategies, methods, and content
Evaluate Mastery: planning a variety of evaluation products that assess whether or not students are learning what they are supposed to be learning, are engaged in their learning, and receive commensurate grades that reflect their knowledge
Revisit Outcomes: going back to revise your course, asking if the critical course questions were meaningful and accurate
SMARTER Planning helps teachers decide what to teach by identifying the necessary curricular content expected of all students in the course. These are the critical ideas for the curriculum. Each curricular unit is then broken down to identify the following: (a) what content particular to that unit of study all students must know and what skills they must demonstrate to support the critical ideas, (b) what most students must know and what skills they must demonstrate to support the critical ideas, and (c) what some students will know and what skills they must demonstrate to support the critical ideas. Mastery of critical core ideas for all students is necessary for “C” performance—reflecting the core ideas of the unit mandatory for continued progress through the curriculum. Mastery of the critical ideas and the additional supporting skills most students must know is necessary for “B” performance. Mastery of the critical content plus the additional content and additional concepts only some students are expected to know is required for “A” performance. Once teachers identify what they will teach, they must determine how to teach it. Deciding what to teach and how to teach it is only a part of the process for teaching in an academically diverse classroom. Teachers must also teach students how to learn. Successful classrooms of diverse learners are led by teachers who effectively teach the content of their curriculum and teach their students to be effective and active learners.
Through explicit instruction in strategy use, modeling appropriate strategies and monitoring strategy implementation by students, teachers help to create active learners that know how to learn content. A helpful resource in providing strategy instruction to middle school and secondary students is Esther Minskoff’s and David Allsopp’s Academic Success Strategies for Adolescents with Learning Disabilities and ADHD. The techniques in this book were developed at James Madison University through a federal research grant and many of them are available on the grant’s website, The Learning Toolbox [http://coe.jmu.edu/Learningtoolbox/]. The Learning Toolbox is a useful resource for identifying appropriate strategies, for teaching strategies, and for educating parents on the strategies their children are using. Strategies for math, reading, study skills, organization, test-taking and content areas are detailed. Improving educational outcomes for academically and culturally diverse learners has been the focus of recent federal funding initiatives since the implementation of the No Child Left Behind Act of 2001, the Individuals with Disabilities Education Act of 1997, and the Individual with Disabilities Education Improvement Act of 2004. Several research centers funded by federal research grants have developed websites and web-based trainings to assist teachers in the design and implementation of accessible content area curriculum. These are described in the webliography at the end of this column. Included in this list are the University of Kansas Center for Research on Learning, whose materials and research are incorporated by Lenz and Deshler, and the Center for Applied Special Technology, which is devoted to the implementation of UDL. Teaching students with academically diverse needs is the charge of all general education teachers in today’s schools. The presence of diversity, both academically and culturally, in today’s classrooms creates increased variability in academic performance that should not only drive instructional accommodation, but should also take advantage of it. Differentiated instruction, UDL and purposeful course planning and strategy instruction are research proven methods for achieving this mission. Lenz, B.K., Deshler, D.D., & Kissam, B.R. (2004). Teaching content to all: Evidence-based inclusive practices in middle and secondary schools. Pearson Education, Inc. Minskoff, E. & Allsopp, D. (2003). Academic success strategies for adolescents with learning disabilities and ADHD. Paul H. Brookes Publishing Co. U.S. Department of Education. (2005). Twenty-seventh annual report to Congress on implementation of the Individuals with Disabilities Education Act. Washington, DC: Author. Van Garderen, D. & Whittaker, C. (2006) Planning differentiated, multicultural instruction for secondary inclusive classrooms. Teaching Exceptional Children, 38(3), pp. 12-20. Friend, M. & Bursuck, W.D. (2009). Including students with special needs: A practical guide for classroom teachers (5th ed.). Upper Saddle River, NJ: Pearson Education Inc. Hitchcock, C., Meyer, A., Rose, D., & Jackson, R. (2002) Access to the general curriculum: Universal design for learning. Teaching Exceptional Children, 35(2), pp. 8-17. Mastropieri, M.A. & Scruggs, T.E. (2007). The inclusive classroom: Strategies for effective instruction (3rd ed.). Upper Saddle River, NJ: Merrill Prentice-Hall. Prater, M.A. (2003). She will succeed! Strategies for success in inclusive classrooms. Teaching Exceptional Children, 35(5), pp. 58-64. 
Center on Accelerating Student Learning (CASL) CASL is a research effort to create instructional programs that will accelerate learning for students with disabilities in the early grades and thereby provide a solid foundation for strong achievement in the intermediate grades and beyond. CASL is a five-year collaborative effort supported by the U.S. Department of Education's Office of Special Education Programs (OSEP). Participating institutions are the University of Maryland, Teachers College of Columbia University, and Vanderbilt University. Visit the Outreach section of the CASL Web site for information on ordering materials to use in the classroom. [http://kc.vanderbilt.edu/CASL/index.html] Center for Applied Special Technology CAST develops innovative, technology-based educational resources and strategies based on the principles of Universal Design for Learning (UDL). A wealth of free resources for educators are available, including: UDL lesson design tools, a digital text book builder, a strategy tutor, online tutorials, and related publications. [www.cast.org] Center for Electronic Studying CES is a research and development group at the University of Oregon College of Education investigating innovative applications of technology for middle school, secondary, and post-secondary students, their teachers and their schools. Their Web site offers information and resources about the projects currently under investigation. [http://ces.uoregon.edu/] Institute for Academic Access The Institute for Academic Access is conducting research to create instructional methods and materials that will provide secondary students with disabilities authentic access to the high school general education curriculum. It is a five-year, three-site, federally funded collaborative project of the University of Kansas Center for Research on Learning (KU-CRL) and the University of Oregon Institute for the Development of Educational Achievement (UO-IDEA). Each Web site provides information about instructional strategies. [http://kucrl.org/IAA%20Web/index.html] Learning Toolbox This website was developed at James Madison University with a U.S. Department of Education grant on Steppingstones in Technology Innovation for Students with Disabilities. It contains tools and resources to assist secondary and postsecondary students who have LD and ADHD. The Learning Toolbox provides strategies for test taking, studying, note taking, problem solving, and remembering information. The toolbox has three access areas – one for parents explaining the strategies students may be using and how to help support them at home; one for teachers outlining the steps for selecting and teaching the strategies; and, last, one for students that help them select and use an appropriate strategy. [http://coe.jmu.edu/Learningtoolbox/] National Center on Accessing the General Curriculum (NCAC) In a collaborative agreement with the U.S. Department of Education's Office of Special Programs (OSEP), CAST has established a National Center on Accessing the General Curriculum to provide a vision of how new curricula, teaching practices, and policies can be woven together to create practical approaches for improved access to the general curriculum by students with disabilities. [http://www.cast.org/ncac/] National Center for Culturally Responsive Educational Systems The National Center for Culturally Responsive Educational Systems (NCCRESt) is a project funded by OSEP. 
This program provides technical assistance, tools, products, position papers and professional development resources to reduce referrals to special education and close the achievement gap between students from culturally and linguistically diverse backgrounds and their peers. [www.nccrest.org] University of Kansas Center for Research on Learning University of Kansas Center for Research on Learning is an organization noted for creating instructional solutions that dramatically improve quality of life, learning, and performance for students with disabilities. Researchers at the center have developed the Strategic Instruction Model (SIM), a comprehensive group of strategies that include revised curriculum materials and teaching routines to address the needs of learners in their classrooms. The Center offers training and professional development programs. [http://www.ku-crl.org/] Annmarie Urso, Ph.D., is an Assistant Professor at the Ella Cline Shear School of Education, Division of Special Education, at the State University of New York at Geneseo. Her current research interests include the role of processing speed in the cognitive profiles of poor readers and effective interventions for students identified as treatment resisters in reading. Dr. Urso also studies pre-service teachers as they prepare to teach culturally and linguistically diverse exceptional learners. She is interested in the role of cultural historical activity theory and cultural modeling design as frameworks for course design in pre-service teacher education programs.
To create a named range using VBA, you need to use the “Names” property together with the “Add” method. The Add method takes arguments to define the name that you wish to give to the range and to specify the address of the range (make sure to use the dollar sign with the address to freeze the range).
Create a Named Range using VBA
- Define the workbook where you want to create the named range.
- Use the Names property and then the Add method.
- Specify the name in the “Name” argument.
- Refer to the range using the “RefersTo” argument.
Putting these steps together, you start from the active workbook, and then, by using the “Names” property with the “Add” method, you define the name of the range and, at the end, the address of the range that you want to use. As mentioned earlier, in the range address you need to use the $ sign to freeze the address. You can also use ThisWorkbook to refer to the workbook where you are writing the code, or you can refer to a different workbook using a workbook object.
VBA to Create Named Range from Selection
You can also use the Selection property to create a named range from the selection. Consider the following code.
ActiveSheet.Names.Add Name:="myRangeName", RefersTo:=Selection
And in the following code, you have an input box with which you can enter the name that you want to give to the named range.
Sub vba_named_range()
    Dim iName As String
    ' Ask the user for the name to assign to the current selection.
    iName = InputBox("Enter Name for the Selection.")
    ' Create a sheet-level named range that refers to the selected cells.
    ActiveSheet.Names.Add Name:=iName, RefersTo:=Selection
End Sub
Resizing a Named Range using VBA (Dynamic Named Range)
To resize a named range that is already in the worksheet, you need to use the Resize property and tell VBA how many rows and columns you want the range to cover. Consider the following code, which takes the named range “myRange” (initially just cell A1) and resizes it down to row 11 and across to column M.
Sub vba_named_range()
    Dim iRow As Long
    Dim iColumn As Long
    ' Find the last used row and column, starting from cell A1.
    iRow = ActiveSheet.Range("A1").End(xlDown).Row
    iColumn = ActiveSheet.Range("A1").End(xlToRight).Column
    ' Resize the existing named range to cover the used area.
    ActiveSheet.Range("myRange") _
        .Resize(iRow, iColumn).Name = "myRange"
End Sub
I have split this into three parts to make it easier to understand; now, let’s get into it.
- In the FIRST part, you declare variables to store the row and column counts.
- In the SECOND part, you use the “End” method with the range to get the last row and column and store them in the variables.
- In the THIRD part, you use the Resize property with the named range “myRange” and, after that, the row and column numbers that you have in the variables.
When you run this code, it resizes the old range according to the data you have and makes it a dynamic named range. Whenever you need to update it, you can run the code and resize the existing named range.
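To tie these pieces together, here is a minimal, self-contained sketch of both steps in one macro: it creates a workbook-level named range with Names.Add and then redefines the same name so that it always covers the block of data starting in cell A1. The workbook reference, the sheet name "Sheet1", the name "myRange", and the starting address are placeholder assumptions for illustration only; adjust them to your own file.
Sub CreateAndResizeNamedRange()
    ' Illustrative sketch (assumed sheet, name, and range addresses).
    Dim ws As Worksheet
    Dim lastRow As Long
    Dim lastColumn As Long

    Set ws = ThisWorkbook.Worksheets("Sheet1")

    ' 1. Create the named range; the $ signs freeze the address.
    ThisWorkbook.Names.Add _
        Name:="myRange", _
        RefersTo:="=Sheet1!$A$1:$B$10"

    ' 2. Redefine it so it spans the contiguous data block at A1.
    lastRow = ws.Range("A1").End(xlDown).Row
    lastColumn = ws.Range("A1").End(xlToRight).Column
    ThisWorkbook.Names("myRange").RefersTo = _
        "=" & ws.Name & "!" & ws.Range("A1").Resize(lastRow, lastColumn).Address
End Sub
Running the macro again after new rows or columns have been added simply redefines the same name over the larger block, which gives the same dynamic effect as the Resize approach shown above.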
More on VBA Range and Cells - How to Set (Get and Change) Cell Value using a VBA Code - How to Sort a Range using VBA in Excel - How to Merge and Unmerge Cells in Excel using a VBA Code - How to Check IF a Cell is Empty using VBA in Excel - VBA ClearContents (from a Cell, Range, or Entire Worksheet) - Excel VBA Font (Color, Size, Type, and Bold) - How to AutoFit (Rows, Column, or the Entire Worksheet) using VBA - How to use OFFSET Property with the Range Object or a Cell in VBA - VBA Wrap Text (Cell, Range, and Entire Worksheet) - How to Copy a Cell\Range to Another Sheet using VBA - How to use Range/Cell as a Variable in VBA in Excel - How to Find Last Rows, Column, and Cell using VBA in Excel - How to use ActiveCell in VBA in Excel - How to use Special Cell Method in VBA in Excel - How to Apply Borders on a Cell using VBA in Excel - How to Refer to the UsedRange using VBA in Excel - How to Change Row Height/Column Width using VBA in Excel - How to Select All the Cells in a Worksheet using a VBA Code - How to Insert a Row using VBA in Excel - How to Insert a Column using VBA in Excel
In line with the 2014 National Curriculum for Computing, our aim is to provide a high-quality computing education which equips children to use computational thinking and creativity to understand and change the world. The curriculum will teach children key knowledge about how computers and computer systems work, and how they are designed and programmed. Learners will have the opportunity to gain an understanding of computational systems of all kinds, whether or not they include computers. By the time children leave Stock Church of England Primary, they will have gained key knowledge and skills in the three main areas of the computing curriculum: computer science (programming and understanding how digital systems work), information technology (using computer systems to store, retrieve and send information) and digital literacy (evaluating digital content and using technology safely and respectfully). The objectives within each strand support the development of learning across the key stages, ensuring a solid grounding for future learning and beyond. At Stock Church of England Primary School, computing is taught weekly across the term. This ensures children are able to demonstrate their knowledge consistently throughout the term. Teachers use Purple Mash to construct and teach our lessons, which are often richly linked to engaging contexts in other subjects and topics. We have a set of 30 laptops and 10 iPads to ensure that all year groups have the opportunity to use a range of devices and programs for many purposes across the wider curriculum, as well as in discrete computing lessons. Employing cross-curricular links motivates pupils and supports them to make connections and remember the steps they have been taught. We also have a set of 12 Spheros that the children can use to develop their coding knowledge and experience in a more practical manner. The implementation of the curriculum also ensures a balanced coverage of computer science, information technology and digital literacy. The children will have experiences of all three strands in each year group, but the subject knowledge imparted becomes increasingly specific and in-depth, with more complex skills being taught, thus ensuring that learning is built upon. For example, children in Key Stage 1 learn what algorithms are, which leads them to the design stage of programming in Key Stage 2, where they design, write and debug programs, explaining the thinking behind their algorithms. Our approach to the curriculum results in a fun, engaging, and high-quality computing education. The quality of children’s learning is evident on Purple Mash, a digital platform where pupils can share and evaluate their own work, as well as that of their peers. Evidence such as this is used to feed into teachers’ future planning, and as a topic-based approach continues to be developed, teachers are able to revisit misconceptions and knowledge gaps in computing when teaching other curriculum areas. This supports varied paces of learning and ensures all pupils make good progress. Much of the subject-specific knowledge developed in our computing lessons equips pupils with experiences which will benefit them in secondary school, further education and future workplaces. From research methods to the use of presentation and creative tools and critical thinking, computing at Stock Primary gives children the building blocks that enable them to pursue a wide range of interests and vocations in the next stage of their lives.
Maths / Numeracy - English / Literacy - Science with Revision Active and passive verbs, words and phrases. Active whiteboard learning Literacy, numeracy and science activities for teaching and learning, activities to classify shapes in numeracy. Activities to help children learn shape classification. Ideal for teaching present perfect tense. adaptable matching pairs game Adding doubles or near doubles. Addition and subtraction facts showing that addition can be done in any order, additions as inverse of subtraction and the addition of pairs to 100. Adverbs for KS2 covers year 3, year 4, year 5 and year 6 Basic skills in science, numeracy and literacy using quiz and games. Here is a website where you can buy childrens educational software and many software titles for schools. Childrens educational software which helps with learning to read, spell and revision. Our software covers literacy, numeracy and science topics presented in an interesting yet fun format. There are thousands of objectives covered in great detail some areas covered include word and sentence level work, suffixes and prefixes, high frequency words, speech sounds, verbs, cvc words. Times ( x ) tables ordinal numbers, addition, multiplication, subtraction and division, fractions and decimals all have working examples. Edify science experiments with good theory of many KS1 and KS2 objectives, examples are gases, friction, pushes and pulls, forces and motion. These teaching and learning exercises make fantastic children lesson warm ups. Use these activities at Christmas to add fun to learning real curriculum objectives in literacy, numeracy and science lessons. Classroom games and activities. Common words and phrases, including prefixes, suffixes and their spelling used by children Comparing adverbs and adjectives, comparative verbs and comparative and superlative words list. Examples of high frequency, compound and complex words and sentences; details of connectives and how to consolidate word useage. Our software is based on objectives from curriculum 2000 and can be purchased using elc's from curriculum on-line. Division can be found in both literacy ( as division word problems ) and in numeracy as grouping or sharing and division and multiplication problems. Early learning games ( software packages ) for schools covering maths, english and science. easy science activities for children. educational games, icebreakers, activities, software, resources covering numeracy ( maths ), literacy ( english ), and science for children attending primary school ( age 4 to age 11 ) Key stage 1 ( KS1 ) and key stage 2 ( KS2 ). phonemes, everyday expressions ew ue ow oo words. Phrases that link sentences, understanding pluralisation Although our software has a serious aim it is presented in a fun way, as a game that children play. These games are ideal teaching and learning activities and are used upto and including 11 year olds. Educational games ideal in the classroom as a group activity or one to one for special needs. Approved for use at home to help with homework - covers maths science and english. High frequency words, spelling patterns and multiple meaning words applicable from reception through KS1 and KS2. How to identify and explain active and passive verbs. How to identify adverbs, adjectives and pronouns in literacy. Inclusion of ICT in the primary classroom offers good cross-curricular focus for numeracy, literacy and science from reception through KS1 to KS2. 
Interactive teaching and learning software - specifically written with interactive whiteboards ( IWB ) in mind. Children can take part in hundreds of activities covering interactive maths, interactive science and interactive literacy games. These interactive learning packages are suitable for children from reception through KS1 and KS2 - each activity is part of a designed suite of exercises designed by teaching staff for real classroom teaching. Lessons can be planned to include a specific warm up activity from the suite of applications. Use our software to investigate and classify words, to consider meanings and spellings of comparatives as well as looking at irregular vowel and word patterns. Interactive whiteboards are especially useful in the classroom as a teaching resource these IWB 's can be fully utilised using any of our software applications which have all been designed with the IWB in mind. Currently we cover literacy numeracy and science from reception KS1 and KS2. Every application written by us is targeted at junior schools and their equivalent covering junior maths, junior science and junior literacy. Keystages ( KS ) covered are KS1 and KS2. there is particular emphasis placed on SAT years so there is a lot of focus on revision work for year 2 and year 6. All topics covered are from the national literacy the national numeracy and curriculum 2000 units of work. Written with kids in mind each learning game written is fun, educational and interactive. Strong focus is placed on science, english and mathemetics (our core subjects). We have many key stage 1 ( KS1 ) applications to cover KS1 maths KS1 english and KS1 science. We have many key stage 2 ( KS2 ) applications to cover KS2 maths KS2 english and KS2 science At foundation stage we cover literacy and numeracy Literacy also covers teaching 1st 2nd and 3rd person techniques, initial and final letter sounds, root words, igh and i-e words and literacy words with prefixes. Numeracy also covers doubles and near doubles, simple percentages adding to 100, how to solve simple fractions, mean range and mode activities. Rounding to the nearest 10, 100 or 1000 as well as a full suite of revision exercises. Science also covers gases, why we need to exercise, use of our muscles as well as seed germination and dispersal. An ideal teaching and learning resource with hundreds of activities for infants and juniors to make teaching easier for teachers. These resources cover literacy, numeracy and science. Lesson activities for PC or interactive whiteboard use cover a host of curriculum objectives. Literacy lessons can be aided with the inclusion of our range of software for key stage 1 and key stage 2. Literacy and ICT, numeracy and ICT as well as science and ICT can benefit from our vast experience in educational software. Literacy hour software can be used as an interactive resource for literacy hour presented via an interactive whiteboard. long a e ee I ai spelling patterns as well as long vowel pattern words. In addition there are long vowel phoneme ie 1-e oe words and spelling lists. match mate is an adaptable matching pairs game. Maths help comes from our educational software covering know by heart multiples, number names and recite them, inverse operations, mathematical sentence, interactive maths. Maths software covers reception, KS1 and KS2 its ideal for junior and primary schools. Maths software is interactive and works well on an interactive whiteboard. Maths lessons include revision. Mental maths are ideal for lesson warm ups. 
Multiplication and multiples are included especially useful as a lesson warm up or a plenary. The national numeracy strategy is used as the basis of our educational maths software. Numeracy software is used as mental warm up and development across the national curriculum. Numeracy software is ideal for numeracy hour or any lesson where numeracy matters. Our numeracy resources are available online from reception KS1 and KS2. Numeracy tests are useful for revision and numeracy development. As an ice breaker these numeracy resources are proving to be a valuable tool for the classroom. Our software covers the use of phonemes and phonic teaching. Used as a plenary this interactive educational software is ideal for maths english and science. Word prefixes and suffixes are presented with appropriate examples. These resources are very visual, their interactive nature aids teaching. Our resources make extensive use of vowel patterns and phonemes. Each activity is an ideal lesson warm up as a game for literacy numeracy or science. This is a website where you can find interactive educational PC software for kids. This software offers fun games for science maths and english for children up to the age of 11. Whiteboards are now commonplace within our schools classrooms and make an excellent resource - we have developed a suite of software packages for schools to be used on an interactive whiteboard. Word level work for literacy includes problem solving, spelling patterns, phonenes and word patterns beginning or ending. Prefixes and suffixes are all covered within this suite of educational software. We have a suite of resources available to cover literacy numeracy and science for year 1 year 2 year 3 year 4 year 5 and year 6.
The meteorite could give vital information on the history of the solar system. NASA’s Curiosity rover, currently roaming the surface of Mars, has found a rare iron-nickel meteorite that’s thought to have fallen from the skies of the red planet. Curiosity discovered the strange, golf-ball-sized “egg rock” and examined it using a spectrometer; it has now been confirmed as a type of meteorite commonly found on Earth. The researchers say it isn’t unusual to find such meteorites on Mars either, but as it’s the first discovery of its kind there, examining it may lead to new revelations and a better understanding of the workings of the solar system. “Iron meteorites provide records of many different asteroids that broke up, with fragments of their cores ending up on Earth and on Mars,” Dr. Horton Newsom, a researcher from the University of New Mexico, said in a statement. “Mars may have sampled a different population of asteroids than Earth has.” Dr Newsom works with the team that controls Curiosity’s Chemistry and Camera (ChemCam) instrument, used to identify objects on its mission. It was this instrument that identified the meteorite by analyzing the rock’s chemical composition with a laser. The meteorite was discovered in an area called the Murray formation in lower Mount Sharp, and Curiosity will continue to explore the area in order to further understand Mars’ environment and analyse whether or not life could once have been prevalent on the planet. As for the meteorite dubbed “Egg Rock”, researchers will continue to look at ChemCam data and compare its external surface chemistry with its internal composition. This will hopefully shed new light on the mysteries of the solar system.
Cold water mirage Titanic. Light does not travel in straight lines in the earth’s atmosphere. It bends according to the density of the air it is travelling through. Normally, the air near the earth is warmer than the air higher up, because air pressure is higher near the surface of the earth and air cools as it expands in the lower pressure of the atmosphere, higher up. This common situation results in standard refraction, where light is bent a normal amount. But when the air near the earth’s surface is unusually hot, such as in the desert or on a hot road, light bends more than normal towards the cooler air higher up. This has the effect of making the sky just above the horizon look like it is on the ground and our brains interpret this as water. Because distant objects appear lower than normal in these conditions, this is called an inferior mirage. Diagram of light rays in standard refraction: Diagram of light rays in an inferior mirage: These ray-bending diagrams are drawn by Dr. Andrew T. Young of San Diego State University. “Hobs” means “Height of Observer”. The most common example of this is the hot road mirage, where there appears to be water on the road ahead, on a hot, dry day: Hot road mirage © Physics Department, Warren Wilson College Another classic example of this is the desert mirage, which appears like a lake of water in a dry desert, as can be seen in the following photograph by mirage photographer Ed Darack: Most people are aware of this type of mirage, but what most people do not know is that the opposite occurs when the surface of the earth is unusually cold, such as over a very cold sea. In this situation the air near the surface is colder than the air higher up. Because this is the opposite of the normal thermal situation in the earth’s atmosphere, this is called a thermal inversion. In this situation temperature rises with height and light bends towards the cooler, denser air near the surface, bending it down, around the curvature of the earth. This is known as “looming”, which has the effect of making a distant object, such as the coast, appear higher – and therefore seem to be nearer – than it really is: Exaggerated diagram of a looming coastline, seen from a ship. But where the thermal inversion is so steep that the downward bending of light rays exceeds the curvature of the earth, light becomes trapped in what is known as a “duct”, where escaping light beams are continually bent back down, towards the earth, crossing and re-crossing each other as they go around the earth, trapped in the duct. This is known as a superior mirage, because here objects appear both higher than normal, but also distorted, with objects within the duct becoming inverted – and inverted again and again – each time the rays cross and re-cross each other: Dr. Andrew T. Young Superior Mirage Diagram. Note the rays crossing within the grey “duct”. ‘Hobs’ means Height of observer. The photograph below is a superior mirage of the coast around the small Swedish village of Höganäs, taken from a boat on the Danish coast, near Osterrenden Bridge. The land seen is therefore 100km away, but it has been lifted up above the horizon and distorted by the superior mirage: Swedish coast seen at 100km range due to superior mirage © Captain Peter Superior mirages are most often experienced in high latitudes and wherever the sea surface temperature is exceptionally low. 
Titanic’s officers were well aware of abnormal refraction and the phenomenon of looming, as her Second Officer Charles Lightoller explained at the British Inquiry into the sinking: “The man may, on a clear night, see the reflection of the light before it comes above the horizon. It may be the loom of the light and you see it sometimes sixty miles away.” The opposite of looming is known as ‘sinking’, where objects which would normally be seen above the horizon are hidden below it. And because light rays bend downwards at different rates, the apparent height of objects seen will differ according to the degree of bending of the light. The vertical stretching of objects is known as ‘towering’ and the vertical compression of objects is known as ‘stooping’, as demonstrated in the following series of photographs of the same lighthouse under different thermal conditions, by Pekka Parviainen: Looming, Stooping and Miraging lighthouse © Pekka Parviainen And because these light rays are bent down and reflected at different rates and points in the duct, they cross over each other, causing highly distorting effects, such as can also be seen in this trio of photographs of the same ship, taken by mirage photographer Pekka Parviainen: A distorting ship in a superior mirage. © Pekka Parviainen See also this summary entry on my site. I hope that by blogging chapters from my book, A Very Deceiving Night, I will contribute to the ongoing discussions regarding the atmospheric conditions on the night of the tragedy and the true causes of the disaster. At the moment, the book is only available as an e-book. If you wish to purchase it then you can do so in Amazon Kindle format here and other formats, including Apple, Kobo and Nook, here. Thank you.
With independence from Great Britain in 1776, the Commonwealth of Massachusetts was governed by the same bicameral legislature that existed during the colonial period. It was not until 1780 that John Adams, armed with a statewide mandate for a constitutional convention, set about drafting a formal state constitution. What Adams forged proved so successful that it later became a template for the Constitution of the United States. What made the 1780 Massachusetts constitution so influential was how it seemingly balanced the populist ideals promised to the citizenry by the Revolution with the fundamentally conservative expectations of the existing Massachusetts elite. In terms of structure, it established an elective chief magistrate (the governor), a bicameral legislature (the General Court made up of a House and a Senate), and an independent judiciary (an appointed state court system). Also, Adams included a declaration of rights to ensure civil liberties (as well as his brainchild's ratification). Although ratified by town meetings throughout the commonwealth, the document was fundamentally conservative in that it secured the ruling elite's control over the state by giving disproportionate power to the wealthy coastal counties of Suffolk and Essex. Not surprisingly, the 1780 constitution became the darling of the Federalist Party establishment that fought to resist constitutional reform. In opposition, the Democratic-Republicans chafed at the propertied basis for representation in the Senate, which gave an eastern county like Suffolk six senators to Berkshire's two, despite the fact that Berkshire had a larger population. Also, the Democratic-Republicans, whose popular base was in the western part of the state and tended to be of modest means, despised the pecuniary qualifications for the franchise, as well as the nonelected judiciary, claiming both were profoundly undemocratic. In 1820 the opponents of the 1780 constitution had their chance when the Maine district of Massachusetts was broken off and given statehood. As a result of such radical change, the General Court called for a constitutional convention to revisit the constitution of 1780. Despite optimistic expectations for major constitutional reform, an assortment of conservatives, led by a highly sophisticated Federalist Party machine, outwitted the forces of reform at the convention, and little significant change was effected. Power remained centralized in the east, with Boston serving as its epicenter. Although the state constitutional convention proved a great victory for the Federalist establishment, in the early 1820s the party faced an angry populist insurgency fed up with the dictatorial leadership style of the Federalists. In Boston a third party, the Middling Interest, emerged that rejected the deferential nature of past politics and took up an activist stand for reform. In the mayoral election of 1822, the insurgency forced Federalist Party boss Harrison Gray Otis to bow out of the race and elected a Middling Interest candidate, thus marking the demise of the Federalist Party in Massachusetts. Although it still existed in name for a few more years, the party never regained its once dominant position in Massachusetts political life, signaling the advent of the Jacksonian Age and the Second Party System.
- Banner, James M., Jr. To the Hartford Convention: The Federalist and the Origins of Politics in Massachusetts. New York: Knopf, 1970.
- Brooke, John L.
The Heart of the Commonwealth: Society and Political Culture in Worcester County, Massachusetts, 1713–1861. Cambridge: Cambridge University Press, 1989.
- Brown, Richard D. and Jack Tager. Massachusetts: A Concise History. Amherst: University of Massachusetts Press, 2000.
- Cayton, Andrew R. L. "The Fragmentation of 'A Great Family': The Panic of 1819 and the Rise of the Middling Interest in Boston, 1818–1822," Journal of the Early Republic, 2 (Summer 1982), 143–167.
- Clark, Christopher. The Roots of Rural Capitalism: Western Massachusetts, 1780–1860. Ithaca, NY: Cornell University Press, 1990.
- Crocker, Matthew H. The Magic of the Many: Josiah Quincy and the Rise of Mass Politics in Boston, 1800–1830. Amherst: University of Massachusetts Press, 2000.
- Crocker, Matthew H. "'The Siege of Boston is once more raised': Municipal Politics and the Collapse of Federalism, 1821–1823," in Massachusetts Politics: Selected Essays, ed. Jack Tager, Martin Kaufman, and Michael F. Konig. Westfield, MA: Institute for Massachusetts Studies Press, 1998, pp. 52–71.
- Dalzell, Robert F., Jr. Enterprising Elite: The Boston Associates and the World They Made. Cambridge, MA: Harvard University Press, 1987.
- Fischer, David Hackett. The Revolution of American Conservatism: The Federalist Party in the Era of Jeffersonian Democracy. New York: Harper Torchbooks, 1965.
- Formisano, Ronald P. The Transformation of Political Culture: Massachusetts Parties, 1790s–1840s. New York: Oxford University Press, 1983.
- Handlin, Oscar and Mary Flug Handlin. Commonwealth: Study of the Role of Government in the American Economy, 1774–1861, rev. ed. Cambridge, MA: Belknap Press of Harvard University Press, 1969.
- Hartford, William F. Money, Morals, and Politics: Massachusetts in the Age of the Boston Associates. Boston: Northeastern University Press, 2001.
- McCaughey, Robert A. Josiah Quincy, 1772–1864: The Last Federalist. Cambridge, MA: Harvard University Press, 1974.
- Morison, Samuel Eliot. Harrison Gray Otis, 1765–1848: The Urbane Federalist. Boston: Houghton Mifflin, 1969.
- Morison, Samuel Eliot. The Maritime History of Massachusetts, 1783–1860. Boston: Houghton Mifflin, 1961.
- Peterson, Merrill D., ed. Democracy, Liberty, and Property: The State Constitutional Conventions of the 1820's. New York: Bobbs-Merrill, 1966.
- Sheidley, Harlow W. Sectional Nationalism: Massachusetts Conservative Leaders and the Transformation of America, 1815–1836. Boston: Northeastern University Press, 1998.
- Smith, Page. John Adams: 1784–1826, Vol. II. Garden City, NY: Doubleday, 1962.
- Story, Ronald. Harvard and the Boston Upper Class: The Forging of an Aristocracy, 1800–1870. Middletown, CT: Wesleyan University Press, 1980.
- Wilkie, Richard W. and Jack Tager, eds. Historical Atlas of Massachusetts. Amherst: University of Massachusetts Press, 1991.
The Federalist Party
The Federalist Party was dominated by a man who never actually ran for public office in the United States - Alexander Hamilton. "Alexander Hamilton was, writes Marcus Cunliffe, 'the executive head with the most urgent program to implement, with the sharpest ideas of what he meant to do and with the boldest desire to shape the national government accordingly.' In less than two years he presented three reports, defining a federal economic program which forced a major debate not only on the details of the program but on the purpose for which the union has been formed.
Hamilton's own sense of purpose was clear; he would count the revolution for independence a success only if it were followed by the creation of a prosperous commercial nation, comparable, perhaps even competitive, in power and in energy, with its European counterparts." (fn: Marcus Cunliffe, The Nation Takes Shape, 1789-1837, (Chicago, 1959), 23.) (Linda K. Kerber, History of U.S. Political Parties Volume I: 1789-1860: From Factions to Parties. Arthur M. Schlesinger, Jr., ed. New York, 1973, Chelsea House Publisher. p. 11) "Federalists created their political program out of a political vision. They had shared in the revolutionaries' dream of a Republic of Virtue, and they emerged from a successful war against empire to search for guarantees that the republican experiment would not collapse." (Kerber, p. 3) "The Federalist political demand was for a competent government, one responsible for the destiny of the nation and with the power to direct what that destiny would be. What was missing in postwar America, they repeatedly complained in a large variety of contexts, was order, predictability, stability. A competent government would guarantee the prosperity and external security of the nation; a government of countervailing balances was less likely to be threatened by temporary lapses in civic virtue, while remaining strictly accountable to the public will." (Kerber, p. 4) "So long as Federalists controlled and staffed the agencies of the national government, the need to formulate alternate mechanisms for party decision making was veiled; with a Federalist in the White House, Federalists in the Cabinet, and Federalist majorities in Congress, the very institutional agencies of the government would themselves be the mechanism of party. Federal patronage could be used to bind party workers to the Federalist 'interest.' 'The reason of allowing Congress to appoint its own officers of the Customs, collectors of the taxes and military officers of every rank,' Hamilton said, 'is to create in the interior of each State, a mass of influence in favor of the Federal Government.' (fn: Alexander Hamilton, 1782, quoted in Lisle A. Rose, Prologue to Democracy: The Federalists in the South, 1789-1800, (Lexington, Kentucky, 1968), 3.) Federalists thought of themselves as a government, not as a party; their history in the 1790's would be the history of alignments within the government, rather than of external alignments which sought to influence the machinery of government." (Kerber, p. 10) "Major national issues invigorated the process of party formation; as state groups came, slowly and hesitantly, to resemble each other. The issues on which pro-administration and anti-administration positions might be assumed increased in number and in obvious significance; the polarity of the parties became clearer." (Kerber, p. 11) "As Adams' presidential decisions sequentially created a definition of the administration's goals as clear as Hamilton's funding program had once done, the range of political ideology which called itself Federalist simply became too broad for the party successfully to cast over it a unifying umbrella. Federalists were unified in their response to the XYZ Affair, and in their support of the Alien and Sedition Acts, which passed as party measures in the Fifth Congress, but in little else.
The distance between Adams and Hamilton - in political philosophy, in willingness to contemplate war with France, in willingness to manipulate public opinion - was unbridgeable; Hamilton's ill-tempered anti-Adams pamphlet of 1800 would be confirmation of a long-established distaste." (Kerber, p. 14) "One result of the war was to add to Federalist strength and party cohesion. There were several varieties of Federalist congressional opinion on the war: most believed that the Republicans had fomented hard feeling with England so that their party could pose as defender of American honor; many believed that in the aftermath of what they were sure to be an unsuccessful war the Republicans would fall from power and Federalists would be returned to office . . . Regardless of the region from which they came, Federalists voted against the war with virtual unanimity." (Kerber, p. 24) "As an anti-war party, Federalists retained their identity as an opposition well past wartime into a period that is usually known as the Era of Good Feelings and assumed to be the occasion of a one party system. In 1816, Federalists 'controlled the state governments of Maryland, Delaware, Connecticut and Massachusetts; they cast between forty percent and fifty percent of the popular votes in New Jersey, New York, Rhode Island, New Hampshire and Vermont...Such wide support did not simply vanish...' (fn: Shaw Livermore, Jr. The Twilight of Federalism: The Disintegration of the Federalist Party 1815-1830, (Princeton, 1962), 265.) Rather, that support remained available, and people continued to attempt to make careers as Federalists (though, probably fewer initiated new careers as Federalists). Because men like Rufus King and Harrison Gray Otis retained their partisan identity intact, when real issues surfaced, like the Missouri debates of 1820, a 'formed opposition' still remained to respond to a moral cause and to oppose what they still thought of as a 'Virginia system.' Each of the candidates, including Jackson in the disputed election of 1824 had Federalist supporters, and their presence made a difference; Shaw Livermore argues that the central 'corrupt bargain' was not Adams' with Clay, but Adams' promise of patronage to Federalists which caused Webster to deliver the crucial Federalist votes that swung the election. If the war had increased Federalist strength, it also, paradoxically, had operated to decrease it, for prominent Federalists rallied to a beleaguered government in the name of unity and patriotism. These wartime republicans included no less intense Federalists than Oliver Wolcott of Connecticut and William Plumer of New Hampshire, both of whom went on to become Republican governors of their respective states, and in their careers thus provide emblems for the beginning of a one party period, and the slow breakdown of the first party system." (Kerber, p. 24) "The dreams of the Revolution had been liberty and order, freedom and power; in seeking to make these dreams permanent, to institutionalize some things means to lose others. The Federalists, the first to be challenged by power, would experience these contradictions most sharply; a party that could include John Adams and Alexander Hamilton, Charles Cotesworth Pinckney and Noah Webster, would be its own oxymoron. In the end the party perished out of internal contradiction and external rival, but the individuals who staffed it continued on to staff its successors." (Kerber, p. 25) - History of U.S.
Political Parties Volume I: 1789-1860: From Factions to Parties. Arthur M. Schlesinger, Jr., ed. New York, 1973, Chelsea House Publisher. - The Revolution of American Conservatism: The Federalist Party in the Era of Jeffersonian Democracy. David Hackett Fischer. New York, 1965, Harper and Row. - The Age of Federalism: The Early American Republic, 1788-1800. Stanley Elkins and Eric McKitrick. New York, 1993, Oxford University Press. The Federalists were referred to by many monikers over the years by newspapers. - In 1809, The Concord Gazette refers to the Federalist Ticket as the American Ticket. - Beginning in 1810, the Newburyport Herald (MA), began referring to Federalists as the American Party (as opposed to the "French" Party, who were Republicans). This continued in the 1811 elections. The Aurora, based in Philadelphia, the most well-known Republican newspaper of the era (see American Aurora: A Democratic-Republican Returns by Richard N. Rosenfeld.) in the February 11, 1800 issue referred to Mr. Holmes, the losing candidate for the Special Election for the Philadelphia County seat in the House of Representatives as an "anti-republican". The October 7, 1799 issue of the Maryland Herald (Easton) referred to the Federalist ticket of Talbot County as Federal Republicans. It would continue to be used intermittently throughout the next 20 years. Newspapers that used this term included the Gazette of the United States (Philadelphia) and Philadelphia Gazette in 1800, the Newport Mercury in 1808, the New Bedford Mercury in 1810, the True American (Philadelphia) in 1812, the Northumberland Republican (Sunbury) in 1815, the United States Gazette (Philadelphia) in 1816 and the Union (Philadelphia) in 1821 and 1822. Friends of Peace / Peace / Peace Ticket: Beginning in 1812 ("In laying before our readers the above Canvass of this county, a few remarks become necessary, to refute the Assertion of the war party, that the Friends of Peace are decreasing in this country." Northern Whig (Hudson). May 11, 1812.) and continuing through to 1815 a number of newspapers referred to the Federalists as the Peace Party (or Peacemaker Party, as the Merrimack Intelligencer (Haverhill) of March 19, 1814 used), as the Peace Ticket or as the Friends of Peace due to their opposition of the War of 1812 (many of these same newspapers referred to the Republicans as the War Party). This use occurred all through at least August of 1815, with the Raleigh Minerva of August 18, 1815 referring to the Federalist candidates as Peace candidates. These newspapers include the Columbian Centinel (Boston), Merrimack Intelligencer (Haverhill), Providence Gazette, the New York Evening Post, the New York Spectator, the Commercial Advertiser (New York), Northern Whig (Hudson), the Broome County Patriot (Chenango Point), the Independent American (Ballston Spa), the Baltimore Patriot, the Alexandria Gazette, Poulson's, Middlesex Gazette (Middletown), the Political and Commercial Register (Philadelphia), Freeman's Journal (Philadelphia), the Carlisle Herald, Northampton Farmer, Intelligencer and Weekly Advertiser (Lancaster), National Intelligencer (Washington), The Federal Republican (New Bern), the Raleigh Minerva, The Star (Raleigh) and Charleston Courier. The New Hampshire Gazette (Portsmouth) took the opposite side, listing the Federalists in the March 16, 1813 edition as "Advocates of Dishonorable Peace and Submission." "The Tyranny of Printers": Newspaper Politics in the Early American Republic. Jeffrey L. Pasley. 
Charlottesville, 2001, University Press of Virginia.
U.S. House of Representatives
House of Representatives: the lower or popular house of the United States Congress.
1788 - 1826: Alabama, Connecticut, Delaware, Georgia, Illinois, Indiana, Kentucky, Louisiana, Maine, Maryland, Massachusetts, Mississippi, Missouri, New Hampshire, New Jersey, New York, North Carolina, Ohio, Pennsylvania, Rhode Island, South Carolina, Tennessee, Vermont, Virginia
Office Scope: Federal
Role Scope: District / State
Historical Note: The following states had a Role Scope of State at various times because they only had one member in the U.S. House of Representatives:
- Alabama (1819, 1821)
- Delaware (1789 - 1810, 1822, 1824)
- Illinois (1818 - 1824)
- Indiana (1816 - 1820)
- Louisiana (1812 - 1820)
- Mississippi (1817 - 1824)
- Missouri (1820 - 1824)
- Rhode Island (1790)
- Tennessee (1796 - 1801)
Historical Note: The following states had a Role Scope of State at various times because they elected their members at-large and each Representative served the entire state instead of a specific district:
- Connecticut (1790 - 1824)
- Delaware (1812 - 1822)
- Georgia (1789 - 1824)
- New Hampshire (1790 - 1824)
- New Jersey (1789 - 1796, 1800 - 1810, 1814 - 1824)
- Pennsylvania (1788, 1792)
- Rhode Island (1792 - 1825)
- Vermont (1812 - 1818, 1822)
Definition (from Wikipedia): A natural number is called a prime number (or a prime) if it is bigger than one and has no divisors other than 1 and itself. For example, 5 is prime, since no number except 1 and 5 divides it. On the other hand, 6 is not a prime (it is composite), since 6 = 2 * 3. The input file DATA2.txt will contain 5 lines, each line having an integer N with 5 <= N <= 10000. The output file OUT2.txt will contain 5 lines. Each line contains the sum of the primes less than the corresponding N in DATA2.txt.
Sample input (DATA2.txt):
5
10
1000
Sample output (OUT2.txt):
5
17
76127
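A short illustrative solution, not part of the original problem statement, is sketched below in Python. It sieves all primes up to the largest N with the Sieve of Eratosthenes and writes one sum per input line; the file names DATA2.txt and OUT2.txt come from the problem, while everything else (reading all lines at once, one result per output line) is an assumption about how the grader expects its output.

# Sketch of a solution: sum of the primes strictly less than each N in DATA2.txt.

def primes_up_to(limit):
    """Sieve of Eratosthenes: return a list of all primes <= limit."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, limit + 1, p):
                sieve[multiple] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def main():
    with open("DATA2.txt") as f:
        values = [int(line) for line in f if line.strip()]

    primes = primes_up_to(max(values))
    with open("OUT2.txt", "w") as out:
        for n in values:
            out.write(str(sum(p for p in primes if p < n)) + "\n")

if __name__ == "__main__":
    main()

For the sample above, the sums are 2 + 3 = 5 for N = 5, 2 + 3 + 5 + 7 = 17 for N = 10, and 76127 for N = 1000.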
The function of horned beetles' wild protrusions has been a matter of some consternation for biologists. Digging seemed plausible; combat and mate selection, more likely. Even Charles Darwin once weighed in on the matter, suggesting -- one imagines with some frustration -- the horns were merely ornamental. In this month's American Naturalist (Dec. 2006) and the Nov. 2006 issue of Evolution, Indiana University Bloomington scientists present an entirely new function for the horns: during their development, Onthophagus horned beetles use their young horns as a sort of can opener, helping them bust out of thick larval shells. The finding will surprise anyone who assumed hornless Onthophagus adults (usually the females) never form the horns in the first place. They do, the scientists say, but the nubile horn tissue is reabsorbed before the beetles' emergence as adults. "The formation of horns by beetle pupas that soon lose them just doesn't seem to make sense, so obviously we were intrigued," said IU Bloomington evolutionary biologist Armin Moczek, lead author of both papers. "It appears these pre-adult horns are not a vestigial type of structure, which many of us thought was the case. Instead we have shown these horns actually serve an important function regardless of whether they are resorbed in the pupal stage or maintained into the adult." Because all the Onthophagus beetles the scientists examined form horns during development, Moczek and colleagues also argue the evolution of ornate horns in the adult beetles may actually have happened second -- that is, some time after their initial evolution as larval molting devices. In the Evolution report, the scientists examined literature describing the evolutionary relationship of 47 Onthophagus species. They also studied the development of eight beetle species in the laboratory (seven Onthophagus species and one species from the closely related but hornless genus Oniticellus). The scientists found that all seven Onthophagus species examined in the laboratory develop horns during their larval and pupal development. That finding should instigate a complete revision of the evolutionary history of Onthophagus beetles, which are largely categorized according to their adult shapes with little or no heed given to the quirks of the beetles' development. Despite the growing presence of developmental biology in evolutionary studies, "Even today, evolutionary theory is very much a theory of adults," Moczek said. "But evolution doesn't morph one adult shape into another. Instead there's an entire lifetime of development that we can't afford to ignore." Curious as to whether or not the horns had a function, the scientists destroyed the horn tissue of beetle larvae using electrosurgery: minute voltage arcs that permit precise destruction of targeted cells while nearby tissues are left intact and undamaged. With the larval horn tissue destroyed, the scientists observed most larvae were unable to break the husks of their larval head capsules, resulting in young adult hatchlings whose heads were tightly (and lethally) encased within larval helmets. Altered Oniticellus, on the other hand, had no trouble breaking free of their former exoskeletons. "It may be that these larval horns enabled Onthophagus beetles to grow a thicker carapace," Moczek said. "But it is also possible a thicker carapace made horns necessary. 
We are left with the commonly asked question in evolutionary developmental biology, 'Which came first?'" Most scientists have assumed the sexual dimorphism of some Onthophagus beetles was borne of differential growth; flamboyantly horned male beetles grow them, hornless females simply can't. But the American Naturalist report shows that even within sexually dimorphic horned beetle species, both sexes initially form the horns, even if one or both sexes reabsorb the horn tissue sometime before adulthood. In the American Naturalist report, Moczek examined the development of four Onthophagus species. Both species nigriventris and binodis exhibit typical sexual dimorphism -- adult males have horns, females are hornless. In sagittarius, the sexual dimorphism is reversed. Adult females and males of the fourth species Moczek examined, taurus, are both hornless. Despite the differences in adult appearance, all four species begin to grow horns as larvae -- regardless of sex, Moczek found. The hornlessness of some adult beetles is therefore not the result of an inability to make horns, Moczek says, but the reshaping or reabsorption of horn tissue before the beetles become adults. "I think these findings illustrate quite clearly the importance of development to evolutionary biology," Moczek said. "By including studies of your organism's development, at the very least you stand to gain fundamental insights into its biology. More often than not, however, you may prevent yourself from making big mistakes when drawing up evolutionary histories. In this case, I think we did both." Research reported in both papers was supported by grants from the National Science Foundation and the Howard Hughes Medical Institute. Moczek, graduate student Tami Cruickshank and undergraduate student Andrew Shelby coauthored the Evolution paper. Moczek is the American Naturalist paper's sole author.
Doctors order basic blood chemistry tests to assess many conditions and learn how the body’s organs are working. Often, blood tests check electrolytes, the minerals that help keep the body's fluid levels in balance and which are necessary to help the muscles, heart, and other organs work properly. Blood chemistry tests also measure other substances that help show how well the kidneys are working and how well the body is absorbing sugars.
Tests for Electrolytes
Typically, tests for electrolytes measure levels of sodium, potassium, chloride, and bicarbonate in the body. Sodium plays a major role in regulating the amount of water in the body. Also, the passage of sodium in and out of cells is necessary for many body functions, like transmitting electrical signals in the brain and in the muscles. Sodium levels are measured to detect whether there's the right balance of sodium and liquid in the blood to carry out those functions. If a child becomes dehydrated (from vomiting, diarrhea, or other causes), the sodium levels can be too high or low, which can cause confusion, weakness, lethargy, and even seizures. Potassium is essential to regulating how the heart beats. Potassium levels that are too high or too low can increase the risk of an abnormal heartbeat (also called an arrhythmia). Low potassium levels are also associated with muscle weakness and cramps. Chloride, like sodium, helps maintain a balance of fluids in the body. Certain medical problems like dehydration, heart disease, kidney disease, or other illnesses can disrupt the balance of chloride. Testing chloride in these situations helps the doctor tell whether an acid-base imbalance is happening in the body. Bicarbonate prevents the body's tissues from getting too much or too little acid. The kidneys and lungs balance the levels of bicarbonate in the body. So if bicarbonate levels are too high or low, it might indicate a problem with the lungs or kidneys.
Other Substances Measured
Other blood substances measured in the basic blood chemistry test include blood urea nitrogen and creatinine, which tell how well the kidneys are functioning, and glucose, which indicates whether there is a normal amount of sugar in the blood. Blood urea nitrogen (BUN) is a measure of how well the kidneys are working. Urea is a nitrogen-containing waste product that's created when the body breaks down protein. If the kidneys are not working properly, the levels of BUN will build up in the blood. Dehydration, excessive bleeding, and severe infection leading to shock also can raise BUN levels. Creatinine levels in the blood that are too high can indicate that the kidneys aren't working properly. The kidneys filter and excrete creatinine; if they're not working as they should, creatinine can build up in the bloodstream. Both dehydration and muscle damage also can raise creatinine levels. Glucose is the main type of sugar in the blood. It comes from the foods we eat and is the major source of energy needed to fuel the body's functions. Glucose levels that are too high or too low can cause problems. The most common cause of high blood glucose levels is diabetes. Other medical conditions and some medicines also can cause high blood glucose.
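As a rough illustration of how such a panel might be screened, the Python sketch below flags values that fall outside a reference interval. The intervals used are commonly quoted adult figures included here only as assumptions; laboratories publish their own ranges, children's reference ranges differ, and interpreting results is always the job of a doctor.

# Illustrative only: flag basic blood chemistry values outside a reference interval.
# The intervals below are rough, commonly quoted adult values (assumptions for this
# sketch); real labs report their own ranges, and children's ranges differ.

REFERENCE_INTERVALS = {
    "sodium":      (135, 145),   # mmol/L
    "potassium":   (3.5, 5.0),   # mmol/L
    "chloride":    (98, 107),    # mmol/L
    "bicarbonate": (22, 28),     # mmol/L
    "BUN":         (7, 20),      # mg/dL
    "creatinine":  (0.6, 1.2),   # mg/dL
    "glucose":     (70, 100),    # mg/dL, fasting
}

def flag_results(results):
    """Return a list of (analyte, value, 'low' or 'high') for out-of-range values."""
    flags = []
    for analyte, value in results.items():
        low, high = REFERENCE_INTERVALS[analyte]
        if value < low:
            flags.append((analyte, value, "low"))
        elif value > high:
            flags.append((analyte, value, "high"))
    return flags

sample = {"sodium": 131, "potassium": 4.2, "chloride": 99,
          "bicarbonate": 20, "BUN": 14, "creatinine": 0.8, "glucose": 86}

for analyte, value, direction in flag_results(sample):
    print(f"{analyte}: {value} is {direction}")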
Schematic view of the Cherenkov effect (Windows to the Universe original image)
The Cherenkov Effect
The theory of relativity states that no particle can travel at the speed of light in a vacuum. However, light travels at lower speeds in dense media, like water. A particle traveling in water must have a speed less than the speed of light in a vacuum, but it is possible for it to move faster than the speed of light in water. If the particle is charged, it will emit radiation (light). This process is similar to the sonic boom heard when an airplane exceeds the speed of sound. Neutrino interactions with water can produce such particles. Sensitive light detectors measure this Cherenkov radiation in neutrino experiments.
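The threshold for this effect follows directly from the description above: a charged particle radiates only when its speed exceeds c/n, the speed of light in the medium. A minimal worked example in Python, assuming n ≈ 1.33 for water and an electron rest energy of 0.511 MeV (both standard values, used here purely for illustration), gives the minimum speed and the corresponding electron kinetic energy.

# Cherenkov threshold in water: a charged particle radiates when v > c/n.
import math

N_WATER = 1.33             # assumed refractive index of water
ELECTRON_REST_MEV = 0.511  # electron rest energy, MeV

beta_threshold = 1.0 / N_WATER                              # v/c at threshold
gamma_threshold = 1.0 / math.sqrt(1.0 - beta_threshold**2)  # Lorentz factor at threshold
kinetic_mev = (gamma_threshold - 1.0) * ELECTRON_REST_MEV

print(f"Threshold speed: {beta_threshold:.3f} c")
print(f"Electron kinetic energy at threshold: {kinetic_mev:.2f} MeV")
# With these assumptions, an electron needs roughly 0.26 MeV of kinetic energy
# before it produces Cherenkov light in water.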
New research says Saturn’s rings are a cosmological ‘blink and you’ll miss it’ moment. Scientists’ first inkling that Saturn’s rings aren’t a permanent structure came during Voyager 1 & 2’s observations decades ago. Now, a paper confirms the majestic rings of Saturn will eventually disappear. But don’t worry, Saturn will still be an incredible sight from your backyard telescope. While the gas giant’s ring system is a cosmological ‘blink and you’ll miss it’ moment, people will enjoy Saturn’s rings for generations to come. What’s happening? Gravity and Saturn’s magnetic field are creating a ‘ring rain.’ And it’s pulling in a whole bunch of water. UV light from the sun and plasma clouds caused by tiny meteor strikes charge the icy particles. They then become attracted to Saturn’s magnetic field, and the planet’s intense gravity does the rest. “We estimate that this ‘ring rain’ drains an amount of water products that could fill an Olympic-sized swimming pool from Saturn’s rings in half an hour,” said James O’Donoghue of NASA’s Goddard Space Flight Center in Greenbelt, Maryland. O’Donoghue, the lead author on the paper, added, “From this alone, the entire ring system will be gone in 300 million years, but add to this the Cassini-spacecraft measured ring-material detected falling into Saturn’s equator, and the rings have less than 100 million years to live. This is relatively short, compared to Saturn’s age of over 4 billion years.” Saturn’s rings look nice and smooth at a distance, but they are made up of countless pieces of ice ranging from dust-grain sized to boulders spanning several yards across. The new research also points to Saturn gaining its beautiful ring system well after becoming a gas giant. The rings may be no older than 100 million years. “We are lucky to be around to see Saturn’s ring system, which appears to be in the middle of its lifetime. However, if rings are temporary, perhaps we just missed out on seeing giant ring systems of Jupiter, Uranus and Neptune, which have only thin ringlets today!” says O’Donoghue. Okay, but how would Saturn acquire its rings? One theory points to small, icy moons around Saturn smashing into each other, with the debris from these collisions forming the rings we see today. There’s still more scientists can learn about Saturn’s ‘ring rain.’ As Saturn goes through multi-year long seasons (thanks to a 29.4-year orbit), its rings get varying exposure to the Sun. Since UV light is one of the main mechanisms for ‘ring rain’, how much it rains should change with the season. Nothing is forever, but losing Saturn’s rings stings. But who knows how the solar system will look in another 100 million years. A passing comet or asteroid might have set the pieces in motion for Saturn to get its rings. It could happen again.
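The quoted lifetime can be sanity-checked with a back-of-the-envelope estimate. The Python sketch below assumes a total ring mass of roughly 1.5 × 10^19 kg (approximately the value later measured by Cassini) and treats ‘an Olympic-sized swimming pool of water products every half hour’ as about 2.5 million kilograms; both figures are assumptions used for illustration, not numbers taken from the paper.

# Back-of-the-envelope lifetime of Saturn's rings from the quoted "ring rain" rate.

RING_MASS_KG = 1.5e19          # assumed total ring mass (roughly the Cassini estimate)
POOL_MASS_KG = 2.5e6           # ~2,500 m^3 Olympic pool of water, in kg (assumption)
SECONDS_PER_HALF_HOUR = 1800
SECONDS_PER_YEAR = 3.15e7

drain_rate_kg_per_s = POOL_MASS_KG / SECONDS_PER_HALF_HOUR
drain_rate_kg_per_year = drain_rate_kg_per_s * SECONDS_PER_YEAR
lifetime_years = RING_MASS_KG / drain_rate_kg_per_year

print(f"Drain rate: {drain_rate_kg_per_s:.0f} kg/s")
print(f"Estimated lifetime: {lifetime_years:.1e} years")
# With these assumptions the rings last a few hundred million years, consistent
# with the ~300-million-year figure quoted for the ring-rain estimate alone.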
Fruits from the eggplant (Solanum melongena) make a tasty meat substitute in sandwiches or as part of classic dishes like eggplant Parmesan. Growing these deep purple fruits requires attention to the surrounding climate, including both the air and soil temperature. Choosing between sunny, shady and greenhouse conditions plays a large part in determining your eggplant's success.
Begin in the Greenhouse
Eggplant seeds cannot germinate in cool soil. Beginning your eggplants in the greenhouse is the best way to start a successful plant. Soil kept within a warm range of 80 to 90 degrees Fahrenheit encourages germination and strong seedling growth. The seedlings should be at least 3 inches tall before you move them outdoors. The greenhouse environment also helps prevent flea beetle infestations, especially in the spring. By isolating the eggplants indoors until the summer, the pests do not have a chance to damage the sensitive seedlings.
Growing eggplants outdoors requires full sun. Try planting them on a south-facing area of your yard once they are larger than 3 inches. Ample sunlight provides the energy needed for large fruit production through photosynthesis. Instead of planting the eggplant in a basic garden bed, you may have better results with a raised bed. The isolated tilling of a dedicated raised bed provides more warmth for the eggplant as it grows. In addition, keeping the soil well-watered, with frequent watering to 12 inches deep, helps the plant grow healthy and tall.
Shade and Cold Effects
Cold and shade are the eggplant's worst enemies. Unexpected frosts easily stunt eggplant growth and directly affect fruiting ability. On the other hand, shading an eggplant affects the leaf photosynthesis. Without substantial sun, the leaves cannot grow bushy and absorb more light as the plant grows through the spring and summer. As a result, the eggplant does not have enough energy to create large fruits and may not fruit at all if it is in deep shade. These unhealthy plants become vulnerable to insect infiltration and pathogens if cold weather and shading persist.
Consider Container Growing
Containers allow you to control the soil and have the added benefit of being warmer than an in-ground planting site. If you choose a black container, the heat it absorbs from the sun can keep the soil as much as 10 degrees Fahrenheit warmer than if you planted directly in the ground. This additional warmth means more and larger fruits. And, if there's a cold snap, you can easily move the container indoors until warm weather returns. Containers add flexibility to your gardening, especially if your weather is not consistent.
Learn how builders use perimeter and area to build towns and cities! Perimeter and area are important measurements that help builders design homes, parks, and more! This charming title teaches children how cities are built, while giving them great examples to practice calculating perimeter and area, improving their mathematical skills through STEM themes. With easy-to-read text, engaging practice problems, clear mathematical diagrams, and an accessible glossary, this title gives readers everything they need to make these calculations with ease. By the end of this book, readers will feel very familiar with perimeter, area, and grid patterns. Have fun learning about perimeter, area, and grid patterns at amusement parks! This exciting title teaches children all about amusement parks and how they are built while incorporating important mathematical and STEM skills. With vibrant images, easy-to-read text, engaging practice problems, clear mathematical diagrams, and an accessible glossary, this title gives readers everything they need to calculate perimeter and area with ease. Learn all about calculating volume at the aquarium! Aquarium workers have to understand volume to design the best tanks for marine animals. Volume helps them determine how much water to put in a tank and how much food to feed the animals. Readers will learn about this and more, all while practicing volume calculation and mathematical formulas in a practical and fun way! With the help of eye-catching images, easy-to-read text, simple practice problems, clear mathematical charts, and an engaging story, this text will leave readers entertained and confident in their cubic measurement and STEM skills. This exciting title shows readers how important volume is to flying a hot air balloon properly. Explore the history of the hot air balloon and learn how these eye-catching balloons fly! For hot air balloons to fly properly, passengers have to calculate the volume of the basket to see which balloon will be able to carry them. With vibrant images, easy-to-read text, simple practice problems, and clear mathematical diagrams, this title will engage readers while simplifying important mathematical formulas and STEM themes. Learn how important data analysis is to crime scene investigations! Investigators examine data from fingerprints, blood samples, DNA samples, and lie detector tests to analyze and interpret crime scenes. This exciting title provides readers with engaging examples of data analysis and probability. Through eye-catching images, easy-to-read text, simple practice problems, and clear mathematical charts and diagrams, this title will leave readers more confident in their data analysis and STEM skills, all while showing them how the things they learn are used outside of the classroom in very important ways! Learn all about graphing in this informative title! This book helps readers understand how to graph a clear visual representation of data, encouraging mathematical and STEM skills. See strong examples of bar graphs, circle graphs, and line graphs, learning the best way to present data using these tools. Full of vibrant images, easy-to-read text, clear mathematical diagrams, and simple practice problems, this title will help readers improve their graphing skills, creating their best graphs yet! Percentages, decimals, and equivalent fractions are all very helpful tools in running a business--even a store in the mall! 
This informative title introduces readers to percentages, teaching them the basics of the concept of percentage and how to calculate them. Percentages can also be written as decimals or fractions! Children will learn how to use all three and convert them back and forth through helpful mathematical charts, engaging practice problems, STEM themes, an accessible glossary, and easy-to-read text! See how budgets, percentages, and fractions come into play while looking for a birthday present! This charming title will help readers understand percentages and STEM themes, encouraging them to create a budget and use percentages to calculate sale prices of various items. Helpful mathematical charts, engaging practice problems, and easy-to-read text make calculating percentages simple! The skills that readers learn in this book will be useful for a lifetime! Find the answers to all kinds of questions using mathematical equations! This title teaches readers that they can use their understanding of variables, expressions, and equations to answer questions about anything from food to space! Create an equation to calculate how much pizza two boys eat! Create an equation to calculate how many baby teeth a growing child has left! Mathematical equations can provide the answers to so many questions. This book shows readers how practical and useful their mathematical and STEM skills can be, encouraging them to look for math everywhere! With vibrant images, easy-to-read text, and simple practice problems, this title will make equations fun and easy! Learn how to create algebraic equations while traveling through our solar system! Introduce students to variables, expressions, and equations in this exciting title about the night skies. This book challenges students to learn more advanced mathematical and STEM skills, using exciting astronomy examples to keep readers engaged. Readers will practice familiar mathematical skills, like addition and subtraction, in a new way by forming equations! With eye-catching photos, easy-to-read text, and clear practice problems, this title makes mathematical concepts that could be seen as intimidating seem simple and fun instead! Great sports players understand angles! Readers are introduced to geometry in this exciting title that uses easy-to-read text, STEM topics, eye-catching photos, and engaging practice questions to teach children about basic geometrical concepts, including angles, lines of symmetry, perpendicular lines, and a vertex. Featuring challenging mathematical problems, a glossary, and useful index, this title will give readers all the tools they need to learn the winning angles! Practice graphing while cleaning up the school! In this engaging title, a summer storm leaves the school in a mess, so students work to clean it up. Young readers can practice their graphing and STEM skills by creating graphs of the items collected to determine what needs to be recycled. This book improves graphing skills and encourages students to help their schools in any time of need! With vibrant images, simple examples, clear charts, and helpful mathematical diagrams, this book will make children confident in their graphing skills. In this title, young readers must practice creating patterns with plants in order to maximize space in a small garden in the city, improving their mathematical and STEM skills. Vibrant images, practical examples, and helpful mathematical diagrams and charts engage readers while teaching them how simple it can be to use patterns in their daily life! 
Use patterns to plan school gardens! In this title, two groups of students are planting different gardens, but both are using patterns. This book encourages young readers to practice making patterns in a practical way, improving their mathematical and STEM skills. Vibrant images, practical examples, and helpful mathematical diagrams keep children engaged, showing them how patterns can be easy and fun to make! Take a trip to the world market for an exciting way to learn about standard measurement! This title takes young readers to markets around the world, showing them how to measure common food items with standard measurements. Some things are measured by length! Some things are measured by weight! Children will learn these and other measurement techniques through vibrant images, fun examples, and simple mathematical charts, enhancing their mathematical and STEM skills. Practice standard measurement at a farmers market! This engaging e-book teaches young readers how to measure common food items by their height, circumference, and more, improving their mathematical and STEM skills! Vibrant images, practical examples, and simple mathematical charts help children use standard measurements, showing them how they can use these measurements in their daily lives. From bikes to balloons, from ships to trains, readers will discover many different modes of transportation in this exciting title! The vibrant photos, captivating maps, and helpful mathematical diagrams and charts will have children excited to learn about transportation while being given opportunities to practice their developing mathematical and STEM skills through addition. Take a trip to the city and practice addition! This charming title follows the story of a family who visits the city for the day, discovering how many ways addition and early STEM skills are a part of transportation and daily life. Students will add ticket prices, travel times, passengers, taxis, and more! Vibrant images, fun practice problems, and helpful mathematical charts help young readers practice addition and early STEM themes. This book makes addition fun and encourages children to see what they can add in their daily lives! Practice two-digit addition and subtraction while planning a family reunion! This charming title encourages young readers to use subtraction skills and STEM concepts to help plan the reunion by determining how many people are attending and how much the family will need to accommodate everyone. Add up the tallies to determine where the reunion should be! Calculate how many children are attending by subtracting the number of adults from the number of total guests! Examples like these and more pair with helpful mathematical charts and vivid images to make useful skills like addition and subtraction seem easy and fun! Use subtraction to plan a harvest lunch! This charming title challenges young readers to practice two-digit subtraction and STEM skills to plan food, games, and more. For the first game, subtract 24 apples from 50 apples, leaving 26 apples for other activities! Practical examples like this, along with helpful mathematical diagrams and charts, show children that subtraction can not only be very useful, but can be easy and fun! Introduce young readers to division with this engaging title! Knowing how to use division makes planning a camping trip much easier! This book will excite readers by using practice problems, vibrant images, and helpful mathematical diagrams to improve their division and STEM skills. 
Meet a family of three who makes 12 smores, then divides 12 by three, giving each family member four smores! Meet five friends who have ten hot dogs, then divide ten by five, giving each friend two hot dogs! Division can help families and friends make things even on a camping trip and can help children in daily life. Practice division while searching through the items in an old attic! This charming title follows the story of four children whose grandparents are moving out of their old house. The attic has old photos, comic books, baseball cards, and paper dolls just waiting to be discovered, but everyone needs to get a fair turn! Divide four boxes to open amongst four children! This book challenges young readers to practice their division skills by dividing up all sorts of collections found in this attic. Not only will they improve their division and STEM skills, but they will learn how to best share things equally with other children. Use information about the Main Street animal shelter to practice graphing! This title introduces young readers to bar graphs, pictographs, and related early STEM concepts. Vivid, familiar images, clear tallies, and simple mathematical diagrams help children understand graphs and encourage them to create their own! Readers will enjoy making graphs about their favorite things! This charming title uses children's favorite things to make graphing and mathematical probability seem simple and fun! This book's vibrant images, practice problems, and mathematical diagrams help young readers understand graphing and early STEM themes. Encourage children to make their own bar graphs or pictographs about their favorite things! Practice nonstandard measurement at the community center! A rock climbing wall is the same height as eight children! A tennis racket is the same length as three ping pong paddles! This fun title uses vivid images, simple practice questions, and helpful mathematical diagrams to keep young readers engaged while helping them better understand nonstandard measurement and early STEM concepts.
The biology of the bull shark is still little known, but it shows extraordinary physiological adaptations that allow it to persist in both freshwater and saltwater. Bull sharks have been captured in places you would never imagine a shark to be found: in the foothills of the Peruvian Andes, 3,700 kilometres up the Amazon River, and in Lake Nicaragua, the largest lake in Central America. However, the bull shark may not be able to complete its entire life cycle in freshwater, and all sharks in freshwater require access to saltwater through rivers and estuaries (2) (3). It swims slowly and heavily, usually near the bottom, concealing the surprising agility and speed it employs when attacking prey (2), and deceiving one into believing that this may not be one of the most dangerous species of tropical shark, as it is frequently cited (2) (5) (6). Along with the great white and tiger shark, the bull shark is responsible for the most accidents involving people (2); a result of its tendency to take large prey and the proximity of its habitat to the activities of humans (2). The bull shark’s broad and varied diet includes bony fishes, other shark species (even occasionally young bull sharks), sea turtles, birds, dolphins, and terrestrial mammals (2). The bull shark is viviparous, giving birth to 1 to 13 young in each litter after a pregnancy of 10 to 11 months (2). The female gives birth in late spring and early summer in both hemispheres, in estuaries, river mouths, and very occasionally in freshwater lakes (2). Mating takes place at the same time of the year, but exactly where is unknown, as it has never been directly observed (4).
Understanding the Big Bang: A New Way to Make Particle Soup
Scientists know how to recreate the hot, dense plasma that existed in the early milliseconds of our universe: by accelerating heavy atomic nuclei nearly to the speed of light and smashing them into each other in big particle colliders. Now, scientists have shown that this early universe particle soup can be created using smaller nuclei than previously thought possible. Professor Stefan Bathe (Baruch College, The Graduate Center, CUNY), graduate students Daniel Richford, Zachary Rowan, and Zhiyan Wang, and former graduate student Jason Bryslawskyj are part of the collaboration of scientists called PHENIX. These experiments help physicists learn more about the early moments of our universe and better understand one of the four fundamental forces of nature. The results appear in Nature Physics. Crashing nuclei into each other breaks down the nuclei’s protons and neutrons into quark and gluon particles. These particles then exist, for a fraction of a second, in a fluid-like state called the quark-gluon plasma (QGP). Small-system collisions can teach scientists about QGP behavior in ways that heavier collisions cannot. This helps researchers learn how matter developed in the milliseconds after the big bang. Because QGP is governed by the strong interaction, collision experiments also help researchers better understand the theory behind that force. “QGP is one of the phenomena that the fundamental theory predicts,” Bathe said. “We are experimentally trying to see if the predictions of the standard interaction are correct; trying to learn something that theory alone can’t tell us.” The researchers analyzed flow patterns resulting from smashing gold nuclei into protons, deuterons, and helium-3 nuclei in a particle collider at Brookhaven National Laboratory, to confirm that the collisions produced QGP. In the past, scientists had only observed QGP in collisions of heavy particles such as gold with gold, so early results of small-system collisions came as a surprise. “Nobody could believe it,” Bathe said, “so we needed to collect a lot of evidence. This paper unambiguously shows that this is QGP.”
Evolutionists maintain that birds are descended from reptiles. There is no evidence for that claim, however. On the contrary, there is a great deal of evidence to show that such an evolution is impossible. For instance, the question of how reptiles, land-dwelling animals, began to fly remains unanswered, and this is a matter which has given rise to considerable speculation among evolutionists. There are two main theories. The first is that birds gradually evolved from land-dwelling ancestors which glided from tree to tree, in other words that they descended from the trees (the arboreal theory). The other maintains that they evolved from the land upwards (the cursorial theory). According to the latter theory, the fore-arms of land animals which ran after their prey and frequently flapped those fore-arms in the process gradually turned into wings, and those creatures became birds and took to the air. Both theories, however, rest on entirely speculative grounds. Both are devoid of any scientific evidence. Yet the existence of a land animal which somehow managed to fly is “assumed.” Professor John Ostrom, of the Yale University Geology Department, describes this approach by saying, “No fossil evidence exists of any pro-avis. It is a purely hypothetical pre-bird, but one that must have existed.” (John Ostrom, "Bird Flight: How Did It Begin?," American Scientist, January-February 1979, vol. 67, p. 47) The scenarios of bird evolution carried in the media rest not on any scientific finding but on the preconceptions of researchers who have adopted evolution as a dogma and maintain their devotion to the theory for philosophical reasons. The really interesting thing is that the scientific findings definitively refute these Darwinist claims. With their feature of “irreducible complexity,” the unique structures in birds refute evolution and verify intelligent design. Let us now examine this rather more closely.
The Irreducibly Complex Structure of the Avian Lung
Dinosaurs are part of the reptile family. When birds and the reptile family are examined they can be seen to possess very different physiologies. In the first place, although birds are warm-blooded, reptiles are cold-blooded. The metabolism of cold-blooded reptiles works very slowly. Birds, on the other hand, expend a great deal of energy in such a tiring activity as flying. Their metabolisms are much faster than those of reptiles. The provision of oxygen to the cells takes place very rapidly in birds. For that reason they have been equipped with a special respiratory system. Because the air always flows in the same direction in the lung, it loses no time in bringing oxygen to the organism. In reptiles, on the other hand, air is exhaled through the same channel as it is inhaled. This unidirectional air canal is a structure peculiar to the avian lung. It is not possible for such a complex structure to have come into being in stages. That is because, in order for an animal to survive, the unidirectional air channel system and the lungs must have existed in flawless form at every moment. The molecular biologist Michael Denton, known for his criticism of Darwinism, says on this subject: “Just how such a different respiratory system could have evolved gradually from the standard vertebrate design without some sort of direction is, again, very difficult to envisage, especially bearing in mind that the maintenance of respiratory function is absolutely vital to the life of the organism.” (Michael J. Denton, Nature's Destiny, Free Press, New York, 1998, p.
361) (For greater detail on the avian lung see: The Unique Structure of Avian Lung Refutes Evolution) The Irreducible Complex Structure of Bird Wing Accepting the imaginary evolution of flight presupposes that at certain stages the wings were “primitive” and therefore insufficient for the task. However, an insufficient wing is insufficient for even the smallest amount of flight. In order for flight to take place, a living thing’s wings need to be flawless and fully formed. This is admitted by the evolutionist biologist Engin Korur: The common feature of eyes and wings is that they can only function when fully developed. To put it another way, one cannot see with an incomplete eye, nor fly with a half-wing. The question of how these organs came into being has remained one of the unsolved mysteries of nature. (Engin Korur, "Gözlerin ve Kanatlarin Sirri" ("The Secret of Eyes and Wings", Bilim ve Teknik (Science and Technology Magazine), October 1984, No. 203, p. 25) Stephen J. Gould, a palaeontologist who has shown how the fossil record refutes the Darwinist model of gradual evolution, says that it is impossible for the bird wing to have developed gradually: But how do you get from nothing to such an elaborate something, if evolution must proceed through a long sequence of intermediate stages, each favored by natural selection? You can"t fly with 2% of a wing… [I]n other words, can natural selection explain these incipient stages of structures that can only be used (as we now observe them) in much more elaborated form?( S. J. Gould, (1985), "Not Necessarily a Wing," Natural History, October, pp. 12, 13) Korur and Gould are quite right about the dilemmas facing the gradual development model of the bird wing. However, there is another very important point which needs to be emphasised here. According to the theory of evolution, a feature needs to be functional if it is to be selected. Most importantly, it is a prerequisite that the gradual development of random changes should constitute a “functional whole” in order for the organism to survive. In one article published in American Zoology magazine, the professor of biology and ornithologist Walter J. Bock wrote: Organisms at every stage in the evolutionary sequence must be functioning wholes interacting successfully with selective demands arising from the particular environments of the organisms at each stage in the evolutionary sequence. (Walter j. Bock, “Explanatory History of the Origin of Feathers”, American Zoology, 40: p. 482, 2000) (our emphasis) Here there appears a major discrepancy with the claims of the evolution of the wing. That is because mutations which might occur in the fore arm will not only fail to provide the creature with a functioning wing, but will also deprive it of its fore arms. That means that this creature will possess a body which is disadvantaged (in other words handicapped) in comparison to other members of its species. Of course a living thing whose fore arms functioned neither as proper feet nor as proper wings would be unable to perform such vital activities as defending itself from predators, hunting or mating as it had before, and would thus be eliminated on account of that disadvantage. (For further information on the bird wing see: The Irreducible Complexity of Wings Refutes Evolution.) 
The Natural History of Birds and Archaeopteryx In the same way that the anatomy of birds reveals “intelligent design,” the fossil record shows that they “emerged suddenly.” The oldest known bird fossil is the 150-million-year-old Archaeopteryx. This creature was a flying bird, with flawless flight muscles and feathers suitable for flight. No fossil of a half-bird half-reptile has ever been found. We may therefore say that Archaeopteryx was the first bird, and that with the same “flight” structure as modern birds it constitutes evidence against the theory of evolution. Evolutionists have been engaging in speculation about Archaeopteryx ever since the 19th century. The presence of teeth in its mouth and nails resembling claws on its wings, and its long tail, led to these aspects of the fossil being compared to reptiles. A great many evolutionists described Archaeopteryx as a “primitive bird” and even claimed that the animal is closer to reptiles than to birds. However, it gradually emerged that this myth was very superficial, that the animal was not a “primitive bird” at all, that on the contrary its skeleton and feather structure were ideally suited to flight, and that those features compared to reptiles are found in some other birds which lived in the past and which are even still living today. The evolutionist speculation about Archaeopteryx has today to a large extent fallen silent. Professor Alan Feduccia, from the Biology Department of North Carolina University and one of the world’s most eminent authorities on ornithology, has stated that, "Most recent workers who have studied various anatomical features of Archaeopteryx have found the creature to be much more birdlike than previously imagined," The “semi-reptile creature” portrait drawn of Archaeopteryx has been shown to be false. Again according to Feduccia, until recently, "the resemblance of Archaeopteryx to theropod dinosaurs has been grossly overestimated." (Alan Feduccia, The Origin and Evolution of Birds, Yale University Press, 1999, p. 81) In short, bird evolution is not a consistent thesis based on biological or palaeontological evidence, but is a totally illusory and unrealistic claim stemming from Darwinist preconceptions. The subject of bird evolution, which some experts delight in portraying as if it were a scientific fact, is nothing more than a fairy tale expounded for philosophical reasons. The truth revealed by science is that the flawless design in birds is an entirely intelligent one, in other words that birds were created by God.
Bulimia nervosa (bulimia) can affect both sexes and span all ages, socioeconomic, ethnic, and racial groups. Most commonly, however, it occurs in adolescent girls and young women. Bulimia is an eating disorder in which uncontrollable eating of large amounts of food (binges), is followed by intentional vomiting and/or misuse of laxatives, enemas, fasting, or even excessive exercise in order to control weight (purges). The sufferer will often hide this behaviour and to all outward impressions be perfectly normal and with a normal weight. Although these young people look fine, they also tend to have low self-esteem and a distorted idea of their body image. The process of bingeing and purging gives them a feeling of control, removing the anxiety for a while. They sometimes describe the feeling as being numb, which is better than the pain they feel otherwise. These feelings then make way for those of shame and guilt often followed by stress and depression. There isn’t one single cause for why a child or adolescent will become bulimic. The combination of their genes, backgrounds and what is happening for them in their lives, along with social attitudes and family influences, can all play their part. Sometimes an event might be a trigger for the onset of the disease. Symptoms of bulimia may include but are not limited to: - Obsession with food, weight, and body shape - Rapid uncontrolled binge eating – secretive - Self-induced vomiting – scarring on the back of the fingers from this process - Excessive exercise - Fasting and peculiar eating habits or rituals - Inappropriate use of laxatives or diuretics - Irregular or absent menstrual cycle - Unwarranted unhappiness with oneself and bodily appearance - Overachieving and impulsive behaviours In an increasingly image-focussed world, the rise of eating disorders is a distressing reality. An eating disorder is a mental illness that can affect people of any age, background or culture. Bulimia nervosa is when a person binge eats and then purges by either vomiting or exercising excessively. Bulimia mainly affects women, with adolescents and young women most commonly affected. A person suffering from bulimia often experiences feelings of shame, disgust and guilt. They will often hide their eating, purging, dieting, and exercising behaviours from their friends and family. People with bulimia regularly have a normal body weight making hiding their condition easier. Bulimia can cause serious health issues such as stomach ulcers, acid reflux and osteoporosis. The cycle of bulimia can lead to obsessions with food, diet, exercise and body image. As with other eating disorders, bulimia is sometimes found in connection with low self-esteem, negative body image, anxiety and depression. If you suffer from bulimia, it is vital to seek professional help. A psychologist can support you through the recovery process to relearn your approach to food and understand the reasons for your behaviour. With the right help, you can recover.
Many birds follow a seasonal reproductive cycle — they mate, nest, and raise chicks during specific seasons in a year. While birds that reproduce in spring and summer, like the Japanese quail, activate their sexual organs in response to long days (photoperiods), birds which reproduce in autumn or winter become sexually active in response to long nights (scotoperiods). Seasonal breeders activate their sexual organs when environmental signals herald the approach of plentiful resources ideal for birthing and raising young ones. Between these bouts of amour and family time, seasonal breeders’ reproductive organs regress and remain inactive. To reactivate sexual organs during breeding season, a neurological and hormonal signaling cascade between the hypothalamus, pituitary gland, and sexual organs comes into play. In birds whose reproduction is sensitive to long photoperiods, the mediobasal hypothalamus (MBH) in the brain measures photoperiod length. In the MBH, light activates the gene expression of Dio2 (type 2 iodothyronine deiodinase), an enzyme that converts thyroxine (T4) to triiodothyronine (T3). This MBH-produced T3 allows the hypothalamus to release gonadotrophin-releasing hormone (GnRH), which stimulates the pituitary gland to produce gonadotropins — hormones that eventually activate the sexual organs. This system is called the ‘deiodinase-mediated pathway’. But how does a non-photosensitive bird like the spotted munia control its seasonal reproductive cycle? In an effort to answer this question, Vinod Kumar’s group at the University of Delhi and Sangeeta Rani’s group from the University of Lucknow teamed up to investigate whether the hypothalamic deiodinase pathway was a general mediator of seasonal reproductive responses, or whether it operated specifically in birds responsive to long photoperiods. “We found a perfect model system in spotted munia, which, although an autumn breeding bird in nature, has been known to show gonadal stimulation under ultra-short days,” says Ila Mishra, first author of the publication that reports the team’s results. When the researchers exposed male and female munias to ultra-short days (3 hours of light and 21 hours of dark) for four weeks, they found that the birds’ reproductive cycle was stimulated — sexual organs showed clear signs of revival, with the female birds’ ovaries and male birds’ testes growing larger. Munias exposed to ultra-long days (21 hours of light and 3 hours of dark), however, showed no activation of neurological or hormonal pathways connected to reproduction, and their gonads remained shrunken and inactive. Molecular tests revealed that GnRH levels were elevated in the hypothalamus of birds exposed to ultra-short days but not in those of birds exposed to ultra-long days. This indicated that sexual organ activation in the spotted munia followed the typical neurological and hormonal pathways known to date in photosensitive birds. What came as a surprise, however, was that hypothalamic levels of Dio2 in these birds did not increase. Unlike in birds which respond to long photoperiods, the hypothalamic deiodinase pathway did not seem to be involved in regulating seasonal reproduction in spotted munias. The team’s work demonstrates that the hypothalamic deiodinase pathway can no longer be considered a universal mediator of reproductive seasonality, as was assumed until now. This raises a plethora of new questions on how scotosensitive birds use day/night lengths to time their reproductive seasons. 
“This systematic study points towards an unknown mechanism in the avian neuroendocrine gonadal axis, and that there is much that is yet to be explored and unknown so far in this field,” says C. M. Chaturvedi, an avian neurobiologist from Banaras Hindu University, who is unconnected to this study.
Writing is taught every day in the morning. Children are given the skills to write confidently and expressively for a range of different purposes. We follow the principles of 'Talk for Writing'. This means that children spend time learning to retell a text. They are immersed in this text and spend time looking at its structure, the vocabulary and the plot. The children then use this text to support them in writing their own version. Children are also given lots of opportunities to invent their own stories and non-fiction texts. Writing is also practised as part of other lessons. In each year group children explore a range of genres and text types across the year, including fiction, poetry, information texts, dialogue and plays, biography and so on. Teachers give guidance and instruction to children to improve writing skills. This includes grammatical accuracy and correct use of punctuation. We involve pupils in their own learning by sharing assessments with them, agreeing individual targets and sharing success criteria. Spelling patterns are taught weekly, and the children practise and improve their handwriting weekly. We follow the ‘Nelson’ Handwriting Scheme: more information about this can be found here.
Physicians at Weill Cornell Medical College (WCMC) and biomedical engineers at Cornell University have succeeded in building living facsimiles of human ears. They believe that their bioengineering method will finally achieve the goal of providing normal-appearing new ears to children born with a congenital ear deformity. The researchers used three-dimensional (3D) printing and injectable gels made of living cells. Over a three-month period, the ears steadily grew cartilage to replace the collagen used in molding them. The study’s co-lead author is Dr. Jason Spector (director of the Laboratory for Bioregenerative Medicine and Surgery, LBMS; associate professor of plastic surgery at WCMC; and adjunct associate professor in the biomedical engineering department at Cornell University). He says, “A bioengineered ear replacement like this would also help individuals who have lost part or all of their external ear in an accident or from cancer.” Current replacement ears have a Styrofoam-like consistency; sometimes, surgeons build ears from ribs harvested from young patients. “This surgical option is challenging and painful for children, and the ears rarely look totally natural or perform well,” says Spector, who is also a plastic and reconstructive surgeon at New York-Presbyterian Hospital/Weill Cornell Medical Center. “Other attempts to ‘grow’ ears have failed in the long term.” In addition to appearing and functioning naturally, these ears can be made quickly — taking a week at most.

Scanning, Printing, Molding
The study’s other lead author is Dr. Lawrence J. Bonassar (associate professor and associate chair of the biomedical engineering department at Cornell University). The deformity he and Spector seek to remedy is microtia, a congenital deformity in which a child’s external ear — typically only one — is not fully developed. Causes for this disorder are not entirely understood, but research has found that it can occur in children whose mothers took an acne medication during pregnancy. The incidence varies from one to four per 10,000 births each year. Many affected children have an intact inner ear but experience hearing loss due to the missing external ear structure, which normally acts to capture and conduct sound. Spector and Bonassar have been collaborating on bioengineered human replacement parts since 2007, and Bonassar also works with other Weill Cornell physicians. (For example, he and neurological surgeon Dr. Roger Härtl are currently testing bioengineered disc replacements using techniques similar to those described here.) The researchers are developing replacements for human structures primarily made of cartilage: e.g., joints, tracheas, and noses. Cartilage needs no vascularization to survive. To make the ears, Bonassar and colleagues first combined laser scans and panoramic photos (in just 30 seconds) of ears from twin girls to make a digitized 3D image. Then they converted that into a digitized “solid” ear and used a 3D printer to assemble a mold of it. They injected animal-derived collagen (frequently used in cosmetic/plastic surgery) into the resulting mold, then added ∼250 million human cartilage cells. As the main mammalian structural protein, collagen serves as a scaffold on which cartilage can grow. The high-density collagen gel developed by Cornell researchers resembles the consistency of flexible gelatin when the mold is removed. “The process is fast,” says Bonassar. “It takes half a day to design the mold, a day or so to print it, and 30 minutes to inject the gel. 
We can remove the ear 15 minutes later. We trim the ear and then let it culture for several days in a nourishing cell culture medium before it is implanted.” During a three-month observation period, cartilage grew to replace the collagen scaffold in the ears. “Eventually the bioengineered ear contains only auricular cartilage,” says Spector, “just like a real ear.” Previous bioengineered ears have been unable to maintain their shape and dimensions over time, and their cells did not survive.

Next Steps
These researchers are looking at ways to expand populations of human cartilage cells in vitro so that those cells could be used in ear molds. Spector says the best time to give a bioengineered ear to a child would be at five or six years of age, when most ears are 80% of their adult size. “We don’t know yet if the bioengineered ears would continue to grow to their full size, but I suspect that they will,” he says. “Surgery to attach a new ear would be straightforward: The malformed ear would be removed and the bioengineered ear inserted under a flap of skin.” Spector says that if all future safety and efficacy tests work out, the first such procedure might be possible in as little as three years. “These bioengineered ears are highly promising because they precisely mirror the native architecture of a human ear,” he says. “They should restore hearing and a normal appearance to children and others in need. This advance represents a very exciting collaboration between physicians and basic scientists. It is a demonstration of what we hope to do together to improve the lives of patients with ear deformity, missing ears, and beyond.” Study coauthors include Dr. Alyssa J. Reiffel, Dr. Karina A. Hernandez, and Justin L. Perez from WCMC’s Laboratory for Bioregenerative Medicine and Surgery; and Concepcion Kafka, Samantha Popa, Sherry Zhou, Satadru Pramanik, Bryan N. Brown, and Won Seuk Ryu from Cornell University’s Department of Biomedical Engineering. Lauren Woods is a senior media associate at Weill Cornell Medical College.
Allowing our students to take stands on issues that matter to them engages the classroom in a way that fosters great critical thinking. A five-step framework for developing critical thinking skills published in the International Journal of Teaching and Learning in Higher Education can be adapted to the middle school and high school levels.

Barometer—Taking a Stand on Controversial Issues. Understanding different viewpoints is a great way to delve deeply into a topic. Students take on profiles that might include gender, age, family status (married, single, how many children), and so on, and the group is given a historical event or similar topic. Next, they come up with words that describe their reactions—trapped, free, angry, joyful, etc. Allow at least 20 minutes for a conversation.

Jigsaw—Developing Community and Disseminating Knowledge. A classic tool to guide students in relevant and meaningful discussion, and to build community. Then a panel of experts is assembled to get the larger picture. In groups, students can also create a dramatic script based on the ideas within a given text, without scripting it word for word.

These are lessons that will drive and engage students in meaningful PBL and inquiry learning. Allowing students room to think deeply and discuss openly during critical thinking activities is the key to them taking true responsibility for the learning. Through these kinds of activities we foster real thinkers and life-long learners. Otherwise the instructor just encourages each student to air their stock supply of prejudices, fixed ideas, and off-the-cuff impressions.

Classroom discussions of historical figures can cause students to question their presumptions. Can America's founding fathers truly be thought of as great if they owned slaves? In math classes, students can also be asked to identify patterns in numbers as a means of developing analysis. High school students can develop critical thinking skills via study of textbooks in conjunction with classroom activities. Reading strategies include paraphrasing information, evaluating the author's claim and establishing a position of their own. Students can record their observations and analyze what occurred in journals. Classroom debates give students the opportunity to enumerate the pros and cons of an issue. Forming students into teams encourages them to cultivate their deliberative abilities. When students simulate historical events, they delve into the "why" of history and predict how outcomes might have changed if different decisions were made.

Basics of Critical Thinking Skills
Moving beyond the recall of facts requires carefully constructed lessons focused on the gradual development of independent thinking.

Developing Whole Class Dialogue
Traditional classrooms rely heavily on lecture, or the transfer of information directly from teacher to student.

Activities for Middle Schoolers
You can get middle schoolers to develop their critical thinking skills by inviting discussion on everyday situations.

Based in Los Angeles, Jana Sosnowski holds a Master of Science in educational psychology and instructional technology. She has spent the past 11 years in education, primarily in the secondary classroom teaching English and journalism. Sosnowski has also worked as a curriculum writer for a math remediation program.
Children’s involvement in science is increased when they have an opportunity to make decisions about science-based issues that have consequences for their lives. It’s bringing science into the 21st century using the Hub, isn’t it? This research focuses on students making evidence-based decisions based on their science knowledge and how some acted on this knowledge. The context was volcanoes, using resources sourced from the Science Learning Hub. The Hub videos help provide teacher and students with knowledge and skills. Seven teachers from two primary schools taught a unit on volcanoes that also focused on the potential social risks. Data was collected through interviews. Unit plans and samples of student work were collected. Data analysis was guided by the research question: What types of science evidence do young children provide when discussing potential social risks of volcanoes? The research demonstrated that there are different kinds of scientific evidence that children can understand and use to engage with the social aspects of a science-related issue. Evidence that sets the issue in a place and time Teachers asked the children to engage in the Volcano hunt and Identifying volcanic rocks activities. These activities enabled young children to establish and conclude that they were living in a volcanic area. The use of observation evidence to make science-based judgements With teacher guidance on how volcanologists locate, describe and categorise rocks using observational data, the children were able to present data and findings about their rock specimens that reflected how volcanologists work. The Geology in the field video clip reminded them of the scientific skills to be used and the Lost – a hot rock activity required that they present data in a scientific manner. Using evidence to make decisions and take action At a more sophisticated level, children can use their science understandings to make decisions and take action. In one instance, children used their knowledge about lava flow to identify the 5 km radius where evacuation would occur to make emergency plans for their school. Some children planned/made emergency kits for their home. The children in this study developed their ability to use the language and ideas of science to discuss the potential risk and consequences of a volcanic eruption. Some used scientific data to explain and justify their decisions in planning a disaster kit for a volcanic episode. This research has demonstrated that, when children are made aware of the science and social dimensions of an issue, they can use scientific evidence in their decision-making and action. Hodson, D. (2002). Some thoughts on scientific literacy: Motives, meanings and curriculum implication. Asia-Pacific Forum on Science Learning and Teaching, 3 (1). Sadler, T. (Ed.). (2011). Socio-scientific issues in the classroom. Dordrecht: Springer.
It is fascinating to watch the growth of an apple from bud to mature fruit; it is also useful to gardeners and growers who have to contend with insects and disease. Being able to identify the growth stage of an apple is often the key to timing thinning and applying fertilizers and insecticides. Dormant apple buds begin to swell in the early spring. The buds show a silver, fuzzy tissue then a green tip develops. This is the beginning of leaves; the leaves start growing, and as they fold back, they are called "mouse ears." After a few days, closed, hairy flower buds become visible. The Flower Grows As the flower buds grow, five green hairy sepals surround red petals. The flower stalks grow longer as the flower buds get bigger. White flowers tinged with pink burst open. The first flower in a cluster to open is known as the "king bloom." It often turns out to be the largest apple yielded from that cluster of blossoms. White stalks flare from the center; these are the stamens, and they are topped with tiny yellow anthers that bear pollen. To lure honey bees, the blossoms produce a sweet nectar at the base of the petals. Bees move from blossom to blossom collecting pollen from the anthers on their hairy bodies; as they visit blossoms on other trees, the pollen rubs off on those blossoms. When the blossoms have shed their pollen, the petals begin to wilt, and the anthers begin to shrivel. The female stigma becomes visible; this is where visiting bees deposited pollen. The stigma makes the pollen available to the ovary so that it can begin growing into an apple. From Ovary to Apple The petals begin falling. The green sepals are still attached; as the ovary grows, the flared sepals turn upright, and the stamens shrivel and dry up. Below the sepals, fuzzy apples begin to grow rapidly. In about June in most areas, smaller apples drop from a cluster; this is called the "June" drop. Spray thinners need to be used before the June drop. After this, apples must be thinned by hand. The Apples Mature Several weeks later, soft hairs disappear from the developing apple. The expanding apples begin storing sugar. They get larger and turn green then red. Their weight makes them hang from their stems.
Europe Heads for Mars

[Image: Beagle 2 lander leaving Mars Express. Credit: European Space Agency]

The H.M.S. Beagle set sail from Britain late in the stormy December of 1831, bearing the young naturalist Charles Darwin on a quest to understand the natural history of the farthest lands humans could reach. One hundred and seventy-two years later, the UK’s Open and Leicester Universities, together with Astrium, an aerospace industry partner, aim to reach a bit farther: to Mars. Beagle 2, a compact, lightweight lander carried on the European Space Agency’s (ESA) Mars Express, will search for signs of life on the red planet. The H.M.S. Beagle’s captain, Robert FitzRoy, conceived the idea of taking a naturalist only in the summer of 1831, making Darwin the "scientific package" aboard the original Beagle something of an afterthought. Beagle 2 was also something of an afterthought, says lead scientist Colin Pillinger. "Mars Express was originally going to be a rescue mission. It was going to relaunch instruments that had been lost on the Russian Mars 96 mission," Pillinger explains. But the discoveries related to signs of life in Martian meteorites and the 1996 scientific revelation that ALH84001 might contain a fossil sparked a new idea, Pillinger says. "I suggested to ESA that if they were going to have a mission to Mars that they really needed to have a lander and address some of these new issues that had arisen 20 years after Viking had said ‘We don’t think there’s any evidence of life on Mars.’" Other researchers in France and Finland had conceived a "net lander" strategy: many small landers making measurements on Mars. But with no plans for one lander, let alone many, Pillinger says, Mars Express had neither space nor a mass allocation for numerous net landers, no matter how small. "So it came down to a competition as to who could propose a [single] lander for a 60 kilogram limit. And Beagle is the only one that was a viable possibility," he says. Because of the severe mass restrictions, Beagle 2 has no propulsion system. Instead it relies on parachutes and a slowly deflating balloon in a controlled crash landing. And it cannot move once it lands. Instead it relies on a robotic arm studded with scientific instruments, like digits on the end of a living limb. Scientists call the instrument package the paw.

[Image: Simulation of Beagle 2, with solar panels deployed, on the Martian surface. Credit: European Space Agency]

Beagle 2, once on the surface, deploys four solar panels and the robotic arm. The paw includes a subsurface sampler called a mole, which taps itself into the ground (preferably under a large boulder, Pillinger says) a millimeter at a time. To sample unweathered rock, the paw also includes a grinder (conceived by a dentist), a microscope, spectrometers and other instruments. But the part of the package that excites Everett K. Gibson, a geochemist, adjunct scientist on the Beagle 2 project and a senior scientist at NASA’s Johnson Space Center in Houston, is the gas-analysis instrument. "The real beauty of this is to be able to sample on the surface, beneath the surface, make fresh surfaces of the rocks to get samples and then to take those and send them into the gas-analysis package, where we can begin to get a handle on the nature of the biogenic elements that might be present." The gas-analysis experiment is a miniaturized version of the lab equipment developed by Pillinger to analyze Martian meteorite samples on Earth. 
It slowly heats a sample in the presence of oxygen, then analyzes the gases driven off. At each step in the heating cycle, different chemicals burn. During the process, the instrument can detect carbonates produced by water percolating through cracks in rocks, as well as organic matter: the chemical signs of life. Because the instrument analyzes gases, it can also analyze the Martian atmosphere. Gibson explains that the Beagle instrumentation eclipses that of Viking: "I’m really pleased that we’ll have information in early 04 about the nature of the light elements, in particular carbon and the biogenic elements, from an in situ package that is well, well advanced beyond anything that Viking could do in 1976."

[Image: Mars Express will itself carry a small lander, the Beagle 2. Credit: European Space Agency]

Unlike Viking, Beagle 2 can garner isotopic information about individual carbon atoms. Carbon atoms come in two stable forms, called isotopes: carbon-12 and carbon-13. The only difference between the two is the number of neutrons each contains. Carbon-12 and carbon-13 are mixed together in atmospheres. As biological processes build organic molecules, they use more carbon-12 than -13. This distinctive carbon-12 to carbon-13 ratio becomes a detectable signature both of living organisms and their leavings. It is this signature that Beagle 2 will look for. But if the same instrumentation has found evidence of life in Martian meteorites, why take it to Mars? Because, Pillinger argues, however enticing, the evidence from meteorites is not complete. "You can prove that the meteorites come from Mars and that the carbonates were formed on Mars and that Martian water trickled through the rocks. And you see organic matter there. But what you cannot prove is that the organic matter is indigenous. You can’t prove it’s Martian. We just have one more step to make in this puzzle, we think, of going back to Mars and seeing whether the organic matter we’ve seen in the meteorites is in fact in Martian rocks and whether it meets the same criteria that we’ve recognized in the meteorites." Plans for the various parts of Beagle 2 are taking shape. But the very genius of the Beagle 2 design (complete integration, with no separate boxes containing separate experiments) poses a significant problem, that of cleaning and sterilizing the whole spacecraft to remove all microbial contamination. "This is a tricky issue for Beagle," Pillinger says. "You have to sterilize everything which is part of the spacecraft and you also have to clean parts of the spacecraft as well so that you don’t take dead bodies any more than you take live ones. The rules on planetary protection are quite extreme in that you have to meet an internationally agreed protocol. This is something that is exercising us quite a lot at the moment." The team is currently building a facility for cleaning and decontaminating. Gibson, meanwhile, is already thinking past Beagle 2. "I would love to see sons of Beagle scattered throughout the whole surface of Mars," he says. "Any spacecraft that’s going to Mars, that’s going into orbit about the planet, should have a probe of the Beagle type." Because sons of Beagle would be cheap, mission planners could risk sending some into rugged terrain, which also might have a higher probability of having harbored life. 
"If we can send a multitude of these vehicles onto the surface in some of these high-risk areas, we have a good chance of getting some really interesting data on the nature of potential living systems that might have been on the planet in the past," Gibson says. Pillinger, who trained as a chemist, has found astrobiology a great stimulus to learn many other fields. He has dabbled in biology, physics, astronomy and earth sciences and added a bit of plain old invention in his search for the evidence of life on other planets, he says. "It takes you back to the days of Victorian scientists, when you were allowed to be interested in what fascinated you."
Great Vowel Shift

The Great Vowel Shift was a massive sound change affecting the long vowels of English during the fifteenth to eighteenth centuries. The long vowels shifted upwards in pronunciation: each came to be pronounced with the tongue higher in the mouth, while the highest vowels became diphthongs. For example, the word "child" was once pronounced with a long "ee" vowel (roughly "cheeld") before gradually shifting to the modern pronunciation. The word "she" was originally pronounced closer to "shay."
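To make the direction of the shift concrete, here is a small summary added to this note (it is not from the original source) of the commonly cited correspondences between Middle English long vowels and their Modern English outcomes. The example words are standard textbook cases, and the vowel symbols are rough ASCII stand-ins for the usual phonetic notation.

```python
# Illustrative summary of the Great Vowel Shift (added; simplified, exceptions exist).
# Each entry: example word -> (approximate Middle English vowel, Modern English vowel).
great_vowel_shift = {
    "time":  ("i:", "ai"),  # once pronounced roughly "teem"
    "see":   ("e:", "i:"),  # once pronounced roughly "say"
    "name":  ("a:", "ei"),  # once pronounced roughly "nahm"
    "boat":  ("O:", "ou"),  # open "aw"-like vowel raised to "oh"
    "moon":  ("o:", "u:"),  # once pronounced roughly "moan"
    "house": ("u:", "au"),  # once pronounced roughly "hoos"
}

for word, (middle_english, modern_english) in great_vowel_shift.items():
    print(f"{word}: /{middle_english}/ -> /{modern_english}/")
```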
The Sivalik hills are a mountain range of the outer Himalayas, also known as Manak Parbat in ancient times. Shivalik literally means ‘tresses of Shiva’. This range is about 2,400 km (1,500 mi) long, enclosing an area that starts almost from the Indus and ends close to the Brahmaputra, with a gap of about 90 kilometres (56 mi) between the Teesta and Raidak rivers in Assam. The width of the Shivalik hills varies from 10 to 50 km (6.2 to 31.1 mi), and their average elevation is 1,500 to 2,000 m (4,900 to 6,600 ft). Other spelling variations used include Shivalik and Siwalik, originating from the Hindi and Nepali word shiwālik parvat. Other names include Churia hills, Chure hills, and Margalla hills. The Sivalik hills are the southernmost and geologically youngest east-west mountain chain of the Himalayas. The Siwaliks have many sub-ranges. They extend west from Arunachal Pradesh through Bhutan to West Bengal, and further westward through Nepal and Uttarakhand, continuing into Himachal Pradesh and Kashmir. The hills are cut through at wide intervals by numerous large rivers flowing south from the Himalayas. Smaller rivers without sources in the high Himalayas are more likely to detour around sub-ranges. Southern slopes have networks of small rills and channels, giving rise to ephemeral streams during the monsoon and into the post-monsoon season until groundwater supplies are depleted. The Sivalik hills are chiefly composed of sandstone and conglomerate rock formations, which are the solidified detritus of the great range in their rear, but often poorly consolidated. The remnant magnetization of siltstones and sandstones suggests a depositional age of 16 to 5.2 million years, with the Karnali River exposing the oldest part of the Siwalik Group in Nepal. They are bounded on the south by a fault system called the Main Frontal Thrust, with steeper slopes on that side. Below this, the coarse alluvial Bhabhar zone makes the transition to the nearly level plains. Rainfall, especially during the summer monsoon, percolates into the bhabar, then is forced to the surface by finer alluvial layers below it in a zone of springs and marshes along the northern edge of the Terai or plains. This wet zone was heavily malaria-infested before DDT was used to suppress mosquitoes. It was left forested by official decree by Nepal's Rana rulers as a defensive perimeter called Char Kose Jhadi (four kos forest, one kos equalling about three km or two miles). Upslope, the permeable geology, together with temperatures routinely exceeding 40° Celsius throughout April and May, supports only a low, sparse, drought-tolerant scrub forest. The Siwalik Hills are also among the richest fossil sites for large animals anywhere in Asia. The hills have revealed that all kinds of animals lived there, including early ancestors of the sloth bear, Sivatherium (an ancient giraffe), and Colossochelys atlas (a giant tortoise), among other creatures. Low population densities in the Siwalik and along the steep southern slopes of the Mahabharat Range, plus virulent malaria in the damp forests on their fringes, create a cultural, linguistic and political buffer zone between dense populations in the plains to the south and the "hills" beyond the Mahabharat escarpment, isolating the two populations from each other and enabling different evolutionary paths with respect to language, race and culture. People of the Lepcha tribe inhabit the Sikkim and Darjeeling areas.
Absorption chillers cool water using energy provided by a heat source. They differ from conventional (vapor compression) refrigeration systems in two ways. The absorption process is thermochemical in nature, as opposed to mechanical. Also, absorption chillers circulate water as the refrigerant instead of chlorofluorocarbons or hydrochlorofluorocarbons (CFCs or HCFCs, also known as Freon). The standard absorption chiller system uses water as a refrigerant and lithium bromide as an absorbent in its cycle. The lithium bromide has a high affinity for water. The process takes place in a vacuum, allowing the refrigerant (water) to boil at a lower temperature and pressure than it normally would, helping to transfer heat from one place to another. Small residential-sized units use ammonia as the refrigerant and water as the absorbent. In addition to being direct fired by natural gas, absorption chillers can run off hot water, steam, or waste heat, making them an integral part of cogeneration systems or anywhere that waste heat in any form is available. Absorption chillers are generally used where noise and vibration levels are an issue, particularly in hospitals, schools, and office buildings.

Equipment Options
Absorption chillers can be fired directly or indirectly, and can be single-effect or double-effect. Indirect-fired chillers use heat from another source, while direct-fired chillers use a natural gas burner to power the cycle. Double-effect chillers recycle some of the waste heat produced during the cycle, and thus are more efficient per unit of input heat; this efficiency comes at the cost of requiring a hotter input such as steam or natural gas. Equipment sizes range from 4.5 tons of cooling up to several hundred tons of cooling.
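As a rough illustration of the sizing arithmetic implied by the tonnage and efficiency figures above, the sketch below converts a cooling load in refrigeration tons to kilowatts and estimates the heat input needed by single-effect and double-effect machines. The coefficient-of-performance values are typical ballpark assumptions for illustration only; they are not taken from this document or from any specific manufacturer.

```python
# Rough absorption-chiller sizing sketch (illustrative assumptions only).
KW_PER_TON = 3.517  # 1 refrigeration ton = 12,000 BTU/h of cooling, about 3.517 kW

# Typical ballpark coefficients of performance (cooling out / heat in); assumed values.
COP_SINGLE_EFFECT = 0.7
COP_DOUBLE_EFFECT = 1.2

def heat_input_kw(cooling_tons, cop):
    """Heat input (kW) required to deliver the given cooling load at the given COP."""
    cooling_kw = cooling_tons * KW_PER_TON
    return cooling_kw / cop

load_tons = 100  # hypothetical office-building cooling load
print(f"Cooling load: {load_tons * KW_PER_TON:.0f} kW")
print(f"Single-effect heat input: {heat_input_kw(load_tons, COP_SINGLE_EFFECT):.0f} kW")
print(f"Double-effect heat input: {heat_input_kw(load_tons, COP_DOUBLE_EFFECT):.0f} kW")
```

The comparison makes the double-effect advantage concrete: for the same assumed 100-ton load, the double-effect machine needs roughly 40 percent less input heat, at the price of needing a hotter heat source.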
Rickets is defined as a condition associated with bone-deformity due to inadequate mineralization in growing bones. Albeit some cases are caused by renal disease, use of medication or specific hereditary syndromes, nutritional insufficiency is the most common cause of rickets, particularly in the developing world. Normal bone growth is highly dependent on adequate serum levels of vitamin D, thus insufficiency or deficiency of this vitamin is linked to the development of musculoskeletal symptoms and deformity in children – including rickets. Calcium deficiency is also thought to be the primary factor contributing to rickets in some regions of the world. A similar condition in adults is known as osteomalacia. This disease is unjustly neglected when compared to other metabolic bone diseases and appears not to be suspected or diagnosed promptly in susceptible patients, as modern physicians are not sufficiently aware of this condition. Rickets was initially reported in the mid-1600s, when children who lived in polluted industrialized cities of northern Europe developed a severe bone-forming disease characterized by deformities and growth retardation. Glisson and his colleagues described typical findings of bone deformity with curving of the legs. In the 19th century, Sniadecki was first to recognize the significance of sun exposure for the prevention and cure of rickets. This observation was extended by Palm who promoted systemic use of sun baths. In the early 1900s, vitamin D was found to be the essential ingredient of cod-liver oil, which was found to be effective in treating this disease. With the introduction of the supplementation of vitamin D, rickets became a rare diagnosis in the industrialized nations during the 20th century. Still, at the end of the last century nutritional rickets re-emerged as an important problem in North America, and was also prevalent in economically disadvantaged parts of the world where vitamin D deficiency was not commonly found. Rickets represents an important health issue not only in the developing countries, but also in the developed world. The prominent contributing factors include limited sunlight exposure, increased skin pigmentation, geographical location and decreased dietary intake. The estimation of the Centers for Disease Control and Prevention (CDC) is that 5 of every million children between 6 months and 5 years of age have rickets, with peak prevalence of vitamin D-deficient rickets between 6 and 18 months of age. The majority of affected children are black or breastfed. In North America, rickets is most commonly found in children with relatively more pigmented skin, who are exclusively breastfed. In Europe and Australia, rickets is typically identified in immigrant populations from the Indian subcontinent and Middle East. In the United States nutritional rickets was eradicated in the 1930s after the discovery that vitamin D possessed antirachitic properties. However, the disease has made an unfortunate comeback, primarily due to a lack of appreciation that human milk contains very little vitamin D to satisfy the infant’s requirement. Of the genetic causes, X-linked hypophosphataemic rickets is most commonly encountered, with a prevalence of 1 in 20 thousand children. Other genetic causes (such as autosomal dominant and autosomal recessive, or mutations in vitamin D 25-hydroxylase or 1-alpha-hydroxylase enzymes) are exceptionally rare.
While no map can avoid distortion entirely, a projection can accept more distortion in some areas in order to reduce it in others.
- Orthographic Projection - displays the earth as if viewed from a distance
- Gnomonic Projection - is projected from the center of the globe
- Lambert Azimuthal Equal-Area Projection - preserves true area
- Conic Projection - the globe is projected onto a cone, which is laid out flat and given a grid to form a map
- Lambert Conformal Conic Projection - uses two standard parallels that show true proportions
- Polyconic Projection - uses a series of cones to establish the parallels
- Mercator Projection - most commonly used; a flat, rectangular map with more distortion near the poles
Different map projections show different features and convey different kinds of information.
Harassment is a term that refers to any behavior, especially repeated incidents, that is specifically intended to distress, humiliate, or torment another individual. When this behavior occurs in schools, it is known as student harassment. The policies for dealing with this type of harassment will generally vary depending on the country, region, or even specific school where it takes place. Schools may have education programs to raise awareness among students, as well as strict no-tolerance policies, in order to help prevent harassment among students. Although it generally takes place between classmates, harassment can also be committed by a teacher or other authoritative figure. Physical assault is one type of student harassment. This behavior may include tripping, kicking, hitting, or otherwise physically attacking a student. Parents whose children are victims of physical violence at school can generally choose to file criminal assault charges against the bullying student. Another type of student harassment is sexual harassment. Sexual harassment refers to any unwanted or threatening sex-related action toward another person. This may include a person continually making unwanted sexual advances toward another student, touching him or her inappropriately, or any other sexually related behavior that causes discomfort. Even if the behavior is not intended to cause distress to the victim, it is usually considered sexual harassment if the victim feels uncomfortable. The behavior does not necessarily have to be physical to be considered student harassment. Bullies can also use verbal threats of violence as a form of assault toward fellow students. Taunting or name calling can also be considered a verbal form of harassment. Intimidation is another type of nonphysical student harassment that can be more subtle and difficult to prove than other more blatant forms of harassment. Forms of intimidation may include a bully demanding money or favors from another student, threatening blackmail, or simply acting in any way that makes a victim scared for his or her safety. A more modern form of student harassment is often referred to as cyberbullying. Cyberbullying between classmates generally involves the use of technology to stalk, threaten, or otherwise bother the victim. This can include contacting the victim through text messages, social networking sites, or online instant messages, as well as starting mocking or threatening websites about the victim for the purpose of humiliation or intimidation. This form of harassment tends to be more difficult for some schools to punish because it does not occur on school grounds, but generally does involve students and may be a continuation of harassment that does happen during the school day.
The "Age of Trilobites" and the Cambrian The most abundant and diverse animals of Cambrian time were the trilobites. Trilobites had long antennae, compound eyes, many jointed legs, and a hard exoskeleton like many of their modern arthropod relatives, such as lobsters, crabs, and insects. The Cambrian is sometimes called the "Age of Trilobites" because of their explosive diversification into all marine environments worldwide. In size, they ranged from a few millimeters (1 mm = 0.25 inches) to 45 centimeters (18 inches). Following the Cambrian, trilobites remained an abundant and diverse element of Ordovician marine faunas, but other groups of organisms that had been more minor elements of Cambrian faunas diversified dramatically. These include snails, clams, brachiopods, cephalopods, corals, bryozoans, and the now extinct graptolites. This post-Cambrian radiation, the Paleozoic Fauna, would dominate marine life until the end of Permian time.
May 14, 2013

New Study On Coral Reef Formations Lays To Rest Conflicting Theories

April Flowers for redOrbit.com - Your Universe Online

In the South Pacific, three types of coral reef island formations have fascinated geologists for ages. The coral of Tahiti forms a “fringing” reef, with a shelf growing close to the island’s shore. In Bora Bora, the “barrier” reefs are separated from the main island by a calm lagoon. Manuae represents the last type, an “atoll,” which appears as a ring of coral enclosing a lagoon with no island at the center. The mechanism underlying these reef shapes and how they developed over evolutionary time has produced an enduring debate between two hypotheses — the first from English naturalist Charles Darwin and the second from geologist Reginald Daly. A new study from researchers at MIT and Woods Hole Oceanographic Institution (WHOI) uses modern measurements and computer modeling to put this very old conundrum to rest. The findings of this study were published in the journal Geology. Fringing reefs, barrier reefs and atolls reflect different stages in a dramatic process, according to Darwin’s theory. This process occurs as an island sinks into the ocean floor — the ultimate fate of all volcanic ocean islands. As the volcanic rock cools and is carried away from the “hot spot” of the undersea volcano by tectonic plate movement, the island begins to sink as much as a few millimeters per year. Coral reefs on the island’s flanks grow upward toward the sea surface as the island sinks, so that the living coral organisms on top, and their symbiotic algae, get enough sunlight to keep pace with the sinking. A fringing reef progresses to a barrier reef and finally to an atoll as the coral grows and the island sinks. MIT/WHOI graduate student Michael Toomey, along with collaborators Taylor Perron, the Cecil and Ida Green Assistant Professor of Geology at MIT, and Andrew Ashton, a coastal geomorphologist at WHOI, discovered Darwin’s reef theory can’t explain the trajectories of all volcanic ocean-island systems. The team realized the Hawaiian Islands are following a different kind of progression — finding fringing reefs where they expected to find no reef at all, and drowned barrier reefs where they expected to find living barrier reefs. “Those islands are just not sinking into atolls like the Society Islands,” Toomey says, “so we wanted to develop a model to explain these differences.” On the other hand, Reginald Daly’s theory argues that sea-level cycles, not island subsidence, are the key to understanding coral formations. During ice ages, when the water becomes locked in ice sheets on land, sea level drops. It rises again between glaciations as the ice melts, suggesting to Daly that exposure to increased wave energy during sea-level drops would erode an island away. As the sea level rises again, the coral would regrow on submerged island platforms. The research team decided to create a computer model focusing on the relationships among coral growth, sunlight availability, water depth and erosion to consider both Darwin’s and Daly’s theories. The model also calculates how a coral reef develops as sea level varies over hundreds of thousands of years in combination with island subsidence rates. There is a delicate balance between island sinking and coral growth. The reef will drown if the combination of sea-level change and island sinking deepens the water faster than the coral can grow. 
Likewise, if the coral grows faster than the water deepens, the coral growth will catch up with the sea surface, then slow down again as the reef is exposed to eroding waves at sea level. If the team includes island subsidence without glacial sea-level cycles, they can model Darwin’s scenario. The configuration that emerges, however, does not resemble current reef formations. When they added in a sea-level history based on geological evidence and paleoclimate data, allowing the computer model to account for sea-level oscillation between the present level and approximately 393 feet below present levels every 100,000 years, the results were closer to real-world observations. The model ran a course of four glacial cycles, approximately 400,000 years, into the past. This yielded a coral-reef distribution that matched closely with the real-world observations — with the barrier reefs, drowned barrier reefs, and other forms all in the correct places on the map. “What this shows,” Perron says, “is that while island subsidence is important, as Darwin suggested, sea-level oscillations are also important for determining the distribution of reef types around the world.” The model is sophisticated enough to explain why the Society Islands follow Darwinian progression, but others do not. Without sea-level oscillations, most of the environment would likely progress as Darwin theorized, according to the simulations. Ashton says, when the oscillations are included, “It turns out there is only a little ‘Goldilocks’ zone, a narrow range of subsidence and reef-accretion rates, in which you can get that progression.” The researchers found it interesting that the parameters needed by the model to create that tiny zone of Darwin progression match up with the actual growth and subsidence rates of Tahiti. Apparently, Tahiti has subsided just slowly enough over the last few glacial cycles for the deep lagoon to develop without drowning permanently. Hawaii, on the other hand, is sinking so quickly -- at more than 2 millimeters (0.08 inches) per year -- that it will never see a Darwinian configuration. A little reef terrace is formed every time the sea level falls to its lowest point. At the end of each glacial period, as the sea level rises, the reef drowns and remains drowned. The biggest revelation gleaned from this model is that coral reefs are very sensitive to sea-level changes. Peter Burgess, professor and chair of earth sciences at Royal Holloway, University of London (RHUL), says this is useful information because exploring different scenarios of real-world reef formation can help produce a record of how sea level oscillated over long time scales. “It’s helpful to know how fast the sea level has changed in the past,” he says, “because there is a high probability it will change rapidly in the next couple hundred years, and we’d like to understand how that change might happen.”
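To give a feel for the interplay the article describes (coral growth racing against subsidence and an oscillating sea level), here is a toy simulation. It is emphatically not the MIT/WHOI model; the growth rate, subsidence rate, photic depth, sea-level swing, and time step are round, assumed numbers chosen only to illustrate the mechanism.

```python
import math

# Toy reef-growth model (illustrative only; this is not the MIT/WHOI code).
# Units: meters and years. All parameter values are round, assumed numbers.
SUBSIDENCE = 0.0005        # island sinking rate, m/yr (0.5 mm/yr)
MAX_GROWTH = 0.004         # maximum coral accretion rate, m/yr
PHOTIC_DEPTH = 50.0        # depth below which coral growth is negligible, m
SEA_LEVEL_SWING = 120.0    # roughly the 393 ft between a glacial lowstand and today, m
GLACIAL_PERIOD = 100_000   # years per glacial cycle

def sea_level(t):
    """Sea level relative to today (m); a sinusoid standing in for glacial cycles."""
    return -SEA_LEVEL_SWING * (1 - math.cos(2 * math.pi * t / GLACIAL_PERIOD)) / 2

def run(years=400_000, dt=100):
    reef_top = 0.0                            # reef surface, relative to modern sea level
    for t in range(0, years, dt):
        reef_top -= SUBSIDENCE * dt           # the island and its reef subside together
        depth = sea_level(t) - reef_top       # water depth over the reef surface
        if 0 < depth <= PHOTIC_DEPTH:
            # Submerged and sunlit: coral grows upward, but never above the sea surface.
            reef_top = min(reef_top + MAX_GROWTH * dt, sea_level(t))
        # depth > PHOTIC_DEPTH means the reef has "drowned"; depth <= 0 means it is
        # exposed to waves at sea level (erosion is ignored in this sketch).
    return reef_top

print(f"Reef top after 400,000 years: {run():.1f} m relative to modern sea level")
```

Varying SUBSIDENCE and MAX_GROWTH in a sketch like this shows the narrow band of rates in which a reef neither drowns nor simply tracks the sea surface, which is the "Goldilocks zone" idea Ashton describes.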
Unsolved problems in linguistics
From Wikipedia, the free encyclopedia
Some of the issues below are commonly recognized as problems per se, i.e., there is general agreement that the solution is unknown. Others may be described as controversies, i.e., while there is no common agreement about the answer, there are established schools of thought that believe they have a correct answer.
- Is there a universal definition of the word?
- Is there a universal definition of the sentence?
- Are there any universal grammatical categories which can be found in all languages?
- Can the elements contained in words (morphemes) and the elements contained in sentences (syntactic constituents) be shown to follow the same principles?
- The emergence of creole languages
- The origin of language is the major unsolved problem, despite centuries of interest in the topic.
- Unclassified languages (languages whose genetic affiliation has not been established, mostly due to lack of reliable data)
- Special case: Language isolates
- Undeciphered writing systems
- Language emergence:
- Language acquisition:
- Controversy: infant language acquisition / first language acquisition. How are infants able to learn language? One line of debate is between two points of view: that of psychological nativism, i.e., that the language ability is somehow "hardwired" in the human brain, and that of the "tabula rasa" or blank slate, i.e., that language is acquired through the brain's interaction with its environment. Another formulation of this controversy is "nature versus nurture".
- Is the human ability to use syntax based on innate mental structures, or is syntactic speech a function of intelligence and interaction with other humans? The question is closely related to those of language emergence and acquisition.
- The language acquisition device: How localized is language in the brain? Is there a particular area in the brain responsible for the development of language abilities, or is it only partially localized?
- What fundamental reasons explain why ultimate attainment in second language acquisition typically falls some way short of the native speaker's ability, with learners varying widely in performance?
- Animals and language: How much language (e.g. syntax) can animals be taught to use?
- An overall issue: Can we design ethical psycholinguistic experiments to answer the questions above?
This activity was selected for the On the Cutting Edge Reviewed Teaching Collection
This activity has received positive reviews in a peer review process involving five review categories. The five categories included in the process are:
- Scientific Accuracy
- Alignment of Learning Goals, Activities, and Assessments
- Pedagogic Effectiveness
- Robustness (usability and dependability of all components)
- Completeness of the ActivitySheet web page
For more information about the peer review process itself, please see http://serc.carleton.edu/NAGTWorkshops/review.html.
This page first made public: Aug 7, 2006
This is a short exercise that introduces basic thermodynamics. This exercise is designed for a mid/upper-level undergraduate geology course on the principles of mineralogy.
Skills and concepts that students must have mastered
Students should have knowledge of basic chemistry and of minerals equivalent to what they would learn in an introductory geology class.
How the activity is situated in the course
This activity is the 20th of 36 mineralogy exercises and is used around the middle of the course.
Content/concepts goals for this activity
- Learn to do fundamental thermodynamic calculations.
Higher order thinking skills goals for this activity
- Use data to create phase diagrams.
Other skills goals for this activity
Description of the activity/assignment
This is a short exercise that introduces basic thermodynamics. Students write the formulas for grossular, quartz, anorthite, and wollastonite. Then they answer questions and make calculations related to thermodynamics, phase equilibria, and the above minerals.
Determining whether students have met the goals
More information about assessment tools and techniques.
Download teaching materials and tips
This assignment can be downloaded in pdf (Acrobat (PDF) 405kB Jul7 05) format.
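As a rough illustration of the kind of calculation the exercise points toward (not part of the original assignment), the sketch below evaluates ΔG(T) = ΔH − TΔS for the reaction grossular (Ca3Al2Si3O12) + quartz (SiO2) = anorthite (CaAl2Si2O8) + 2 wollastonite (CaSiO3), and solves for the temperature at which ΔG = 0. The enthalpy and entropy numbers are placeholders chosen only to show the arithmetic; real values would come from a thermodynamic database, and pressure effects are ignored.

```python
# Illustrative only: dH_r and dS_r are made-up placeholder numbers, not measured data.
# Substitute values from a thermodynamic database (at the pressure of interest) for a real answer.
dH_r = 48_000.0   # J/mol, reaction enthalpy (placeholder)
dS_r = 75.0       # J/(mol*K), reaction entropy (placeholder)

def dG(T_kelvin):
    """Gibbs free energy of reaction at temperature T, ignoring the P*dV term."""
    return dH_r - T_kelvin * dS_r

T_eq = dH_r / dS_r  # temperature where dG = 0
print(f"dG at 298 K  : {dG(298.15) / 1000:.1f} kJ/mol")
print(f"equilibrium T: {T_eq:.0f} K ({T_eq - 273.15:.0f} C)")
# dG > 0: grossular + quartz stable;  dG < 0: anorthite + wollastonite stable.
```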
Ancient India: Festivals and celebrations!
What do the festivals represent? How were festivals celebrated in the ancient times of India?
Diwali is the festival of lights. Another name for Diwali is Deepawali. Diwali is celebrated on the darkest night of Kartik (the eighth month of the Hindu calendar). Diwali is the most important festival; its importance has been recognized and celebrated for 5000-7000 years, and it is now celebrated not only by Hindus but by any Indian who appreciates and wants to join in the celebrations of lights. Diwali is also celebrated, much as Christmas is in the West, as a time to mark the beginning of a new year. Preparations for Diwali start just before the festival and include cleaning the house, putting up special decorations and painting the house.
What is Holi? When was Holi celebrated?
Holi is the color festival, celebrated on the full moon day in the month of Phalguna. The Holi festival starts just at the beginning of spring. This festival is celebrated in the northern states of India, is like a carnival, and is very popular. The Holi festival is a time when all Indians come together, leaving all their sadness and grievances from the past behind. In the past, the festival was only for the Shudras, who were not allowed to participate in other festivals. Today that restriction has lost its significance; the festival is now for all Indians and is a favourite festival of all time.
In India, the celebrations of fairs and festivals form an amazing, wondrous and joyful series of events, marking the rites of birth, death and renewal. The celebrations and festivals are moments of remembrance of the birthdays and great deeds of gods, goddesses, heroes, heroines, gurus, prophets and saints. All Indians from different religions and beliefs, such as Hindus, Muslims, Christians, Sikhs, Buddhists, and other religious groups, celebrate individually, or together as a mixture of groups if their festivals fall on the same day. The ancient tradition of celebrating festivals goes back to the Vedic times of the Aryans. The Vedic scriptures and literature give many sources of information about festivals when celebrations were carried on to honor gods, trees, rivers and mountains. These festivals include prayers and fasting and also have social and cultural significance. In the festivals of India there were performances of music, dancing and drama, which took place alongside rugged physical activities. Other activities included wrestling, and wild bull, elephant, horse and rhino races.
Information on a few common festivals
Why is this topic important to Indian history?
This topic is important to Indian history because it leaves a special story about culture and a true Indian's faith. The reason why Indians have these festivals is so that all Indians can celebrate their religion and customs, practice their special rituals and pray to their God.
How are the festivals celebrated today, in the present world?
In today's world, Indian festivals all around the world are celebrated with more enjoyment.
There are many fun activities, including the basics of praying to the Gods and celebrating religion and customs. Some festivals these days have enjoyable rides such as roller coasters and jumping castles, and a lot of singing performances and trivia questions. There is also food that is shared with everyone from different religions. Some festivals stay the same and don't change.
What changes have there been in the area from past to present?
There are over 60 festivals.
10 famous festivals/celebrations!
1. Durga Puja
2. Diwali (Deepawali)
3. Chhat Puja (the only festival dedicated to the Sun God)
6. Ratha Yatra
7. Raksha Bandhan
9. Thai Pongal
by David F. Coppedge * When speculating about life in the universe, scientists need to be more realistic than Hollywood. In Star Trek, no matter where the actors land, they can walk around and breathe the air. That may be easier on directors, but for a surface to be habitable, there are physical requirements. Astrobiologists limit their searches to regions around stars where liquid water can exist. Because liquid water is bounded by its freezing and boiling points, a "Goldilocks" zone neither too hot nor too cold must be found at particular distances from a star. Extending the inner and outer radii in the orbital plane produces a ring-shaped region called the Continuously Habitable Zone, or CHZ. Under-surface oceans may exist on some planets or moons, but surface life must be in the zone. A paper in Icarus last August added a complication to the CHZ concept. Earlier estimates overlooked the hazards of ultraviolet light. Highly-ionizing UV radiation rapidly destroys organic molecules. Many massive stars output prodigious amounts of UV. This cuts down on the number of candidate stars. Of twenty-one extrasolar planets studied, only five had an overlap between the UV-safe zone and the CHZ, where life could exist. Smaller stars have a different problem: a much narrower CHZ. The zone is also closer in, meaning that any planet lucky enough to fit in the zone would become gravitationally locked to the star, with one hemisphere always facing the star, overheating, and the other hemisphere facing away, forever freezing. It is unlikely life could survive except along a very thin longitude near the terminator (the boundary between light and darkness). This means that life is highly improbable except around sun-like stars, a mere 5% of all stars. Further complications arise when considering a star's host galaxy. Conditions too close to the center are exposed to hazardous radiation levels; too far out lack the heavy elements required for life. This means there is also a Galactic Habitable Zone (GHZ) to consider. Having a zone defined, of course, does not mean an earth-like planet will be present. Astronomers continue to find extra-solar planets at an accelerating clip. Upcoming missions like the Terrestrial Planet Finder (TPF), Kepler, and Space Interferometry Mission (SIM) may one day succeed in finding an earthlike planet around another sun-like star. Spectral analysis may even be able to infer the possible presence of life from certain "biosignatures" such as gas ratios unexpected from geological or atmospheric processes alone. Given the vastness of space and the number of stars in the "cosmic lottery," astrobiologists are not discouraged at the prospects for life, even with few suitable zones. What does this mean for Biblical creationists? The Bible does not specifically rule out some kind of life on other worlds. Many Christian thinkers have speculated about it. In the eighteenth century, in fact, the majority thought it foolish to deny it. Now that we have the means, searching for data to replace speculation is a good thing that Christians should welcome. A day of evidence is worth a millennium of conjecture. *David F. Coppedge works in the Cassini program at the Jet Propulsion Laboratory. Cite this article: Coppedge, D. 2007. Habitable Zones. Acts & Facts. 36 (4).
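A common back-of-the-envelope way to place the habitable-zone boundaries described above is to scale them with the square root of the star's luminosity, using assumed stellar-flux limits at the inner and outer edges. The sketch below uses round-number flux limits (treat them as assumptions; published estimates vary with the climate model used) to show how the zone narrows and moves inward for dimmer stars.

```python
import math

# Approximate effective-flux limits, in units of the flux Earth receives.
# These are assumptions; published boundary estimates differ by model.
S_INNER = 1.1    # inner (runaway-greenhouse) edge
S_OUTER = 0.53   # outer (maximum-greenhouse) edge

def habitable_zone(luminosity_solar):
    """Return (inner, outer) habitable-zone radii in AU for a star of given luminosity (in Suns)."""
    inner = math.sqrt(luminosity_solar / S_INNER)
    outer = math.sqrt(luminosity_solar / S_OUTER)
    return inner, outer

for name, L in [("Sun-like star", 1.0), ("K dwarf", 0.3), ("M dwarf", 0.02)]:
    r_in, r_out = habitable_zone(L)
    print(f"{name:13s} L={L:5.2f} L_sun  HZ ~ {r_in:.2f}-{r_out:.2f} AU (width {r_out - r_in:.2f} AU)")
```

Even this crude scaling shows the point made in the article: for a dim M dwarf the zone is both very narrow and so close to the star that tidal locking becomes likely.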
The steradian (symbol: sr) is the SI unit of solid angle. It is used to describe two-dimensional angular spans in three-dimensional space, analogous to the way in which the radian describes angles in a plane. The name is derived from the Greek stereos for "solid" and the Latin radius for "ray, beam". The steradian, like the radian, is dimensionless because 1 sr = m²·m⁻² = 1. It is useful, however, to distinguish between dimensionless quantities of different nature, so in practice the symbol "sr" is used where appropriate, rather than the derived unit "1" or no unit at all. For example, radiant intensity can be measured in watts per steradian (W·sr⁻¹). The steradian was formerly an SI supplementary unit, but this category was abolished from the SI in 1995 and the steradian is now considered an SI derived unit. A steradian is defined as the solid angle subtended at the center of a sphere of radius r by a portion of the surface of the sphere whose area, A, equals r². Since A = r², it corresponds to the area of a spherical cap (A = 2πrh) (wherein h stands for the "height" of the cap), and the relationship h/r = 1/(2π) holds. Therefore one steradian corresponds to the solid angle of a simple cone subtending an angle θ, with θ given by θ = arccos(1 − h/r) = arccos(1 − 1/(2π)) ≈ 0.572 rad. This angle corresponds to an apex angle of 2θ ≈ 1.144 rad or 65.54°. Because the surface area of a sphere is 4πr², the definition implies that a sphere measures 4π ≈ 12.56637 steradians. By the same argument, the maximum solid angle that can be subtended at any point is 4π sr. A steradian can also be called a squared radian. A steradian is also equal to the spherical area of a polygon having an angle excess of 1 radian, to 1/(4π) of a complete sphere, or to (180/π)² ≈ 3282.80635 square degrees. The solid angle (in steradians) of the simple cone subtending an angle θ is given by Ω = 2π(1 − cos θ).
Analogue to radians
In two dimensions, the angle in radians is related to the arc length it cuts out: θ = l/r, where l is the arc length and r is the radius of the circle. Now in three dimensions, the solid angle in steradians is related to the area it cuts out: Ω = S/r², where S is the surface area of the spherical cap and r is the radius of the sphere. Steradians only go up to 4π ≈ 12.56637, so the large multiples are not usable for the base unit, but could show up in such things as rate of coverage of solid angle, for example.
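To make the cone relationships above concrete, here is a small sketch (not part of the original article) that evaluates Ω = 2π(1 − cos θ) and inverts it to recover the half-angle that subtends exactly one steradian.

```python
import math

def cone_solid_angle(theta):
    """Solid angle (steradians) of a cone with half-angle theta (radians)."""
    return 2 * math.pi * (1 - math.cos(theta))

def half_angle_for(omega):
    """Half-angle (radians) of the cone subtending a solid angle omega (steradians)."""
    return math.acos(1 - omega / (2 * math.pi))

theta_1sr = half_angle_for(1.0)
print(f"half-angle for 1 sr: {theta_1sr:.4f} rad = {math.degrees(theta_1sr):.2f} deg")
print(f"apex angle         : {2 * theta_1sr:.4f} rad = {math.degrees(2 * theta_1sr):.2f} deg")
print(f"full sphere        : {cone_solid_angle(math.pi):.5f} sr (= 4*pi)")
```

Running it reproduces the figures quoted above: a half-angle of about 0.572 rad, an apex angle of about 65.5°, and 4π ≈ 12.566 sr for the whole sphere.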
Artificial intelligence may one day embrace the meaning of the expression "A picture is worth a thousand words," as scientists are now teaching programs to describe images as humans would. Someday, computers may even be able to explain what is happening in videos just as people can, the researchers said in a new study. Computers have grown increasingly better at recognizing faces and other items within images. Recently, these advances have led to image captioning tools that generate literal descriptions of images. Now, scientists at Microsoft Research and their colleagues are developing a system that can automatically describe a series of images in much the same way a person would by telling a story. The aim is not just to explain what items are in the picture, but also what appears to be happening and how it might potentially make a person feel, the researchers said. For instance, if a person is shown a picture of a man in a tuxedo and a woman in a long, white dress, instead of saying, "This is a bride and groom," he or she might say, "My friends got married. They look really happy; it was a beautiful wedding." The researchers are trying to give artificial intelligence those same storytelling capabilities. "The goal is to help give AIs more human-like intelligence, to help it understand things on a more abstract level — what it means to be fun or creepy or weird or interesting," said study senior author Margaret Mitchell, a computer scientist at Microsoft Research. "People have passed down stories for eons, using them to convey our morals and strategies and wisdom. With our focus on storytelling, we hope to help AIs understand human concepts in a way that is very safe and beneficial for mankind, rather than teaching it how to beat mankind." Telling a story To build a visual storytelling system, the researchers used deep neural networks, computer systems that learn by example — for instance, learning how to identify cats in photos by analyzing thousands of examples of cat images. The system the researchers devised was similar to those used for automated language translation, but instead of teaching the system to translate from one language to another, the scientists trained it to translate images into sentences. The researchers used Amazon's Mechanical Turk, a crowdsourcing marketplace, to hire workers to write sentences describing scenes consisting of five or more photos. In total, the workers described more than 65,000 photos for the computer system. These workers' descriptions could vary, so the scientists preferred to have the system learn from accounts of scenes that were similar to other accounts of those scenes. Then, the scientists fed their system more than 8,100 new images to examine what stories it generated. For instance, while an image captioning program might take five images and say, "This is a picture of a family; this is a picture of a cake; this is a picture of a dog; this is a picture of a beach," the storytelling program might take those same images and say, "The family got together for a cookout; they had a lot of delicious food; the dog was happy to be there; they had a great time on the beach; they even had a swim in the water." One challenge the researchers faced was how to evaluate how effective the system was at generating stories.
The best and most reliable way to evaluate story quality is human judgment, but the computer generated thousands of stories that would take people a lot of time and effort to examine. Instead, the scientists tried automated methods for evaluating story quality, to quickly assess computer performance. In their tests, they focused on one automated method with assessments that most closely matched human judgment. They found that this automated method rated the computer storyteller as performing about as well as human storytellers. Everything is awesome Still, the computerized storyteller needs a lot more tinkering. "The automated evaluation is saying that it's doing as good or better than humans, but if you actually look at what's generated, it's much worse than humans," Mitchell told Live Science. "There's a lot the automated evaluation metrics aren't capturing, and there needs to be a lot more work on them. This work is a solid start, but it's just the beginning." For instance, the system "will occasionally 'hallucinate' visual objects that are not there," Mitchell said. "It's learning all sorts of words but may not have a clear way of distinguishing between them. So it may think a word means something that it doesn't, and so [it will] say that something is in an image when it is not." In addition, the computerized storyteller needs a lot of work in determining how specific or generalized its stories should be. For example, during the initial tests, "it just said everything was awesome all the time — 'all the people had a great time; everybody had an awesome time; it was a great day,'" Mitchell said. "Now maybe that's true, but we also want the system to focus on what's salient." In the future, computerized storytelling could help people automatically generate tales for slideshows of images they upload to social media, Mitchell said. "You'd help people share their experiences while reducing nitty-gritty work that some people find quite tedious," she said. Computerized storytelling "can also help people who are visually impaired, to open up images for people who can't see them." If AI ever learns to tell stories based on sequences of images, "that's a stepping stone toward doing the same for video," Mitchell said. "That could help provide interesting applications. For instance, for security cameras, you might just want a summary of anything noteworthy, or you could automatically live tweet events," she said. The scientists will detail their findings this month in San Diego at the annual meeting of the North American Chapter of the Association for Computational Linguistics. Original article on Live Science.
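The article gives no implementation details, but the general idea of translating a photo sequence into sentences can be sketched with a generic encoder-decoder network. The toy model below (PyTorch, untrained, fed random stand-in image features) is only meant to show the shape of such a system; it is not Microsoft's architecture, and every dimension, layer choice, and feature source here is an assumption.

```python
import torch
import torch.nn as nn

class StorytellerSketch(nn.Module):
    """Toy album-to-story model: encode a sequence of photo features, decode story tokens."""
    def __init__(self, feat_dim=2048, hidden=512, vocab=10000):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)   # reads the photo sequence
        self.embed = nn.Embedding(vocab, hidden)                    # story word embeddings
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)     # emits story tokens
        self.out = nn.Linear(hidden, vocab)

    def forward(self, photo_feats, story_tokens):
        # photo_feats: (batch, n_photos, feat_dim), e.g. CNN features for 5 photos per album
        _, h = self.encoder(photo_feats)      # summarize the whole album into a hidden state
        emb = self.embed(story_tokens)        # (batch, story_len, hidden)
        dec_out, _ = self.decoder(emb, h)     # condition the decoder on the album summary
        return self.out(dec_out)              # logits over the vocabulary at each step

model = StorytellerSketch()
feats = torch.randn(2, 5, 2048)                 # two albums of five photos (random stand-ins)
tokens = torch.randint(0, 10000, (2, 20))       # placeholder story tokens
logits = model(feats, tokens)
print(logits.shape)                             # torch.Size([2, 20, 10000])
```

In a real system the random features would come from a pretrained image network, and training would use the crowd-sourced album descriptions mentioned above as target sentences.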
whisk fern
whisk fern, either of the two species of the primitive fern genus Psilotum in the family Psilotaceae of the order Psilotales and the class Psilotopsida of the division Pteridophyta (the lower vascular plants). A whisk fern has water- and food-conducting tissues but lacks true leaves and roots. Photosynthesis occurs in the aerial stems, and water and mineral absorption occurs in the horizontal underground rootlike stems (rhizomes), which receive water and nutrients from fungi through a mycorrhizal association. There are two phases in the life cycle of a whisk fern. The large asexual plants (sporophytes) produce spores that develop into very small colourless sexual plants (gametophytes), which are similar to rhizomes in overall appearance. Eggs and sperm are produced in special structures on their surfaces. Union of these gametes initiates the second sporophyte phase. The genus Psilotum contains two species and one hybrid (P. complanatum, P. nudum, and P. x intermedium) of pantropical plants with whisklike green stems and scalelike appendages called “enations,” which may represent reduced leaves, but they contain no vascular tissue (veins). P. nudum also reaches into the subtropics, growing as far north as the southern United States in the New World, and it is cultivated as a greenhouse plant. In nature, the plants mostly grow as epiphytes (living on other plants).
The strides that Dr. Martin Luther King, Jr. made during the Civil Rights Movement continue to be remembered and honored today, but did you know it actually took 15 years for Martin Luther King, Jr. Day to be created? In 1968, Congressman John Conyers introduced legislation to make a national holiday in honor of Dr. King, four days after he was assassinated. The bill was initially stalled, but luckily, Conyers and Representative Shirley Chisholm were persistent and they resubmitted the legislation during each legislative session. This, along with mounting pressure during the civil rights marches in Washington DC in 1982 and 1983, got the bill passed. On November 2, 1983, President Ronald Reagan signed the bill, establishing the third Monday of every January as the Martin Luther King, Jr. National Holiday, beginning in 1986. The first national Martin Luther King, Jr. Day was observed on January 20, 1986. So today, we honor Dr. King and his message of compassion and equality for all. Happy Martin Luther King, Jr. Day!
Researchers show that benign, genetically engineered mosquitoes can outcompete disease-causing ones, suggesting a possible way to control the disease. Mosquitoes genetically engineered for malaria resistance can outcompete their wild counterparts–at least in the lab, according to researchers at Johns Hopkins University. While previous studies have described the creation of malaria-resistant mosquitoes, this is the first time that researchers have shown a reproductive advantage for the genetically engineered organisms, which is an important requirement if such mosquitoes are to be used as a practical malaria-control strategy. Malaria kills more than a million people worldwide each year, most of them children in sub-Saharan Africa, according to the World Health Organization. The disease is caused by Plasmodium parasites, protozoa that are transmitted from person to person by female Anopheles mosquitoes. Researchers have proposed a method of controlling the spread of malaria by introducing into the wild mosquitoes that can’t transmit the parasite, but computer models suggest that malaria-resistant mosquitoes must almost completely replace the native population in order to stop the cycle of transmission. In the current study, Marcelo Jacobs-Lorena and his colleagues at the Johns Hopkins School of Public Health, in Baltimore, put equal numbers of malaria-resistant mosquitoes and ordinary mosquitoes in a cage and allowed them to feed on mice infected with the malaria-causing parasite. The researchers then collected the eggs laid by the insects, reared them into adulthood, and allowed the new generation of mosquitoes to feed on infected mice. After nine generations, 70 percent of the mosquitoes were malaria resistant, meaning that the genetically engineered insects had largely outcompeted their nonresistant counterparts. In contrast, mosquitoes that fed on uninfected mice did not show any fitness differences. The researchers published their findings in the early online edition of the Proceedings of the National Academy of Sciences. Earlier work by Hillary Hurd, a parasitologist at Keele University, in the United Kingdom, showed that infection with Plasmodium affects mosquitoes’ fertility. “There’s a fitness cost to being infected,” Hurd says, so mosquitoes that are protected from infection should have an advantage over those that aren’t protected. The results of the Johns Hopkins study support that conclusion, she says. Researchers have created different types of malaria-resistant mosquitoes by interfering with the Plasmodium parasite’s complex developmental cycle. After a mosquito ingests the parasite from infected blood, the parasite invades the mosquito’s gut and forms a cyst. That cyst eventually ruptures and releases spores into the mosquito’s body, which migrate to the salivary glands. Then, when the mosquito bites another person, it transmits the parasite. Jacobs-Lorena and his colleagues engineered mosquitoes to produce a peptide called SM1 that blocks Plasmodium from invading the mosquito’s gut, thus interrupting the parasite’s development. Since it is not a naturally occurring peptide, SM1 doesn’t activate the mosquito’s immune system, according to Hurd. “This is a very different strategy than what other groups are working on,” she says. 
“If you induce an immune response … there is a fitness cost too.” Unlike other groups that conducted experiments, the researchers bred the genetically engineered mosquitoes with ordinary ones, so the insects in their study had just one copy of the SM1 gene instead of two. “Our hypothesis is that there are many genes throughout the genome that confer fitness disadvantage, but they’re recessive,” says Jacobs-Lorena. So in mosquitoes engineered to have two copies of SM1, the traits coded by those recessive genes express themselves and reduce the fitness of the mosquitoes. Having just one copy of SM1 doesn’t seem to reduce the insects’ resistance to the malaria parasite, he adds. Hurd cautions that the malaria-causing parasites used by the Johns Hopkins team infect mice, not humans. “Anyone taking this strategy needs to be certain that the molecule stops transmission of the human parasite,” she says. “Many of them don’t.” More work needs to be done before transgenic mosquitoes can be used in the field as a malaria-control method. “Transgenic mosquitoes by themselves will never be able to solve the problem,” Jacobs-Lorena says. “The only way is to use a combination of approaches: a coordinated attack using drugs, insecticides, transgenic mosquitoes, and perhaps vaccines. Then we have a chance to make a significant change in the transmission of the disease. No one should think of this as a silver bullet.” Become an Insider to get the story behind the story — and before anyone else.
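The reported shift from an even mix to about 70 percent resistant insects after nine generations is consistent with a modest per-generation fitness edge. The sketch below is a simple one-locus, clonal-style selection calculation, not the authors' analysis; it just asks what constant fitness advantage would produce that change and then replays the trajectory.

```python
p0, pt, generations = 0.5, 0.7, 9   # starting frequency, observed frequency, generations

# Simple selection model: the odds of being resistant grow by a constant factor w each generation.
w = ((pt / (1 - pt)) / (p0 / (1 - p0))) ** (1 / generations)
print(f"implied relative fitness of resistant mosquitoes: w = {w:.3f} (~{(w - 1) * 100:.0f}% advantage)")

# Forward simulation with that advantage reproduces the observed trajectory.
p = p0
for gen in range(1, generations + 1):
    p = p * w / (p * w + (1 - p))
    print(f"generation {gen}: {p:.1%} resistant")
```

Under these simplified assumptions the data imply roughly a 10 percent per-generation advantage for the resistant insects when fed on infected blood, which is in the spirit of the fitness cost of infection that Hurd describes.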
A dust devil in the convective boundary layer (CBL) was simulated by an Euler-Lagrange approach. By means of the large-eddy simulation method and a smoothly stretched grid, flow fields of the small-scale whirlwind were obtained. Movements of 20,000 dust particles were tracked by computing external forces and velocities with the aid of the simulated high-resolution air flow fields. Characteristics of the simulated airflow were in good accordance with field observations. The distribution of particles reproduced the shape of a dust devil. Statistics of particle trajectories revealed basic properties of the movement of dust particles. Small particles with a diameter of 0.04 mm were able to form a rolling cylinder and to be lifted easily to a certain height. Dust particles of 0.1 mm spiraled upwards like a cone with a small cone angle. Particles with diameters of 0.16 mm and 0.3 mm were obviously thrown outward with limited lifting height and fell back to the ground. The negative vertical pressure gradient in the dust devil strengthened particle lifting, unlike in horizontal wind systems, where the vertical pressure gradient is not the major driving force. The numerical results showed why the total suspended particulates (TSP) concentration greatly exceeds the standard value all year round on the Loess Plateau, where loess dust from the local ground is one of the major sources of air pollutants. Ninety percent of loess dusts are smaller than 0.04 mm in diameter, which makes them easily lifted to a high altitude by dust devils even without obvious horizontal wind. Because thermal plumes are common in the CBL, dust devils can occur frequently at many locations on the Loess Plateau. Given the natural circumstances of the Loess Plateau and the thermodynamic characteristics of dust devils, the dust-devil-scale simulation indicated one source of the background TSP on the Loess Plateau and how these particles are lifted into the atmosphere.
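The Lagrangian half of the approach summarized above, advancing each dust grain through a prescribed air-flow field under drag and gravity, can be illustrated with a highly simplified sketch. The swirling updraft below is an idealized stand-in for the LES fields used in the study, and all particle and flow numbers are illustrative only.

```python
import math

# --- illustrative air-flow field: solid-body-style swirl plus a Gaussian updraft core ---
def air_velocity(x, y, z):
    r = math.hypot(x, y) + 1e-9
    v_theta = 8.0 * min(r / 5.0, 5.0 / r)      # m/s, swirl peaking at r = 5 m (made-up profile)
    w = 6.0 * math.exp(-(r / 5.0) ** 2)        # m/s, updraft strongest near the axis
    return (-v_theta * y / r, v_theta * x / r, w)

# --- particle properties: a 0.04 mm quartz-like grain ---
d, rho_p, mu, g = 40e-6, 2650.0, 1.8e-5, 9.81
tau = rho_p * d**2 / (18 * mu)                 # Stokes relaxation time, ~0.013 s

def track(pos=(6.0, 0.0, 0.1), vel=(0.0, 0.0, 0.0), dt=1e-3, t_end=10.0):
    x, y, z = pos
    vx, vy, vz = vel
    for _ in range(int(t_end / dt)):
        ux, uy, uz = air_velocity(x, y, z)
        # Stokes drag toward the local air velocity, plus gravity on the vertical component
        vx += ((ux - vx) / tau) * dt
        vy += ((uy - vy) / tau) * dt
        vz += ((uz - vz) / tau - g) * dt
        x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
        if z < 0:                              # particle fell back to the ground
            return x, y, 0.0
    return x, y, z

print("final position (m):", tuple(round(c, 2) for c in track()))
```

Because the Stokes settling speed of a 0.04 mm grain (roughly 0.1 m/s with these numbers) is far smaller than the assumed updraft, the sketch lifts the grain easily, echoing the paper's finding that the finest loess particles are carried aloft while coarser grains are flung outward and fall back.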
Phosphorus is most abundantly seen in the body as a constituent of the molecule phosphate, one of the bone salts which add structural rigidity to the softer protein matrix of bone and teeth. Perhaps phosphorus's most important metabolic role is as a constituent of the molecule phosphate. When this molecule links to an adenosine diphosphate (ADP) molecule, adenosine triphosphate (ATP) is formed, possessing a high-energy phosphate bond. When broken, this bond releases energy and the phosphate, reforming an ADP molecule. The ATP "energy" molecule is formed during glycolysis and other processes involving the release of chemical energy from food. ATP is used as the primary source of energy for many metabolic and enzymatic activities, especially muscle contraction, active transport, and the formation of DNA. Phosphate is an important constituent of RNA and DNA. It serves to link the individual nucleotides with one another. The energy released from the high-energy phosphate bond of ATP is essential for the operation of the sodium/potassium pump, which exchanges three sodium ions for two potassium ions across a biological membrane. This pump is used to regulate the relative amounts of sodium and potassium excreted and retained in the body. Phosphate, from ATP, reacts with choline to initiate synthesis of phospholipids, which are essential constituents of cell membranes. Phospholipids are instrumental in regulating cellular permeability and are found in the exterior membrane of nerve cells. They are also helpful in solubilizing relatively nonsoluble triglycerides and cholesterols. ADP, which contains two phosphate groups, is a constituent of blood platelets and is secreted from platelet granules to stimulate platelet aggregation for blood clotting. Phosphate also plays an important role, due to its effective buffering action, in maintaining acid/base balance in blood. Phosphorus absorption is about 50-70% efficient, as calcium, iron, and zinc tend to complex with phosphorus in the stomach, thus reducing absorption. Vitamin D tends to promote the absorption of both phosphorus and calcium from the intestine. Excretion through the urine regulates the body's level of phosphorus.
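For reference, the energy release described above corresponds to the hydrolysis reaction ATP + H2O → ADP + Pi, whose standard free-energy change is commonly quoted at roughly −30.5 kJ/mol (about −7.3 kcal/mol); under actual cellular concentrations the release is larger, often cited near −50 kJ/mol.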
THUNDER BAY – This weekend it is likely that the movie Apollo 18 will top the box office. The movie outlines the dangers of space. However, one very real danger in space is that the orbital debris floating in orbit around the Earth is reaching a critical level. That is the finding of a National Research Council report on orbital debris. “The current space environment is growing increasingly hazardous to spacecraft and astronauts,” said Donald Kessler, chair of the committee that wrote the report and retired head of NASA's Orbital Debris Program Office. “NASA needs to determine the best path forward for tackling the multifaceted problems caused by meteoroids and orbital debris that put human and robotic space operations at risk.” NASA states, “Space debris encompasses both natural (meteoroid) and artificial (man-made) particles. Meteoroids are in orbit about the sun, while most artificial debris is in orbit about the Earth. Hence, the latter is more commonly referred to as orbital debris. Orbital debris is any man-made object in orbit about the Earth which no longer serves a useful function. Such debris includes nonfunctional spacecraft, abandoned launch vehicle stages, mission-related debris and fragmentation debris”. “There are more than 20,000 pieces of debris larger than a softball orbiting the Earth. They travel at speeds up to 17,500 mph, fast enough for a relatively small piece of orbital debris to damage a satellite or a spacecraft. There are 500,000 pieces of debris the size of a marble or larger. There are many millions of pieces of debris that are so small they can't be tracked. “Even tiny paint flecks can damage a spacecraft when traveling at these velocities. In fact a number of space shuttle windows have been replaced because of damage caused by material that was analyzed and shown to be paint flecks”. “The greatest risk to space missions comes from non-trackable debris,” said Nicholas Johnson, NASA chief scientist for orbital debris. Although NASA's meteoroid and orbital debris programs have responsibly used their resources, the agency's management structure has not kept pace with increasing hazards posed by abandoned equipment, spent rocket bodies, and other debris orbiting the Earth, says a new report by the National Research Council. NASA should develop a formal strategic plan to better allocate resources devoted to the management of orbital debris. In addition, removal of debris from the space environment or other actions to mitigate risks may be necessary. The complexity and severity of the orbital debris environment combined with decreased funding and increased responsibilities have put new pressures on NASA, according to the report. Some scenarios generated by the agency's meteoroid and orbital debris models show that debris has reached a “tipping point,” with enough currently in orbit to continually collide and create even more debris, raising the risk of spacecraft failures, the report notes. In addition, collisions with debris have disabled and even destroyed satellites in the past; a recent near-miss of the International Space Station underscores the value in monitoring and tracking orbital debris as precisely as possible. The strategic plan NASA develops should provide a basis for prioritizing efforts and allocating funds to the agency's numerous meteoroid and orbital debris programs, the report says. Currently, the programs do not have a single management and budget structure that can efficiently coordinate all of these activities.
The programs are also vulnerable to changes in personnel, as nearly all of them are staffed by just one person. The strategic plan, which should consider short- and long-term objectives, a schedule of benchmark achievements, and priorities among them, also should include potential research needs and management issues. Removal of orbital debris introduces another set of complexities, the report adds, because only about 30 percent of the objects can be attributed to the United States. “The Cold War is over, but the acute sensitivity regarding satellite technology remains,” explained committee vice chair George Gleghorn, former vice president and chief engineer for the TRW Space and Technology Group. Although NASA has identified the need for removing debris, the agency and U.S. government as a whole have not fully examined the economic, technological, political, and legal considerations, the report says. For example, according to international legal principle, no nation may salvage or otherwise collect other nations’ space objects. Therefore, the report recommends, NASA should engage the U.S. Department of State in the legal requirements and diplomatic aspects of active debris removal. In its examination of NASA’s varied programs and efforts, the committee found numerous areas where the organization should consider doing more or different work. For example, NASA should initiate a new effort to record, analyze, report, and share data on spacecraft anomalies. This will provide additional knowledge about the risk from debris particulates too small to be cataloged under the current system yet large enough to potentially cause damage. In addition, NASA should lead public discussion of orbital debris and emphasize that it is a long-term concern for society that must continue to be addressed. Stakeholders, including Congress, other federal and state agencies, and the public, should help develop and review the strategic plan, and it should be revised and updated at regular intervals. Image: Space shuttles Endeavour and Discovery meet in a “nose-to-nose” photo opportunity as the vehicles switch locations Aug. 11 at NASA’s Kennedy Space Center, Fla. Now in Orbiter Processing Facility-1 (OPF-1), Discovery will go through more preparations for public display at the Smithsonian’s National Air and Space Museum Steven F. Udvar-Hazy Center in Virginia next spring. Endeavour will be stored in the Vehicle Assembly Building (VAB) until October, when it will be moved into OPF-2 to continue being readied for display at the California Science Center in Los Angeles next summer. Image credit: NASA/Frankie Martin
Where did the Reformation take place?
The Protestant Reformation began in Wittenberg, Germany, on October 31, 1517, when Martin Luther, a teacher and a monk, published a document he called Disputation on the Power of Indulgences, or 95 Theses.
What 3 things caused the Reformation?
Money-generating practices in the Roman Catholic Church, such as the sale of indulgences. Demands for reform by Martin Luther, John Calvin, Huldrych Zwingli, and other scholars in Europe. The invention of the mechanized printing press, which allowed religious ideas and Bible translations to circulate widely.
What are 3 facts about the Reformation?
Facts – What you should know about the Reformation
- Martin Luther Didn't Intend to Start a New Church.
- There Have Been Many Reformations …
- The Printing Press Played a Vital Role.
- Martin Luther May Not Have Nailed His 95 Theses to the Door at Wittenberg.
- It Propelled the Spread of Literacy.
What countries were involved in the Reformation?
Beginning in Germany and Switzerland in the 16th century, the Radical Reformation developed radical Protestant churches throughout Europe. The term includes Thomas Müntzer, Andreas Karlstadt, the Zwickau prophets, and Anabaptists like the Hutterites and Mennonites.
Where did the Catholic Church start?
Why did Martin Luther start the Reformation?
Luther sparked the Reformation in 1517 by posting, at least according to tradition, his "95 Theses" on the door of the Castle Church in Wittenberg, Germany – these theses were a list of statements that expressed Luther's concerns about certain Church practices – largely the sale of indulgences, but they were based on …
Why was Martin Luther excommunicated from the Church?
Martin Luther was very much against the worldliness of Pope Leo X, the Clergy, and the spiritual emptiness of the Catholic Church. All this resentment provoked the Pope, who declared Martin Luther a heretic and sent a letter warning him that he would be excommunicated from the Church.
When did Catholics and Protestants split?
The 16th century brought the Reformation, which resulted in the formation of Protestantism as an entity distinct from Catholicism. In response, the Catholic Church began its own reformation process known as the "counter-reformation", which culminated in the Council of Trent.
How many Protestants were killed during the Reformation?
Many people were exiled, and hundreds of dissenters were burned at the stake, earning Queen Mary I the nickname of "Bloody Mary". The number of people executed for their faith during the persecutions is thought to be at least 287, including 56 women.
Which country was deeply divided between Catholic and Protestant?
In Germany, the country of the Reformation, a deep animosity divided Catholic and Protestant Christians up until a few decades ago. This division had deepened over the centuries through religious conflicts and wars.
Which countries remained Catholic after the Reformation?
In Catholic countries, the Church gave more power to secular rulers to help fight Protestantism. In general, France, Italy, Spain and Southern Germany remained Catholic. Northern Germany, England, Holland, and Scandinavia became Protestant.
Why did John Calvin leave the Catholic Church?
By 1532, Calvin finished his law studies and also published his first book, a commentary on De Clementia by the Roman philosopher, Seneca.
The following year Calvin fled Paris because of contacts with individuals who, through lectures and writings, opposed the Roman Catholic Church.
Who was pope during Luther's time?
Pope Leo X. Pope Leo X will forever be known as the pope of the beginning of the Protestant Reformation. It was during his reign that Martin Luther felt forced to react to certain church excesses—in particular, excesses for which Leo himself was responsible.
What was the pope's reaction to Luther?
In 1520, Leo issued the papal bull Exsurge Domine demanding Luther retract 41 of his 95 theses, and after Luther's refusal, excommunicated him. Some historians believe that Leo never really took Luther's movement or his followers seriously, even until the time of his death in 1521.
How did the Reformation change the map of Europe?
The Reformation wave swept first through the Holy Roman Empire and then extended beyond it to the rest of the European continent. Germany was home to the greatest number of Protestant reformers. Each state which turned Protestant had its own reformers who contributed towards the Evangelical faith.
What was the main effect of the Reformation?
1517: Luther takes the pope to task.
What is the history of the Reformation?
How did the Reformation affect society?
The Reformation affected European society by establishing two conflicting religious orders that dominated the countries of Europe, by starting many religious wars, and by prompting a wave of self-reform in the Catholic church.
Causes, Development & Risk Factors of Emphysema It is known from scientific research that the normal lung has a remarkable balance between two classes of chemicals with opposing action. The lung also has a system of elastic fibers. The fibers allow the lungs to expand and contract. When the chemical balance is altered, the lungs lose the ability to protect themselves against the destruction of these elastic fibers. This is what happens in emphysema. There are a number of reasons this chemical imbalance occurs. Smoking is responsible for 82% of chronic lung disease, including emphysema. Exposure to air pollution is one suspected cause. Irritating fumes and dusts on the job also are thought to be a factor. A small number of people with emphysema have a rare inherited form of the disease called alpha-1-antitrypsin (AAT) deficiency-related emphysema, or early onset emphysema. This form of disease is caused by an inherited lack of a protective protein called alpha-1-antitrypsin (AAT). How Does Emphysema Develop? Emphysema begins with the destruction of air sacs (alveoli) in the lungs where oxygen from the air is exchanged for carbon dioxide in the blood. The walls of the air sacs are thin and fragile. Damage to the air sacs is irreversible and results in permanent "holes" in the tissues of the lower lungs. As air sacs are destroyed, the lungs are able to transfer less and less oxygen to the bloodstream, causing shortness of breath. The lungs also lose their elasticity. The patient experiences great difficulty exhaling. Emphysema doesn't develop suddenly; it comes on very slowly. Years of exposure to the irritation of cigarette smoke usually precede the development of emphysema. A person may initially visit the doctor because they start to feel short of breath during activity or exercise. As the disease progresses, a short walk can be enough to bring on breathing difficulty. Some people may have chronic bronchitis before developing emphysema. Risk Factors for Emphysema By far, the single greatest risk factor for emphysema is smoking. Emphysema is most likely to develop in cigarette smokers, but cigar and pipe smokers also are susceptible, and the risk for all types of smokers increases with the number of years and amount of tobacco smoked. Men are affected more often than women are, but this statistic is changing as more women take up smoking. Second-hand smoke can also cause emphysema and lung disease. Other risk factors include: - Age. Although the lung damage occurring with emphysema develops gradually over time, most people with tobacco-related emphysema begin to experience symptoms of the disease between the ages of 50 and 60. - Exposure to second-hand smoke. Secondhand smoke, also known as passive or environmental tobacco smoke, is smoke that you inadvertently inhale from someone else's cigarette, pipe or cigar. - Occupational exposure to chemical fumes. If you breathe fumes from certain chemicals or dust from grain, cotton, wood or mining products, you're more likely to develop emphysema. The risk is even greater if you smoke. - Exposure to indoor and outdoor pollution. Breathing indoor pollutants such as fumes from heating fuel as well as outdoor pollutants — car exhaust, for instance — increases your risk of emphysema. - Heredity. A rare, inherited deficiency of the protein alpha-1-antitrypsin (AAT) can cause emphysema, especially before age 50, and even earlier if you smoke. - HIV infection. 
Smokers living with HIV are at greater risk of emphysema — and of developing the disease at a relatively young age — than are smokers who don't have HIV infection. - Connective tissue disorders. Some conditions that affect connective tissue — the fibers which provide the framework and support for your body — are associated with emphysema. These conditions include cutis laxa, a rare disease that causes premature aging, and Marfan syndrome, a disorder affecting many different body organs, especially the heart, eyes, skeleton and lungs.
Subject-verb agreement is a fundamental grammar rule that must be observed by all writers, regardless of their level of proficiency. It refers to the conformity of the verb in a sentence with the number and person of the subject. With respect to number, this is called subject-verb agreement of numbers. This article will explain what subject-verb agreement of numbers entails and how writers can apply it effectively in their writing to ensure their work is grammatically correct and SEO-friendly. Subject-verb agreement of numbers is a rule that requires the number of the subject to agree with the number of the verb. In other words, singular subjects take singular verbs, while plural subjects take plural verbs. For example, “The cat chases the mouse” is a grammatically correct sentence because “cat” is a singular subject and “chases” is a singular verb. Similarly, “The cats chase the mice” is also correct because “cats” is a plural subject and “chase” is a plural verb. However, things can get tricky when subjects are not as straightforward as “cat” or “cats”. For instance, when the subject and verb are separated by a phrase such as “as well as” or “along with”, the verb must still agree with the main subject, not with the noun in the interrupting phrase. For example, “The cat, as well as its kittens, was sleeping” is grammatically correct because “cat” is the main subject and it is singular, so it takes the singular verb “was”. Another instance when the subject-verb agreement of numbers can become challenging is when a subject is in the form of a numerical expression. For example, “Five percent of the population is unemployed” is correct because “population” is a singular subject and “is” is a singular verb. In contrast, “Ten of the students is absent” is incorrect, as “ten” refers to a plural group and requires a plural verb. The correct sentence would be “Ten of the students are absent.” Using proper subject-verb agreement of numbers is essential for SEO, as search engines prioritize grammatically correct content. Writing SEO-friendly content requires the use of relevant keywords that match the searcher’s queries. A poorly written article with grammatical errors and inconsistencies will not only affect readers` trust in the website but also affect its search engine rankings. Search engines use algorithms to analyze content, ranking pages based on relevance and quality. Proper subject-verb agreement of numbers ensures that the content is precise, accurate, and easy to read, making it more appealing to search engines. In conclusion, subject-verb agreement of numbers is critical in writing grammatically correct and SEO-friendly content. Writers must ensure that the verb agrees with the number of the subject, especially when the subject is a numerical expression or separated by a phrase. By following this fundamental grammar rule, writers can produce high-quality content that is both readable and SEO-friendly, increasing traffic and engagement on their website.
The Game of Mathematics Assessment
Name: Date: Block:
The Game of Mathematics
Mathematics could be thought of as a game invented by a number of men and women over time. All games have rules and mathematics is no exception. Listed below are eleven basic rules or properties of the Mathematics game.
● Commutative Property of Addition
● Commutative Property of Multiplication
● Associative Property of Multiplication
● Associative Property of Addition
● Identity Property of Addition
● Identity Property of Multiplication
● Addition Property of Equality
● Subtraction Property of Equality
● Multiplication Property of Equality
● Division Property of Equality
● Distributive Property
Your task is to create a flipchart (using the colored sheets provided). The flipchart should:
□ Be created using the colored sheets provided.
□ Include an appropriate cover/title page.
□ Contain eleven individual properties (all of the above listed properties).
□ Have one section for each property.
□ Have the meaning of each property written in words.
□ Have an algebraic example (i.e. using variables) that shows the correct use of the property.
□ Have a numeric example that shows the correct use of the property.
□ Include proper English (i.e. spelling, punctuation, grammar).
□ Include final quality work.
Note: The top sections of your flipchart may be smaller than the bottom sections. Plan carefully where you will place your flipchart content to ensure that it will all fit appropriately.
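A sample entry, shown here only as a formatting illustration and not as part of the required student work, might look like this for the Distributive Property:
Distributive Property
Meaning in words: Multiplying a number by a sum gives the same result as multiplying the number by each addend and then adding the products.
Algebraic example: a(b + c) = ab + ac
Numeric example: 3(4 + 5) = 3 × 4 + 3 × 5 = 12 + 15 = 27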
The use of internal combustion engines operating on gasoline, LPG, diesel fuel, or natural gas inside buildings presents a serious risk of carbon monoxide poisoning. During complete combustion, the typical combustion products from engines are carbon dioxide, nitrogen oxides, particulates, water vapor, and numerous other contaminants. Several of these combustion products are linked to health problems. During incomplete combustion, carbon monoxide, a deadly toxin, is produced. Carbon monoxide (CO) is colorless, odorless, tasteless, and non-irritating. The effects of carbon monoxide at low concentrations mimic common influenza and often are not recognized. High concentrations of CO interfere with thought processes, complicating the diagnosis. Carbon monoxide is a cumulative poison which can rise to harmful levels in the body in minutes. Does the concentration of carbon monoxide produced by engines vary? Yes, CO emitted from the tailpipe of engines burning gasoline, diesel, or LPG (propane) varies from over 100,000 parts per million (ppm) to less than 15 ppm. In 1968 the EPA regulated CO emissions from on-road motor vehicles. Engines used indoors were not originally regulated. Recent regulations affect small engines such as those used on lawn mowers, chain saws, weed eaters, electric generators, water pumps, and boats, although the regulations for small engines will continue to allow considerably higher CO concentrations than the tighter regulations for on-road motor vehicles. Why does the concentration of CO produced vary? Carbon monoxide is produced during incomplete combustion. Anything that leads to incomplete combustion increases CO production. Two major causes are a rich fuel mixture (more fuel than is needed) or a restricted air supply (dirty or plugged air filter). A gasoline engine producing 10,000 ppm CO at the ideal air-fuel ratio will produce over 60,000 ppm when the fuel is increased. Other causes of high CO production include: a cold engine, misfiring, incorrect engine timing, defective or worn parts, exhaust system leaks, and defective catalytic converters. Why would a gasoline engine be set rich? The distribution of a liquid fuel, such as gasoline delivered through a carburetor, is not uniform. To ensure all cylinders obtain sufficient fuel to produce maximum power, the mixture must be rich. Excessively lean mixtures can cause engine problems, although fuel economy is improved with a lean mixture. Combustion analysis equipment showing the air/fuel ratio and/or carbon monoxide production assists in correctly tuning an engine. Tuning by “sound” and “performance” more likely will produce an excessively rich setting, with higher CO concentrations. Improvements in fuel delivery systems, such as fuel injection coupled with oxygen sensors in the exhaust stream, greatly improve the control of the air/fuel mixture, improve fuel economy, and reduce carbon monoxide production. What do catalytic converters do? The three-way catalytic converter reduces the amounts of nitrogen oxides, hydrocarbons (unburned fuel), and carbon monoxide. CO concentrations as low as 15 ppm have been measured from a gasoline engine with fuel injection and a catalytic converter. Is carbon monoxide a problem with diesel engines? Usually not, although any engine, including diesel, produces CO when combustion is incomplete. Diesel (compression ignition) engines run with an excess of air and often produce less than 1200 ppm CO. 
When diesel fuel is burned incompletely or when overloaded and over-fueled (rich mixture), diesel engines will produce high concentrations of CO. Diesels usually pollute the air with particulates and nitrogen oxides, not CO. Is carbon monoxide a problem with LPG engines? Yes, and the same precautions against running a gasoline engine in an enclosed space should be observed with an LPG engine. Industry sources report a properly tuned LPG engine will produce from 200 to 20,000 ppm, depending on load. A difference in CO production from an engine operating on LPG and one operating on gasoline usually results from more complete combustion of the LPG because it is already a vapor. Unfortunately, most LPG engines have simple fuel delivery systems which can easily be adjusted too rich, allowing extra fuel into the engine and the subsequent high production of carbon monoxide. On one new engine, adjustment of the idle mixture reduced CO concentrations from 44,500 ppm to 600 ppm. If LPG engines can produce high levels of CO, why are they used inside buildings? LPG burns cleaner than gasoline, and is a common fuel for forklifts and other engines used inside. The exhaust fumes are noticeably free from aldehydes, the odorous and eye-irritating compounds found in gasoline exhaust. Typically LPG engines produce less carbon monoxide than a straight gasoline engine, however new modern gasoline engines with catalytic converters and fuel injection, will produce less CO than an LPG engine. Remember that LPG engines do produce CO, and LPG engines running rich or misfiring produce extremely high concentrations of CO. NEVER USE LPG ENGINES IN AN UNVENTILATED AREA! What about other engines used inside, like those on gasoline powered electrical generators, concrete finishers, water pumps, and high pressure power washers? Small gasoline engines used on many tools typically use simple carburetor systems with limited control over the air-fuel ratio. The engines run rich with high concentrations of carbon monoxide, typically 30,000 ppm or more. Manufacturers stress that the engines are to be used only in well-ventilated outdoor areas, and are NEVER to be used indoors even with ventilation. A 1996 National Institute for Occupational Safety and Health Alert calculated carbon monoxide concentrations in a 10,000 cubic foot room (21 x 21 x 21 feet) when a 5-horsepower gasoline engine was operated. With one air change per hour CO concentrations reached over 1,200 ppm (the Immediately Dangerous to Life and Health level) in less than 8 minutes. Even with ventilation providing 5 air changes per hour, 1200 ppm was reached in less than 12 minutes. It is not safe to operate gasoline engines indoors! Are LPG powered floor buffers safe to use indoors? The combustion pollutants produced are a potential health risk and are known to have caused carbon monoxide poisoning. Special engines with oxygen sensors and catalytic converters which closely control the air-fuel ratio and reduce contaminant concentrations (including carbon monoxide) in the exhaust stream are available. Only buffers with low emission engines should be used indoors. Manufacturers’ recommendations must be followed; provide adequate ventilation, proper maintenance, training for workers, and using carbon monoxide detectors. Remember that high risk individuals, such as the elderly, the young, and the sick are at special risk of carbon monoxide poisoning. T.H. Greiner, Ph.D., P.E. 
T.H. Greiner, Ph.D., P.E.
Extension Agricultural Engineer
The Iowa Cooperative Extension Service’s programs and policies are consistent with pertinent federal and state laws and regulations on nondiscrimination regarding race, color, national origin, religion, sex, age and disability.
Urban air pollution is a complex mix of gases and particulate matter that negatively affects communities living in and around urban areas. It is most recognisable as the thick brown haze, known as photochemical smog, that blankets cities across the world, especially in summer. Nitrogen dioxide, ground-level ozone and particulate matter are the three main air pollutants in modern cities, and their health effects are well documented. These three, together with carbon monoxide and sulfur dioxide, are known as ‘criteria pollutants’. Criteria pollutants are included in national air quality standards that define allowable concentrations of pollutants in ambient air.
Why measure nitrogen dioxide?
Inhalation of nitrogen dioxide (NO2) can impair lung function and increase susceptibility to infection, particularly in children. It can also aggravate asthma. NO2 is not only a toxic gas but also a precursor to several harmful secondary air pollutants such as ozone and particulate matter. It also plays a role in the formation of acid rain and photochemical smog.
Why measure ozone?
In the upper atmosphere, ‘good’ ozone (O3) protects life on Earth from the sun’s ultraviolet rays. At ground level, ‘bad’ ozone is a criteria pollutant that poses a significant health risk, especially for people with asthma. It also damages crops, trees and other vegetation and is a main component of smog.
Why measure particulate matter?
In 2013, the World Health Organisation (WHO) classified particulate matter (PM) as carcinogenic to humans; it is linked to an estimated 3.7 million deaths worldwide per year. PM10 (particles ≤ 10 microns) is a criteria pollutant and a serious health risk because PM10 particles can penetrate the lungs. PM2.5 (particles ≤ 2.5 microns) is also a criteria pollutant and has an even greater health impact because these smaller particles can penetrate deeper into the respiratory system. Research has linked particulate pollution to lung and heart disease, strokes, cancer, and reproductive harm.
Why measure carbon monoxide and sulfur dioxide?
Carbon monoxide (CO) is a toxic, odorless gas. If inhaled, it displaces oxygen from the hemoglobin molecule in our blood and can lead to severe disability or even death. Sulfur dioxide (SO2) is a toxic gas with a strong, irritating smell. Inhaling sulfur dioxide has been associated with respiratory disease and difficulty breathing. It is also a precursor to acid rain and atmospheric particulates.
I often find my students confuse ‘fair’ and ‘fare’, and incorrectly spell ‘welfare’ as ‘wellfair’. When helping children differentiate between homophones (words that sound the same but are spelled differently and have different meanings), it is often useful to employ several strategies.
Research the etymology of the words, as understanding the meaning and history of a word can assist in spelling. ‘Fair’ can be used as an adjective, a noun, or an adverb. ‘Fair’ as an adjective referring to a ‘pleasing sight’, ‘morally good’ or ‘clear, pleasant weather’ comes from the Old English word ‘fæger’. In the 1200s, ‘fair’ began to be used to refer to a ‘light complexion’, reflecting a colonial definition of beauty, and in the 1300s it started to be used in reference to ‘justice, equity and free from bias’. The use of ‘fair’ in sport (fair ball, fair catch, etc.) appears in 1856. Interestingly, ‘fair play’ was not originally used in a sporting context but rather meant ‘pleasant amusement’ in contrast to ‘sinful amusement’. ‘Fair-weather friends’ was first used in 1736; ‘fair sex’, in reference to women, is from the 1600s; and ‘fair game’, meaning a legitimate target, is a hunting term initially used in 1776. ‘Fair’ as a noun meaning a ‘market or a place of public entertainment’ is from the Anglo-French word ‘feyre’, which is derived from the Latin word ‘feriae’, meaning a ‘religious festival or holiday’. ‘Fair’ as an adverb meaning ‘without cheating’ also relates to the Old English word ‘fæger’. In contrast, ‘fare’, used as a noun or a verb relating to a ‘journey, travel or get along’, is derived from the Old English word ‘faran’. The use of ‘fare’ in reference to the ‘payment for passage’ dates back to the 1510s. In the 1200s, the word ‘fare’ also began to be used to mean ‘food or sustenance’. ‘Welfare’, meaning ‘a concern for the well-being of others’, combines the Old English ‘faran’, meaning ‘to get along’, and the Old English adjective ‘wel’, meaning ‘abundantly or to be sure’.
An integral component of spelling is being able to first identify the sounds (phonemes) in a word and then match these sounds with the letters or letter combinations (graphemes) representing those sounds. ‘Fair’ and ‘fare’ both contain the same two sounds: /f/ and /air/. To remember the correct grapheme, it is useful to make a link to a key cue word/picture. The cue word/picture I use for ‘air’ is ‘chair’. So, I ask my students to make a link between the meaning of ‘fair’ and ‘chair’. Some examples might include:
- It’s not fair that you have a chair.
- I went on the swinging chair at the fair.
- The fair-haired girl sat on the chair.
- In fair weather I sit on a chair outside.
The cue word/picture I use for ‘are’ is ‘square’. So, I ask my students to make a link between ‘fare’ and ‘square’. For example:
- I received a square ticket when I paid the bus fare.
- ‘Square’ parents are concerned about their children’s welfare.
- Eating traditional Italian fare would provide you with a ‘square’ meal.
Z-score is a very frequently used term from statistics that is applied in Machine Learning. In this blog, we discuss what those Z- terms (Z-score, Z-statistic, etc.) are and how to make use of them.
The easiest goes first. It helps to make clear that a Z-statistic is simply another name for a Z-score, and the Z-distribution is simply another name for the Normal distribution. Well, good enough! From the four terms in the title of this post, we now remain with only Z-score and Z-test.
Assume we have a Normal distribution with mean μ and standard deviation σ (also called std or sigma). We take a data point from this distribution; let's call the value taken x. Then: the z-score of x represents the direction and distance from μ to x, in units of σ.
For example, suppose we have a Normal distribution with μ = 2 and σ = 5. The sample data point we take has value x = 9.5. To calculate the Z-score of x, first we subtract μ from x (i.e. x − μ) to get the signed distance from μ to x. A positive value indicates that x is on the right of μ, while a negative one indicates that x is on the left of μ. Then, we divide the result by σ so that σ becomes the unit of our distance:
z = (9.5 − 2) / 5 = 1.5
This means that x is 1.5σ larger than μ. The official formula for the Z-score is:
z = (x − μ) / σ
Remember that the unit of the Z-score is σ, and the Z-score is negative if x < μ.
So what is the Z-score for? We know that the Z-score tells us how the value of x compares to its distribution's mean μ, but so what? Why do we need the Z-score? The answer is that the Z-score is a medium, which we use to compute a more insightful value: the likelihood of getting x. Let's get through the concept with an example.
From various studies and experiments, scientists conclude that human IQ is normally distributed with μ = 100 and σ = 15. Today, you took an IQ test and your result is 125. You are happy that you are more intelligent than the average, but also curious about how you compare to the others in more detail. Are you smarter than just 51% of the human community, or are you in the top 1%? To answer this question, we should first calculate the Z-score of your IQ test result:
z = (125 − 100) / 15 ≈ 1.67
And here is the interesting fact: the Z-score is itself normally distributed with μ = 0 and σ = 1. On the normal curve, the region to the left of your Z-score represents the percentage of people with an IQ lower than or equal to yours. It is 95.2%, which means you are in the top 4.8% of the world on IQ. The number 95.2% is taken from the cumulative distribution function (CDF) of the unit normal distribution (the normal distribution with μ = 0 and σ = 1), evaluated at your Z-score (1.67). In Python, we can query the percentile of your Z-score using the cdf function from scipy:
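(The original snippet did not survive in this copy of the post, so the following is a minimal reconstruction using scipy.stats.norm, with the numbers from the IQ example above.)

from scipy.stats import norm

mu, sigma = 100, 15                  # assumed IQ distribution
z = (125 - mu) / sigma               # z-score of an IQ of 125 -> about 1.67
percentile = norm.cdf(z)             # CDF of the standard normal at z
print(f"z = {z:.2f}, percentile = {percentile:.4f}")   # z = 1.67, percentile ~ 0.952

norm.cdf returns about 0.952 here, which matches the Z-table lookup described next.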
Or, if you don't want to involve any programming, we can instead check a Z-table. To query the Z-table for z-score = 1.67, for example, we find the row with value 1.6 and the column with value .07 (since 1.67 = 1.6 + .07). The table value at the intersection cell is our percentile (0.9525 in this case, which equals 95.25%). The Z-table is just a shorthand for a quick lookup of a Z-score's percentile without a calculator or computer.
Another practical use of the Z-score is on medical reports. Today, a bone density result usually contains a z-score. This z-score compares your bone density with that of others who are around your age and of the same gender.
Z-test is a type of hypothesis testing where the test statistic is normally distributed (or, say, where the test statistic follows a Z-distribution). Above, we tried to find your IQ percentile, which is effectively a Z-test, because the result (the z-score) follows a normal distribution. In subsequent blog posts, we will also introduce the T-test and the F-test. The Z-, T-, or F- here all refer to the distribution of the test statistic: the Z-distribution, T-distribution or F-distribution.
In this blog, we make acquaintance with the family of Z- terms, including:
- Z-score is the number of sigmas (σ) our data point is away from the mean (μ); it can be negative, zero or positive.
- Z-statistic is the same as Z-score.
- Z-distribution is the same as the Normal distribution.
- Z-table is a pre-computed table used to look up the percentile of a z-score.
- Z-test is a hypothesis test whose test statistic follows the Z-distribution.
Learning music theory? Don't go anywhere without this handy music reference from Wolfram—a world leader in technical software! Are you taking a basic course in music theory, just starting out in band, or learning an instrument? Now you can learn about notes, intervals, scales, and chords and even hear what they sound like!
- Hear and view accidentals and octaves anywhere on the staff.
- Choose from both common scales and hundreds of more advanced scales.
- Explore triads and basic major, minor, and seventh chords.
- Input up to four chords and hear their progression.
- Learn how to identify music intervals by their name and what they sound like.
- Find interval inversions for every interval type.
- Reference musical terms like "andantino" and "solfa" in the abbreviated music dictionary.
The Wolfram Music Theory Course Assistant is powered by the Wolfram|Alpha computational knowledge engine and is created by Wolfram Research, makers of Mathematica—the world's leading software system for mathematical research and education. The Wolfram Music Theory Course Assistant draws on the computational power of Wolfram|Alpha's supercomputers over a 3G, 4G, or Wi-Fi connection.
November 14, 2019
Assertive Behavior: I win, you win.
Assertive behavior is described as standing up for one's rights without violating the rights of others. The goal of assertion is to find a mutually acceptable solution through direct communication. Behaving with appropriate assertion increases the likelihood of success in human interaction. Assertive behavior is active, direct, and honest. It communicates an impression of self-respect and respect for others. By being assertive, we view our wants, needs, and rights as equal to those of others. A confident person works toward “win-win” outcomes by influencing, listening, and negotiating so that others willingly choose to cooperate. This behavior leads to success without retaliation and encourages honest, open relationships.
Assertive Behavior Payoffs
• Helps build positive self-esteem
• Fosters fulfilling relationships
• Reduces fear and anxiety
• Improves chances of getting desired results
• Satisfies needs
Possible Drawbacks
• Negative results may occur
• May get hurt
• Difficult to alter ingrained habits
Examples of Assertive Behavior
• Respecting needs, opinions, and feelings, both one's own and others'
• Apologizing when at fault, but allowing others to take responsibility for their actions as well
• Respecting one's rights and the rights of others
• Asking for things one needs or wants
• Dealing with conflict in healthy ways
• Being mature enough to take responsibility for oneself
• Approaching conflict from a position of respect and trying to seek out a win/win situation for all involved
• Establishing a solid set of boundaries for oneself and communicating them clearly
• Respecting the boundaries of others
• Being aware of one's strengths and weaknesses and accepting both
• Not being manipulative
• Feeling in control of one's life
Assertion is a choice. A major goal of assertion is to enable people to take charge of their own lives. It helps them break out of ruts and away from stereotyped or compulsive behaviors. At its best, assertion helps people develop the power of choice over their actions. Sometimes it is wise to give in to others, and sometimes it may be necessary to defend one's rights aggressively. Therefore, the ultimate goal of assertion is to help people choose their behaviors effectively, not have them behave assertively in every situation.
Basic Assertive Guidelines
• Actively listen
• Use “I” statements rather than “you” statements
• Attack the problem, not the person
• Use factual descriptions instead of judgments or exaggeration
• Express thoughts, feelings, and opinions that reflect ownership
• Use explicit, direct requests or directives when you want others to do something, rather than hinting, being indirect, or presuming
• Stay focused
• Practice, practice, practice
As discussed in my book, Workplace Savvy, the self-fulfilling prophecy is a well-documented phenomenon. Many people would agree that the frequent use of the term reflects an attitude about events to come. The self-fulfilling prophecy is any positive or negative expectation about circumstances, occurrences, or people that may affect a person's behavior in a manner that causes the expectation to be fulfilled. For example, a person stating, “I'm probably going to have a lousy day,” might unwittingly approach every situation that day with a negative attitude and see only problems, thus fulfilling the prediction of a lousy day.
Or, vice versa, a person who positively espouses a self-fulfilling prophecy, believing “I'm going to have a great day,” might act in ways that make this prediction come true. In most cases, this happens subconsciously. How does this relate to being assertive? Assertive behavior comes more naturally when you believe you can be more confident. Analyze the messages you are giving yourself. If they are continuously negative, based on “I can't” statements, you need to reprogram your thinking. You must believe you can and will get the results you want. So practice, practice, practice.