Smell and taste Odours travel slowly in water and cetaceans move too quickly to use this sense efficiently. For this reason, they lack the sense of smell… with one exception: in 2011, researchers revealed that bowhead whales had the ability to smell. They believe that bowheads use their olfactory sense to “sniff” the air in search of krill. Whales can also taste what they eat. They even use taste to distinguish water masses based on their salt content.
Streams and rivers

Most community groups carrying out freshwater biological monitoring focus on streams or rivers. Flowing habitats provide advantages and challenges for freshwater algae, depending on the taxa. A major advantage of flowing water habitats is that the supply of water-borne nutrients (required by algae for growth) is constantly renewed. However, flowing water presents the risk that algae can be washed off the bed and carried into unsuitable habitats (including out to sea). To reduce this risk, some filamentous algae have holdfast structures that anchor the filament to stable streambed surfaces. The flexible nature of some filamentous algae also allows them to bend with the flow, reducing the drag effect of flowing water. Some filamentous algae form mats that hug the surfaces of stones, where frictional forces mean the flow is much slower than a few centimetres above (the boundary layer effect). Many non-filamentous algae are so small that they can establish large populations on submerged surfaces, living well within the boundary layer. Some algae can “glue” themselves to stream surfaces using mucilaginous stalks or pads. Other algae form colonies enclosed within a mass of mucilage that may be stuck to the streambed. Not surprisingly, many algae thrive better in streams with stony beds than in streams with unstable sandy or muddy beds.

Freshwater algae attached to the streambed are often referred to as “periphyton”. These algae attach to most submerged surfaces, including stones, woody debris and even freshwater invertebrates (examples below). Of course, major flood events in streams and rivers create scouring conditions that will remove a high proportion of the algal community from the bed. This is nature’s way of re-setting the stream/river community, allowing the natural process of community colonisation to start over again. Some algae always survive even the biggest floods (they may have survived on the sheltered side of boulders, or they may recolonise from upstream reaches that were not badly affected by the floods), so there is always a source of recolonisation.

Lakes, ponds and puddles

Many algae are planktonic, i.e. they float or actively swim in the open water of lakes, ponds or puddles, and some have the ability to accumulate at the surface of still or slow-flowing waters. Planktonic algae are often referred to as “phytoplankton”. These groups have various mechanisms to prevent them sinking too deep (where there may be insufficient light for photosynthesis), such as gas vacuoles within the cells; propulsion mechanisms like flagella (whip-like tails in various groups) or raphes (in diatoms) that allow them to swim or glide through the water; or simply the combination of small size and neutral buoyancy. Planktonic algae that discolour the water are often seen as undesirable, but they are important components of aquatic food chains, providing food for zooplankton and small invertebrates, which in turn provide food for important fish stocks. Planktonic algae are also used extensively in wastewater treatment ponds because they can absorb nutrients and undesirable contaminants, binding them into less harmful organic matter. Many algal groups live around the shallow margins of lakes, wetlands and ponds, including groups that are not well suited to open-water habitats.
Many types of filamentous algae can attach to (or become entangled amongst) the bed or aquatic plants in these shallow zones, where there is always sufficient light for photosynthesis. These algae create the green or brown “fuzz” often seen on submerged surfaces in marginal habitats.

The effect of light on algae

Algae, like higher plants, generally need light for photosynthesis (there are only a few exceptions where some protozoan algae can live without photosynthesis). Too much light can contribute to too much algae (undesirable blooms) if other conditions such as nutrient levels, streambed type and flow regime allow. The removal of shade-producing riparian vegetation is therefore one factor that increases the likelihood of undesirable blooms.

Algae in the food chain

Natural bush-covered streams have quite low light levels and tend to support low algal biomass, but these habitats can still support a range of beneficial algae (often thin diatom films) that provide food for freshwater invertebrates (often grazing mayfly nymphs), which in turn provide food for fish. The artificial shading of streams through piping eliminates almost all algae from that section of stream. This prevents algae from providing food for grazing invertebrates, and it prevents algae from performing their natural “decontamination service” (the uptake of nutrients and other contaminants from the water column).
Pericardial effusion is the term that describes fluid collected inside the pericardial cavity. It is usually related to inflammation of the pericardium, a condition called pericarditis. The pericardium is the double-layered membranous sac that surrounds and protects the heart; the space it encloses is the pericardial cavity. The pericardium can only hold a limited amount of excess fluid without causing problems. The inner layer of the pericardium is the one that lies against and is attached to the heart. The outer layer is a strong, fibrous and elastic tissue that is somewhat thick. The space between the outer and inner layers of the pericardium can accommodate a small amount of fluid. However, if the pericardium is diseased or injured, the resulting inflammation can lead to pericardial effusion.

How do you know if you have pericardial effusion?

The symptoms produced by pericardial effusion depend on how fast the fluid accumulates. The size of the effusion also influences the symptoms you will have. You can develop a significant pericardial effusion gradually without manifesting any symptom, or you may develop a relatively small effusion rapidly that compromises the functioning of your heart. The classic triad of signs associated with cardiac tamponade (Beck's triad) comprises low blood pressure, muffled heart sounds on auscultation and distension of the neck veins. The symptoms may include chest pain, usually behind the breastbone or on the left side of the chest, that can worsen when you take a deep breath and may be relieved when you sit up and lean forward. You may also experience difficulty breathing or shortness of breath, cough, low-grade fever, and rapid heart rate.

When to ask for help

The pericardium can only hold a very small amount of fluid without causing problems. Because the inner layer of the pericardium is softer than the outer layer, fluid that builds up presses inward on the heart. When this happens, the pumping chambers of the heart are unable to fill completely. This may result in the collapse of one of the chambers, a condition called cardiac tamponade. This is a life-threatening condition, so medical management as soon as possible is necessary to prevent death. The symptoms vary depending on the size of the effusion as well as the time it takes to develop. Prompt first aid, particularly recognising the observable, life-threatening signs, may save you from cardiac tamponade. If you feel chest pain that lasts for more than a few minutes, difficulty breathing or faintness, call your doctor right away. If the person loses consciousness or is unable to breathe, call 911 and then give CPR immediately until help arrives.

Reference: Better Health Channel. Pericarditis. Retrieved July 5, 2014, from http://www.betterhealth.vic.gov.au/bhcv2/bhcarticles.nsf/pages/Pericarditis.
53 Activities Using the Possibility Chart
by Author and Educational Consultant Laura Magner

Creative and critical thinking are priority skills needed for 21st century careers. Using combinations of nouns from the provided Possibility Charts, students use critical thinking to produce creative products in 6 math, 4 science, 28 language arts, 7 social studies, and 31 critical/creative thinking activities:
- activities that tackle vocabulary, parts of speech, grammar, & figurative language
- activities in narrative, expressive, informational, & persuasive writing and dialog
- activities that are springboards for in-depth investigations and creative projects in the content areas

Lessons include:
- Target skills
- Definition of terms
- Teaching strategy and procedure
- Model examples
- Common Core Anchor standards in Writing, Reading, Speaking and Listening, and Language; NAGC national critical thinking standards 1, 3, and 4
Lest we forget. April 25th is Anzac Day – a national day of remembrance in Australia and New Zealand, originally commemorated by both countries every year to honour the members of the Australian and New Zealand Army Corps (ANZAC) who fought at Gallipoli in the Ottoman Empire during World War I. It now more broadly commemorates all those who served and died in military operations for their countries. To our comrades in Australia and New Zealand, our thoughts are with you and all those lost at war.

Freedom is never more than one generation away from extinction. We didn’t pass it to our children in the bloodstream. It must be fought for, protected, and handed on for them to do the same. — Ronald Reagan

In 1915, Australian and New Zealand soldiers formed part of an Allied expedition that set out to capture the Gallipoli Peninsula, according to a plan by Winston Churchill to open the way to the Black Sea for the Allied navies. The objective was to capture Constantinople, the capital of the Ottoman Empire, which was an ally of Germany during the war. The ANZAC force landed at Gallipoli on 25 April, meeting fierce resistance from the Ottoman Army commanded by Mustafa Kemal (later known as Atatürk). What had been planned as a bold strike to knock the Ottomans out of the war quickly became a stalemate, and the campaign dragged on for eight months. At the end of 1915, the Allied forces were evacuated after both sides had suffered heavy casualties and endured great hardships. The Allied dead included 21,255 from the United Kingdom, an estimated 10,000 from France, 8,709 from Australia, 2,721 from New Zealand, and 1,358 from British India.

News of the landing at Gallipoli made a profound impact on Australians and New Zealanders at home, and 25 April quickly became the day on which they remembered the sacrifice of those who had died in war. Though the Gallipoli campaign failed to achieve its military objectives of capturing Constantinople and knocking the Ottoman Empire out of the war, the actions of the Australian and New Zealand troops during the campaign bequeathed an intangible but powerful legacy. The creation of what became known as the “Anzac legend” became an important part of the national identity in both countries, shaping the way their citizens have viewed both their past and their understanding of the present.
Origins: Greek Age

All ancient authors and scholars agree in representing Catania as a Greek colony named Katane. The exact date of its foundation is not recorded, though it is generally reported to have been around 730 BC. Catania had to suffer several assaults and dominations: Syracusans, Athenians, Carthaginians and many others tried to conquer the city in ancient times – and sometimes they succeeded. One thing, however, is certain: the city was one of the most important cultural centres of the whole of Magna Graecia. It was the birthplace of the philosopher and legislator Charondas and the place of residence of the great poet Stesichorus, while Xenophanes, the philosopher of Elea, also spent the latter years of his life there.

In the First Punic War, Catania submitted to Rome; it seems that the city continued to maintain friendly relations with the Romans, though without enjoying the advantages of a confederate city. Catania nevertheless rose to a position of great prosperity under Roman rule: Cicero repeatedly mentions it as, in his time, a wealthy and flourishing city. It subsequently suffered severely from the ravages of Sextus Pompeius, and was one of the cities to which a colony was sent by Augustus. Catania retained its colonial rank, as well as its prosperity, throughout the period of the Roman Empire. In 121 BC, one of the most serious eruptions of Mount Etna overwhelmed the city with streams of lava; Catania was consequently exempted, for 10 years, from its usual contributions to the Roman state.

After being sacked by the Vandals and ruled by the Ostrogoths, Catania was conquered in 535 by the Eastern Roman Empire, under which it remained until the IX century. It was the seat of the Byzantine governor of the island. Later on, the city came under the Islamic emirate of Sicily until 1072, when it fell to the Normans of Roger I of Sicily; it was subsequently ruled by a bishop-count. In 1194–1197, German soldiers sacked the city, after the conquest of the island by emperor Henry VI. In 1232 it rebelled against Henry's son, Frederick II, who later built a massive castle here and also made it a royal city, ending the dominance of the bishops. Catania was then one of the main centres of the Sicilian Vespers revolt (1282) against the House of Anjou, as well as the seat of the crowning of the new Aragonese king of Sicily, Peter I. The city's importance grew to the point that it was chosen by the same dynasty as a Parliament and Royal seat. Catania lost its capital role when, in the early XV century, Sicily was turned into a province of the larger Kingdom of Aragon, but it kept some of its autonomy and privileges. In 1434 King Alfonso V founded here the Siciliae Studium Generale, the oldest university in Sicily.

With the unification of Castile and Aragon (early XVI century), Sicily became part of the Spanish Empire. It rebelled against the foreign government in 1516 and 1647. In 1693 the city was completely destroyed by earthquakes and by lava flows that ran over and around it into the sea. The city was then rebuilt in the Baroque style that characterizes it today, mainly thanks to the projects of the leading architect Giovanni Battista Vaccarini.

From unified Italy to today

Catania was one of the vanguards of the movement for Sicilian autonomy in the early XIX century. In 1860 Giuseppe Garibaldi's Expedition of the Thousand conquered Sicily for Piedmont from the Kingdom of the Two Sicilies.
From the following year Catania was part of the newly unified Italy, whose history it has shared ever since. During WWII Catania was repeatedly bombed by the Allies, and almost 100,000 of its inhabitants were moved to the neighbouring villages. After the conflict, from the early 1960s (and with a remarkable acceleration during the 1990s) to the present day, Catania has enjoyed steady development and an economic, social and cultural effervescence.
Inquiry teaching calls for students to read a range of complex sources, engage in classroom interaction and discussion, and write arguments with claims supported by evidence and reasoning. These are challenging tasks for many middle school students. Read.Inquire.Write. has designed the investigations to support students who are learning English or are behind their grade level in reading and writing. Research shows that second language learning is supported when students have:
- Opportunities for meaningful interaction about what they are learning
- Support for close reading of complex texts, focused on language and meaning
- Explicit guidance in writing, including support for achieving the purposes of each component of the genre to be written
- Clear expectations for what is to be done and regular classroom routines that help them meet those expectations

The investigations repeatedly engage students in interaction in pairs and small groups, and then in whole-class discussion, to enable them to reason about what they are learning and attend to other students’ reasoning. They are supported in close reading where specific language is brought into focus to enable them to respond to the compelling question for each investigation. They receive robust support for writing, including attention to the articulation of claims and the introduction of evidence and reasoning, as well as guidance about language choices they can make to achieve the disciplinary goals. Throughout, expectations are made clear as the compelling question for each investigation and the writing assignment are a focus of attention. Teachers are guided throughout the investigations to use routines and practices that engage students in interaction, focus students on language and meaning, and draw attention to disciplinary ways of making claims and presenting and reasoning about evidence.

The Disciplinary Literacy tools can support English Learners in specific ways as they engage in close reading, reasoning about evidence, and argument writing:
- The Bookmark asks questions that guide students in identifying important information in the sources and reasoning about the reliability of the sources.
- The Weigh the Evidence chart creates a public record of classroom discussion as students think together across sources and draw evidence-based conclusions.
- The Mentor Text offers a model of the kind of argument each investigation asks students to write and provides an opportunity for students to recognize how claim, evidence, and reasoning can be presented in a coherent text.
- The Useful Language support offers students choices that help them get started in writing each stage of the argument without templates or sentence starters that force them into particular wordings.
- The Planning Graphic Organizer lays out the stages of the argument to be written, showing the purpose of each stage and prompting students to take notes to create an outline of the text they will write.
- The Reflection Guide offers students a checklist to review after they write, identifying the expectations of the writing task and orienting them to what their draft has to accomplish.
Sport creates rituals and habits. Often girls are not taught effective hygiene habits at home. A sport environment can be a place to learn about personal caretaking and to regularise healthy habits. For example, older players and coaches can emphasise and model washing up after a game or practice. Leaders can emphasise the importance of being healthy in order to excel and perform optimally on the field. A facilitator or coach can educate girls on very basic issues, such as how to wash up properly, and answer questions about perspiration, bodily functions and puberty changes, and what is normal and what is not.

Useful Example – Partners in Healthcare and Hygiene

Boxgirls is a boxing programme in the slums of Nairobi that caters to nearly 700 girls. Due to rampant domestic and sexual abuse of girls in the slums, the programme uses self-defence training to teach the girls how to defend themselves against GBV. The slums are so dangerous that the girls simply do not walk out at night; even using the toilet becomes a challenge. The Foundation for International Cardiac and Children's Services (FICCS) has partnered with Boxgirls to provide healthcare and hygiene support, including sending nurses on a weekly basis to support the girls. In addition to medical support, FICCS donated much-needed boxing gloves and first aid supplies, including sanitary pads, which assist the girls in their training; most importantly, they now have proper medical and hygiene supplies as they travel to different tournaments.

Useful Example – Safe Drinking Water and Sanitation

WASH United uses three key tactics to fight for safe drinking water, sanitation and hygiene for all people and to put sanitation and hygiene in the spotlight:
- Harness the passion for sport and the role-model status of super stars
- Use fun, interactive games and storytelling to promote learning and facilitate long-term behaviour change
- Focus on positive, aspirational communications that appeal to higher-level human needs and desires
THE BACKGROUND TO PHONETIC ENGLISH BIBLE

Alphabets were brought into being in ancient times. Around 3,200 years ago, the Hebrew, Arabic, Ancient Greek and Roman civilizations each developed an alphabetic system of its own. These original alphabets were used to communicate the spoken sounds of Arabic, Ancient Greek, Hebrew and Latin to people who were at any distance beyond reach of human voices. All of these alphabets are still in use today.

Phonetic English Bible is based on Virtual Phonetics, a very simple learner alphabet for any person who is learning to read English words at home, in early schooling or even later as an adult worker. As we have seen in this trial version manual and student text book, Virtual Phonetics can remove most of the difficulties that many students experience so uniquely with the English alphabetic system. In essence, the English alphabet is the one that the Romans used for their Latin language some three thousand years ago. Most European languages today also use the Roman alphabet, with slight variations from language to language. Languages in South East Asia, Polynesia and elsewhere in the world, too, use modified versions of the Roman alphabet.

The Hebrew alphabetic system incorporated the only ancient alphabetic marking system that was specifically designed for learners in their beginning stages. Remarkably, this method for the early teaching of Hebrew reading and spelling skills still exists today and is still used in Jewish schools and university departments throughout the world. This should be reassuring: at its heart, the early teaching concept behind Virtual Phonetics has been trialled for over a millennium and is still proving its educational effectiveness with millions daily. In short, Virtual Phonetics does for English words what the vocalization signs do for students of beginning Hebrew writing.

From the time of Moses, Hebrew words for adults who were already skilled readers were written with all except 2* of their 13 vowel letters simply not represented. Hebrew was originally written only for persons who spoke Hebrew, and all of its vowel letters were not strictly necessary once the readers had become skilled. When English words are written in a similar manner we, as competent native-to-English readers, are still able to read them as follows:

Fr-m th- t-m- -f M-s-s, H-br-w w-rds h-v- b—n wr-tt-n w-th -ll -xc-pt tw- -f th- v-w-l s—nds -f sp-k-n H-br-w s-mply n-t p-t d-wn.

* roughly the equivalent of the English letters “A” and “I”

(A short code sketch at the end of this article shows how such consonant-skeleton text can be generated.)

English too, just like Hebrew, needs all of its vowel letters only for those readers who are beginning to learn the discipline of alphabetic decoding. But whenever we write English words there are no short cuts at all: we need to know the precise position of every letter in every word or we get it wrong.

From a technical standpoint, the marking signs around the letters of Virtual Phonetics, like the vocalization signs around Hebrew letters, are often called diacritical marks. Diacritical marks have been linked to most European languages for centuries. Students of French, for example, are introduced to the ‘accenting’ marks around French words in the earliest lessons. Such markings on the letters show students how the pronunciations of these letters change from word to word. From the mid-1960s onward, a few commercially produced English language systems for the teaching of early reading to school children also used publications with diacritical marks.
With the notable exception of the DISTAR materials, few of these programs have endured the test of time. The structure of English spelling is relentless. It cannot be changed, because there are far too many people in the world whose spelling habits in English would need to be changed too. The true purpose of any diacritical marking system for English is to convey the impression that our spelling is really a lot more forgiving than it is. This amounts to a benevolent form of deception. I am quite blunt about this because I expect accusations of false academy. But I have a teaching job to do, and this job is mainly to help learners of written English to get to grips with at least the sensible bits that go together to make English words. The teaching aspiration here is that once any student masters all of the sensible spelling bits, all of the later spelling ‘idiosyncrasies’ will be easier to cope with.

As a retired teacher as well as a disciplined analyst of the English spelling system, I have had to make a number of decisions with Virtual Phonetics that many, including myself, will remain ‘irritated’ by. In short, the design of around 11,000 different English words has forced the ways in which I have decided to use the 10 signs of Virtual Phonetics, and on occasion these decisions have been arbitrary.

About Phonetic English ESL

For the first time ever, the progress of most ESL students will not be quite so impeded by the nearly 400 pronunciation ‘rules’ that, for up to a millennium, have made written English so utterly frustrating. The student notes to each of the 25 audio modules on this site begin with a READ ALOUD CHECK that is written in a new and ‘phonetically regular’ English code.

100 PROFESSIONALLY GRADED AUDIO LESSONS

Between the 1960s and 1970s, the Australian government, through its Department of Immigration, employed a group of truly ingenious ESL teachers. These teachers produced a set of 120 lessons that were used to teach many thousands of ESL students all over Australia. In particular, the oral dialogue sections of the first 100 of these lessons were regularly broadcast over Australia’s national radio networks. Vinyl records of the broadcasts were also used by teachers in classrooms all around the country. It is now some years since these out-of-date, out-of-print and out-of-copyright materials were abandoned; yet upon careful analysis they still prove to be second to none for ESL teaching excellence.

Years ago, the student notes for the 120 lessons were printed in what are now the 6 ‘heritage era’ books that were once distributed freely by the Australian Government. The 17 vinyl records of the broadcasts were packaged in 2 boxes that had roughly the weight and volume of a fully packed briefcase. In this re-launch of Australia’s heritage era ESL books and recordings, Virtual Phonetics has renamed the set of 17 records and 6 books as Talk All the Way to Fluent English. With an abundance of gratitude and admiration for the era of teachers who taught before, Virtual Phonetics has improved on the scripts that were originally created with the restrictions of typewriter fonts of half a century ago. These changes have converted a very bulky teaching kit into a very convenient one that most ESL students can now use with any Internet-enabled device. The old lesson plans for learning to speak English still apply to our modern ESL students because… “English as ‘she’ is still spoken around the world”… has hardly changed at all in the intervening half century.
The Virtual Phonetics program is a simple first stage toward helping ESL students to read and spell English words with greater accuracy. It is based upon a method for the teaching of basic literacy that first originated some 3,200 years ago. This ancient method is still being used today for the teaching of Hebrew literacy skills throughout the world. Virtual Phonetics basically does for modern English words what the ‘vocalization marks’ of Hebrew have done for Hebrew words for millennia. In this sense, no other method for the teaching of beginning reading skills has ever been more thoroughly trialled. Virtual Phonetics puts 10 special marks around the letters of English words to show how they are pronounced. See this sample:

There w1s once a king of Persia wh& t$$k delight in d&ing things in very uncommon ways. At one time he w1s in need of a man that wo5ld 3lways d& wh1t he told him to d&, and he t$$k a very strange way to find him. He sent %ut his w!rd that he w1nted a man to w!rk f@r him in his g2rden. M@re than a hundred came, and, from am#ng them he chose tw&. He showed them a l2rge b2sket in the g2rden, and told them to fill it with w3ter from the well near by. After they had begun their w!rk he left them, saying “When the sun is d%wn I shall c#me and see y@ur w!rk; and, if I find that you have d#ne it well, I shall pay you.” Then the king left them alone.

These 10 marks reduce the complexity of our English spelling or sounding-out ‘rules system’ from nearly 400 ‘rules’ down to around 60 main ones, and this without changing the spelling of any of the words in the text. This promises to make Virtual Phonetics especially useful both for beginner ESL students and for the re-teaching of basic spelling and reading skills to some older (born-to-English) students and workers. The shorter the alphabetic code, the easier it is to crack.

ENGLISH TEXT CONVERSION APPLICATION

Teachers, students and professional text book writers are all likely to be intrigued by an ENGLISH TEXT CONVERSION APP that enables the INSTANT conversion of any ordinary English text into a phonetically regular one! This app is the first ever of its type. Above all, it promises to be a very practical app that is currently able to convert 15,500 English words.

TESTING AND TEACHING ENGLISH ARTICULATION SKILLS WITH ASIAN (‘TONAL’) LANGUAGE STUDENTS

The teachers resource is a direct teaching program designed to help teachers with students who have English pronunciation problems that are typical of students born into a ‘tonal’ language community. The program provides highly specific speech articulation training. It is based rigorously upon the type of precise science that is involved in the field of SPEECH PATHOLOGY. It also integrates a thorough breakdown of what is known as ‘the acoustic design’ that underpins the pronunciation of English words. Whilst this program does sound very technical in its presentation, it is in practice a very simple program to implement. No jargon is used. The program was originally designed here in Australia, during the 1980s, among students born into tonal language communities such as those of Vietnam, Cambodia, China and Taiwan. It is the only program of its type that I know of… and it works!
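As a toy illustration of the consonant-skeleton demonstration shown in the background section above (only a rough sketch: the original demonstration also merges doubled vowels into single dashes, which this version does not), such text can be generated mechanically:

```python
import re

def consonant_skeleton(text: str) -> str:
    # Replace each vowel letter with a hyphen, leaving the consonant
    # frame of every word intact, as in "Fr-m th- t-m- -f M-s-s".
    return re.sub(r"[aeiouAEIOU]", "-", text)

print(consonant_skeleton("From the time of Moses, Hebrew words have been written"))
# -> Fr-m th- t-m- -f M-s-s, H-br-w w-rds h-v- b--n wr-tt-n
```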
Work and Energy Conservation

Mechanical energy describes the ability of an object to do work. The work done on an object arises from a force applied over a distance. When you are doing work against inertia, the work done equals the change in kinetic energy of the object. When you are doing work against continuous resistive forces, such as gravity or spring tension, the work done equals the change in potential energy of the object. Questions you may have include: What is the equation for work? How is work the change in kinetic energy? When is work the change in potential energy? This lesson will answer those questions.

Equation for work

The definition of mechanical work, as opposed to thermodynamic work, is that it equals a force exerted against some resistance times the distance traveled while that force is being applied. Unfortunately, many physics textbooks carelessly omit the idea that there is a resistive force of inertia, as well as other possible resistances. The equation for work is W = F × d, where F is the applied force and d is the distance over which it acts.

The increase in the mechanical energy of the shot in the shot put equals only the increase in its kinetic energy, because we decided to neglect its potential energy. In order to maximize the kinetic energy of the human body or of sport equipment, we must exert the greatest possible force along the longest possible distance. This way we can make use of the knowledge of the relation between energy and work to improve our technique in certain sports, especially in athletics. According to the relation between work and energy, the velocity is maximized by the greatest possible force acting along the longest possible distance.

Shot-putters therefore often start their throw by standing on one foot, bent forward over the edge of the shot-put circle, with their back towards the direction of the throw, to maximize the distance along which their force will act on the shot and thus to also maximize the initial velocity of the shot at the moment of the throw (Figure 13: initial phases of the shot put, allowing the thrower to maximize the work performed during the throw). The longer distance along which the force acts on the shot, and the ability to use larger muscle groups, thus leads to longer throws and better results.

Muscles can also perform negative work, decelerating a moving body or object; this happens mostly in catching projectiles, landing, and so on. Human muscles perform negative work, for example, when our body lands on the ground. During landing it is important to maximize the distance along which the projectile is decelerating. By making the stopping distance longer we make impact forces smaller. We must realize, however, that prolonging the stopping distance by bending our knees deeply, for example, does not necessarily lead to smaller reaction forces in specific joints. To decrease impact forces and increase stopping distance we also use various materials.

The law of conservation of mechanical energy can also be used in studying the motion of projectiles. In the pole vault, the athlete's kinetic energy is transformed into deformation energy of the pole and subsequently into the increase in potential energy of the athlete.
In other words, the faster the pole vaulter runs and the better his pole is able to transform kinetic energy into potential energy through deformation energy, the higher he jumps. Part of the energy is of course transformed into other types of energy, for example internal energy of the pole, resulting in heat. In mechanics, the ability to do work quickly is described by the quantity called power. Mathematically this can be expressed as P = W / t, the work done divided by the time taken.
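As a minimal numerical sketch of the three relations above (W = F × d, the work-energy theorem, and P = W / t), with illustrative values that are not taken from the text:

```python
import math

# Illustrative values only: a constant force pushing a 7.26 kg shot.
force = 50.0      # N, assumed average force along the push
distance = 1.7    # m, assumed length of the push
mass = 7.26       # kg, men's shot

work = force * distance                      # W = F * d, in joules

# Work-energy theorem: starting from rest, W = (1/2) m v^2
release_speed = math.sqrt(2 * work / mass)   # m/s

# Power is the rate of doing work: P = W / t
push_time = 0.3                              # s, assumed duration of the push
power = work / push_time                     # watts

print(f"work = {work:.0f} J, release speed = {release_speed:.2f} m/s, "
      f"power = {power:.0f} W")
```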
Fourth-generation programming language

A fourth-generation programming language (4GL) is any computer programming language that belongs to a class of languages envisioned as an advancement upon third-generation programming languages (3GL). Each of the programming language generations aims to provide a higher level of abstraction of the internal computer hardware details, making the language more programmer-friendly, powerful, and versatile. While the definition of 4GL has changed over time, it can be typified by operating more with large collections of information at once rather than focusing on just bits and bytes. Languages claimed to be 4GL may include support for database management, report generation, mathematical optimization, GUI development, or web development. Some researchers state that 4GLs are a subset of domain-specific languages. In the 1980s and 1990s, there were efforts to develop fifth-generation programming languages (5GL).

History

Though used earlier in papers and discussions, the term 4GL was first used formally by James Martin in his 1981 book Applications Development Without Programmers to refer to non-procedural, high-level specification languages. In some primitive way, early 4GLs were included in the Informatics MARK-IV (1967) product and Sperry's MAPPER (1969 internal use, 1979 release).

The motivations for the '4GL' inception and continued interest are several. The term can apply to a large set of software products. It can also apply to an approach that looks for greater semantic properties and implementation power. Just as the 3GL offered greater power to the programmer, so too did the 4GL open up the development environment to a wider population.

The early input scheme for the 4GL supported entry of data within the 72-character limit of the punched card (8 bytes used for sequencing), where a card's tag would identify the type or function. With judicious use of a few cards, the 4GL deck could offer a wide variety of processing and reporting capability, whereas the equivalent functionality coded in a 3GL could subsume, perhaps, a whole box or more of cards. The 72-character metaphor continued for a while as hardware progressed to larger memory and terminal interfaces. Even with its limitations, this approach supported highly sophisticated applications.

As interfaces improved and allowed longer statement lengths and grammar-driven input handling, greater power ensued. An example of this is described on the Nomad page. Another example of Nomad's power is illustrated by Nicholas Rawlings in his comments for the Computer History Museum about NCSS (see citation below). He reports that James Martin asked Rawlings for a Nomad solution to a standard problem Martin called the Engineer's Problem: "give 6% raises to engineers whose job ratings had an average of 7 or better." Martin provided a "dozen pages of COBOL, and then just a page or two of Mark IV, from Informatics." Rawlings offered the following single statement, performing a set-at-a-time operation...
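The Nomad statement itself is not reproduced in the excerpt. As a rough modern analogue (a sketch in Python, not Nomad, with invented sample records), the set-at-a-time idea is that one declarative statement transforms the whole collection, where a 3GL would loop record by record:

```python
# Invented sample data; a real 4GL statement would address a database
# table directly rather than an in-memory list.
engineers = [
    {"name": "Ada",   "salary": 70000.0, "ratings": [8, 7, 9]},
    {"name": "Brian", "salary": 65000.0, "ratings": [6, 5, 7]},
]

# "Give 6% raises to engineers whose job ratings average 7 or better,"
# expressed as a single set-at-a-time transformation.
engineers = [
    {**e, "salary": e["salary"] * 1.06}
    if sum(e["ratings"]) / len(e["ratings"]) >= 7 else e
    for e in engineers
]

print(engineers)  # Ada gets the raise; Brian is unchanged
```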
The development of the 4GL was influenced by several factors, with the hardware and operating system constraints having a large weight. When the 4GL was first introduced, a disparate mix of hardware and operating systems mandated custom application development support that was specific to the system in order to ensure sales. One example is the MAPPER system developed by Sperry. Though it has roots back to the beginning, the system has proven successful in many applications and has been ported to modern platforms. The latest variant is embedded in the BIS offering of Unisys. MARK-IV is now known as VISION:BUILDER and is offered by Computer Associates.

The Santa Fe railroad used MAPPER to develop a system, in a project that was an early example of 4GL, rapid prototyping, and programming by users. The idea was that it was easier to teach railroad experts to use MAPPER than to teach programmers the "intricacies of railroad operations".

One of the early (and portable) languages that had 4GL properties was Ramis, developed by Gerald C. Cohen at Mathematica, a mathematical software company. Cohen left Mathematica and founded Information Builders to create a similar reporting-oriented 4GL, called FOCUS.

Later 4GL types are tied to a database system and are far different from the earlier types in their use of techniques and resources that have resulted from the general improvement of computing with time. An interesting twist to the 4GL scene is the realization that graphical interfaces and the related reasoning done by the user form a 'language' that is poorly understood.

Types

A number of different types of 4GLs exist:
- Table-driven (codeless) programming, usually running with a runtime framework and libraries. Instead of using code, the developer defines their logic by selecting an operation in a pre-defined list of memory or data table manipulation commands. In other words, instead of coding, the developer uses table-driven algorithm programming (see also control tables that can be used for this purpose; a sketch of the idea follows this list). A good example of this type of 4GL language is PowerBuilder. These types of tools can be used for business application development, usually consisting of a package allowing for both business data manipulation and reporting; therefore they come with GUI screens and report editors. They usually offer integration with lower-level DLLs generated from a typical 3GL for when the need arises for more hardware/OS-specific operations.
- Report-generator programming languages take a description of the data format and the report to generate, and from that they either generate the required report directly or generate a program to generate the report. See also RPG.
- Similarly, forms generators manage online interactions with the application system users or generate programs to do so.
- More ambitious 4GLs (sometimes termed fourth-generation environments) attempt to generate whole systems automatically from the outputs of CASE tools, specifications of screens and reports, and possibly also the specification of some additional processing logic.
- Data management 4GLs such as SAS, SPSS, and Stata provide sophisticated coding commands for data manipulation, file reshaping, case selection, and data documentation in the preparation of data for statistical analysis and reporting.
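A minimal sketch of the table-driven idea referenced in the first bullet above (hypothetical operations and records; real products such as PowerBuilder work at a much higher level): the "program" is a data table of rows, and a small interpreter dispatches on each row.

```python
# The logic lives in a data table, not in code: each row names an
# operation, a field, and a value. (All names here are invented.)
control_table = [
    {"op": "set",   "field": "status", "value": "active"},
    {"op": "scale", "field": "salary", "value": 1.06},
]

record = {"status": "new", "salary": 50000.0}

# A tiny interpreter that dispatches on the "op" column.
def apply_row(rec: dict, row: dict) -> None:
    if row["op"] == "set":
        rec[row["field"]] = row["value"]
    elif row["op"] == "scale":
        rec[row["field"]] *= row["value"]
    else:
        raise ValueError(f"unknown operation: {row['op']}")

for row in control_table:
    apply_row(record, row)

print(record)  # status is now "active"; salary scaled up by 6%
```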
Some 4GLs have integrated tools that allow for the easy specification of all the required information:
- James Martin's version of the Information Engineering systems development methodology was automated to allow the input of the results of system analysis and design in the form of data flow diagrams, entity relationship diagrams, entity life history diagrams, etc., from which hundreds of thousands of lines of COBOL would be generated overnight.
- More recently, Oracle Corporation's Oracle Designer and Oracle Developer Suite 4GL products could be integrated to produce database definitions and the forms and reports programs.

Low code environments

In the twenty-first century, 4GL systems have emerged as "low code" environments or platforms for the problem of rapid application development in short periods of time. Vendors often provide sample systems such as CRM, contract management, and bug tracking from which development can occur with little programming.

Examples

General use / versatile: Accelerator (Productivity), Accell/SQL (4GL) from Unify Corporation, CA-Telon 4GL Cobol/PLI generator, Cognos PowerHouse 4GL, Forté TOOL (transactional object-oriented language), IBM Rational EGL (Enterprise Generation Language), Omnis Studio SDK, Oracle Application Development Framework, OutSystems (Productivity/PaaS), DEC RALLY, SheerPower4GL (Microsoft Windows only), SQLWindows/Team Developer, Unix Shell, Visual DataFlex (Microsoft Windows only), Visual FoxPro.

Report generators: extracting data from files or a database to create reports in a wide range of formats is done by report generator tools.

Data manipulation, analysis, and reporting languages: Ab Initio, Audit Command Language, Clarion Programming Language, ADS/Online (plus transaction processing), IGOR Pro, MAPPER (Unisys/Sperry, now part of BIS), MARK-IV (Sterling/Informatics, now VISION:BUILDER of CA), Simulink (a component of MATLAB), Progress 4GL, SQL PL, Wolfram Language.

Other example categories include database query languages, GUI creators, mathematical optimization tools, database-driven GUI application development, low code / no code development platforms, screen painters and generators, and web development languages.

See also

First-generation programming language, second-generation programming language, third-generation programming language, list of fourth-generation programming languages, domain-specific programming language, rapid application development.

References
- 35th Hawaii International Conference on System Sciences - 1002 Domain-Specific Languages for Software Engineering. Archived May 16, 2011, at the Wayback Machine.
- Arie van Deursen; Paul Klint; Joost Visser (1998). "Domain-Specific Languages: An Annotated Bibliography". Archived from the original on 2009-02-02. Retrieved 2009-03-15.
- Martin, James. Application Development Without Programmers. Prentice-Hall, 1981. ISBN 0-13-038943-9.
- "IBM Punch Cards". www.columbia.edu.
- "Data Mining Software, Data Analysis, and More: Unisys Business Information Server Features". Unisys. 2006-08-21. Archived from the original on 2006-08-21. Retrieved 2019-02-03.
- Louis Schlueter (1988). User-Designed Computing: The Next Generation. [Book on report generator and MAPPER systems.]
- Barbara Canning McNurlin; Ralph H. Sprague (2004). "Ch. 9". Information Systems Management in Practice. Pearson/Prentice Hall. ISBN 978-0-13-101139-7.
- Forrest, Conner. "How Salesforce is using low-code orchestration to save 'floundering IoT projects'". ZDNet.
- Marvin, Rob (August 10, 2018). "The Best Low-Code Development Platforms for 2019". PCMAG.
"25 simple tools for building mobile apps fast". InfoWorld. - "DronaHQ. Build apps without coding". www.dronahq.com. - "K2 - Digital Process Automation". www.k2.com. - "Kony. Accelerate digital success". Kony. - This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.
E/MS Unit I Activity 2: The Fate of Indian “Praying Towns”

Ask students to imagine what happened to the Christian Indians during King Philip’s War. Then read aloud or share key points from the excerpts from “John Hoar’s Stand.”
- Why do students think the colonists treated the Christian Indians so badly?
- What might have been a reasonable and fair solution to the conflict?
- Can students think of other instances in history when innocent people were imprisoned in this way (such as the internment of Japanese-Americans during WWII)?
Shielded metal arc welding (SMAW), more commonly referred to as stick welding, is a technique that runs an electrical current through a flux-coated electrode into the metal being welded. The process creates an electrical arc when the welding rod touches the metal, producing sufficient heat to melt the parent metals and the rod, fusing them in a weld bead. Selecting a welding rod depends on the thickness, type and condition of the metal you are attempting to weld, as well as the type of electrical current you are using.

Read the welding rod codes. Each welding rod is stamped with a code designation consisting of the letter "E" plus four or five digits, such as E6011. The letter "E" refers to the electrical arc welding process. The first two numbers refer to the tensile strength of the metal in the rod after it forms a weld bead. The third number refers to the electrode position. The final digits refer to the flux coating on the rod, which correlates to the type of current to be used.

Select the proper welding rod diameter. The diameter is correlated to the thickness of the metal sheet you are welding. Therefore, for 1/4-inch-thick sheeting, use a 1/4-inch-diameter welding rod. When welding a sheet that is in between rod diameter sizes, select a rod that is slightly larger in diameter than the thickness of the sheeting.

Determine the type of electrical current needed for the welding you are performing. DC current can be used with the welding rod attached to either the positive or the negative electrode. A DC positive current will leave a deeper penetrating weld, whereas a DC negative current will produce a weld seam of medium depth.

Identify the tensile strength required for the welding rod. A rod that begins with the number 70 will have a strength of 70,000 pounds per square inch, a rod that begins with the number 65 will have a strength of 65,000, and so on. Although the tensile strength required for a weld is determined by the type and application of the metal being welded, to avoid joint failure you should choose a rod tensile strength that is greater than the rated strength of the metal being welded.
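As a small illustrative sketch of reading the code (not an official AWS reference; the position and coating/current tables below are partial, based on commonly published values, so check a supplier chart before relying on them):

```python
# Decode a stick electrode code such as "E6011".
# Partial, illustrative lookup tables; verify against an AWS chart.
POSITIONS = {"1": "all positions", "2": "flat and horizontal only"}
COATINGS = {
    "0": "high cellulose sodium, DC+ only",
    "1": "high cellulose potassium, AC or DC+",
    "3": "rutile potassium, AC or DC",
    "8": "low hydrogen potassium + iron powder, AC or DC+",
}

def decode_rod(code: str) -> dict:
    if not (code.startswith("E") and len(code) == 5):
        raise ValueError("expected a four-digit code like E6011")
    digits = code[1:]
    return {
        "tensile_strength_psi": int(digits[:2]) * 1000,  # 60 -> 60,000 psi
        "position": POSITIONS.get(digits[2], "unknown"),
        "coating_and_current": COATINGS.get(digits[3], "unknown"),
    }

print(decode_rod("E6011"))
# {'tensile_strength_psi': 60000, 'position': 'all positions',
#  'coating_and_current': 'high cellulose potassium, AC or DC+'}
```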
What is honey?

Bees produce honey from the nectar and other secretions of flowers and other living parts of trees and plants. The harvest is first mixed with the bees’ own special substances, then deposited, dehydrated and eventually stored in the honeycomb cells to ripen.

TRANSFORMING NECTAR INTO HONEY

Nectar starts to transform into honey as soon as it reaches a bee’s honey stomach. Nectar thus begins to become dehydrated and, with the help of enzymes, polysaccharides are transformed into monosaccharides while the forager bee is still flying back to its beehive. Nectar placed in a honeycomb is thus no longer of the same composition as it was while still in a flower. But the little drops of liquid placed in a honeycomb cell or upon its walls are still far from being ripe honey – they still contain too much water as well as a good measure of polysaccharides. For honey to achieve ripeness, bees have to repeatedly transport it from one honeycomb cell into another. That requires lots of comb surface as well as plenty of workforce. Once the percentage of water has been reduced to about 17 to 20% of the total content, most of the polysaccharides have turned into glucose and fructose, the honeycomb cells are full and the honey has achieved the necessary viscosity, the cells will be capped. Honey is now a ripe, concentrated, carbohydrate-rich feed which will not ferment and which bees can eat.

BOTANIC CLASSIFICATIONS OF HONEY

Flower honey. Flower honey is produced from the nectar harvested from flowers. Honey is called monofloral (single-plant honey) if the pollen grains of a particular species of plant make up more than 50% of its contents. Honey is called multifloral (honey of multiple plants) if the pollen grain percentage of any particular plant is less than 50% of the total.

Honeydew honey. Bees produce honeydew honey from the saccharine excreta (honeydew) of aphids or the sweet secretions (nectar) of the leaves and twigs of some trees. Honeydew honey is slow to crystallize or, on some occasions, will never crystallize.

COLOUR, FLAVOUR AND AROMA OF HONEY

Honey’s flavour, colour and aroma reflect the type of plants from which the bees have harvested the nectar. The colour of honey depends on the pigments (carotene, xanthophyll, chlorophyll) contained in the nectar.

CHEMICAL COMPOSITION OF HONEY

The chemical composition of a particular honey depends on the harvested plants, the prevailing climate and the methods of processing applied. Honey consists of about 20% water and 80% solids. The solids are mainly monosaccharides: grape sugar (glucose) and fruit sugar (fructose). Apart from water and monosaccharides, honey contains vitamins (B1, B2, B3, B6, C, H, K, E) and other phytochemicals – enzymes (catalase, lipase, invertase, diastase), pollen grains, flavourings, colorants, fragrances, acids (amino acids, malic and citric acids, et al.), proteins and mineral nutrients (potassium, calcium, cobalt, magnesium, manganese, sodium, iron, tin and copper). Additionally, honey contains special substances called inhibitors that give honey antibacterial properties, i.e. make honey capable of destroying numerous infectious micro-organisms or at least checking their progress. Such properties of honey allow for its effective use as an external cure for infected wounds or ulcers. But honey is treasured even more as an internal cure for all types of bodily exhaustion as well as gastric and duodenal ulcers and jaundice.

Mono- and polysaccharides.
The chemical composition of honey is radically different from that of other sweeteners, e.g. regular cane and beetroot sugar and products made from them. Honey contains monosaccharides, while beetroot sugar and all of its derived products are classified as polysaccharides, which the human organism needs to transform into monosaccharides before the blood is able to process them and channel them into the metabolism of the body.

CRYSTALLIZATION OF HONEY

The higher the level of glucose in the honey, and the higher the concentration of crystallizing agents, the quicker it will crystallize. Honey crystallizes quickest at 13-14°C. At a lower or a higher temperature the crystallization process is slower, and at about 27-32°C it ceases completely (the temperature inside a beehive is about 30°C). At 40°C, honey loses its inherent structure.

FERMENTATION OF HONEY

Fermentation of honey is the process of transforming the saccharides contained in the honey into wine spirits and later on into vinegar. Fermentation of honey is caused by a type of yeast-fungus that is able to develop in highly concentrated saccharine solutions. However, honey will ferment only in a raw state, i.e. before dehydration has reached its later stages.

STORAGE OF HONEY

Honey is best suited for storage in a cold, dark room where the temperature stays at around 10°C. However, long-term storage requires the right containers. Hermetic glazed clay, glass or food-grade plastic containers are ideal, for honey is inclined to absorb moisture and imbibe all surrounding aromas. We strongly advise against storing honey in iron, copper or zinc-plated containers. Chemical reactions that take place between iron and saccharides, or zinc and the organic acids contained in the honey, produce chemical compounds that are dangerous to human health.

QUALITY OF HONEY

The quality of honey can be judged from its appearance, aroma and flavour, yet it would be unwise to make a full assessment of any particular honey solely upon these criteria. The precise quality and condition of honey can only be determined by physicochemical analysis.
Saturn’s moon Iapetus is one of the more mysterious objects in the Solar System. It’s fairly large, the 11th largest moon in the Solar System, and made mostly of ice. But Iapetus is a puzzle. Half of the moon is dark-coloured and the other half light, with no shades of grey. It is bulging around its equator, as if it is rapidly spinning. But it actually rotates rather leisurely, once every 79 Earth-days. Most puzzling of all is a ridge that stretches almost half way round its equator (see picture above). Today Iapetus is slowly giving up its secrets. Planetary geologists recently solved the puzzle of its two-tone appearance. It turns out that the dark stuff is the chemical residue left when water ice sublimates. The thinking is that about a billion years ago, one side of Iapetus began to sublimate a little more quickly than the other, leaving a dark residue. The ice then condensed on the other side of the moon making it lighter. The darker side then absorbed more sunlight, making it warmer and increasing the rate of sublimation in a positive feedback cycle. It is this cycle that has left the moon in its current two tone state. Today, Harold Levison and buddies at the Southwest Research Institute in Boulder, Colorado, have hit on an explanation for the other two puzzling features: Iapetus’s bulge and ridge. Their theory is that early in its history, Iapetus spun very quickly, probably at a rate of once every 16 hours or so. This caused it to bulge around its equator. At this time it was hit by another large moon, which catapulted a huge volume of ejecta into orbit (a little like the collision that formed our Moon). This orbiting mass of rubble then fell victim to two separate processes. To understand these forces, we first need a little bit of background about orbital dynamics. Astronomers have long known that there is a particular distance from any gravitational object beyond which rubble can condense to form a solid body. This is called the Roche radius. However, anything closer than this gets torn apart by tidal forces and so never condenses. Levison and co say the ejecta around Iapetus must have spanned the Roche radius. The stuff beyond this limit condensed to form a new moon, which gradually spiralled away from Iapetus. It was this loss of its own satellite that slowed Iapetus’s rotation to the sedate rate we see today. However, its frozen body preserved the shape of the original bulge. But the ejecta inside the Roche radius could not have formed a solid body and so must have formed a ring around Iapetus instead. Levison and co think this ring was unstable and must have slowly closed in on the moon. So the equatorial ridge we see today is the leftovers of this ring that settled onto the moon’s surface. That looks like a neat idea. It explains at least two of the great mysteries of Iapetus. And that ain’t bad for a single theory. Ref: arxiv.org/abs/1105.1685: Ridge Formation And De-Spinning of Iapetus Via An Impact-Generated Satellite
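For reference, the standard textbook forms of the Roche limit that the post alludes to (not given in the post itself) are, for a primary of radius $R_M$ and density $\rho_M$ and a satellite of density $\rho_m$:

```latex
d_{\text{rigid}} \approx 1.26\, R_M \left(\frac{\rho_M}{\rho_m}\right)^{1/3},
\qquad
d_{\text{fluid}} \approx 2.44\, R_M \left(\frac{\rho_M}{\rho_m}\right)^{1/3}.
```

Debris orbiting inside this distance is torn apart by tides and remains a ring; debris outside it can accrete into a moon. That is the dividing line Levison and co invoke for the two fates of Iapetus's ejecta.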
Main Difference – Aardvark vs Anteater

Anteaters and aardvarks are two mammals that look very similar but belong to entirely different orders: anteaters belong to the order Pilosa, and aardvarks to the order Tubulidentata. Since both are known for eating ants and termites, they share some characteristic features related to their diet, including long snouts with long, sticky tongues, reduced or absent dentition, large claw-like nails, and tough skin that protects them from biting ants and termites. Because of these features, aardvarks and anteaters are often mistaken for a single species. Despite the similarities, they show many differences and are placed in completely different orders. The main difference between the aardvark and the anteater is that the aardvark has teeth while the anteater does not. More differences between these two mammals are discussed in this article.

Aardvark – Facts, Characteristics, and Behavior

The aardvark is the only living species of the order Tubulidentata and is found in savannahs and wooded grasslands in sub-Saharan Africa. The name 'aardvark' means 'earth pig' in Afrikaans. The aardvark has a heavy body with short, thick legs and a long head and snout, which is unique to this species. The tip of the snout is blunt and has two nostrils, as in pigs. The ears are long and tubular in shape. Their pinkish-gray skin can be seen through their dull brownish or yellow-gray hairs. The tail is long, muscular and hairless. These creatures are so well adapted to eating ants and termites that they are often called pure anteaters. An adult aardvark may weigh up to 60 kg and measure about 1.5 m in length. Their eyes are small and their sight is weak; their hearing and olfactory organs, however, are extremely well developed. The hind feet have five toes and the forefeet four, each with a powerful claw-like nail that is important for digging. The head is elongated, with a small tubular mouth and a long, sticky tongue used to collect termites and ants. Unlike anteaters, the aardvark has teeth. Incisors and canines are present only during the fetal stage; adult aardvarks have 20-22 premolars and molars, which grow continuously. The teeth are hexagonal in cross-section, lack enamel, and have tubular pulp cavities. These creatures are nocturnal and live in burrows during the day.

Anteater – Facts, Characteristics, and Behavior

Anteaters belong to the family Myrmecophagidae, which includes 2 genera and 3 species. Anteaters are found only in savannah and forest habitats in South America. They are well distinguished from other mammals by their elongated skulls with a long rostrum. The mouth is very small, with a long, sticky tongue similar to that of the aardvark. Unlike aardvarks, however, anteaters lack teeth; hence they are called edentate mammals. The tongue is covered with a viscous secretion produced by the submaxillary glands and also bears barb-like spines. Thanks to these adaptations, anteaters can trap ants and termites easily. The giant anteater is the largest species, reaching about 2 m in length and weighing as much as 40 kg. Anteaters have coarse gray hair and a bushy tail. These creatures are terrestrial and can be either nocturnal or diurnal. They are not burrowers. Their long, powerful claws are used for foraging. They are considered endangered due to rapid habitat loss, hunting, and illegal trade.

Difference Between Aardvark and Anteater

- Aardvarks belong to the order Tubulidentata, of which they are the only living species.
- Anteaters belong to the order Pilosa (family Myrmecophagidae, which includes 3 species).
- Aardvarks usually weigh up to 60 kg; anteaters usually weigh up to 40 kg.
- Aardvarks live in Africa; anteaters live in South America.
- Aardvarks are burrowers; anteaters are entirely terrestrial.
- Aardvarks are nocturnal; anteaters are either diurnal or nocturnal.
- Aardvarks have teeth; anteaters lack teeth.
- Aardvarks have hairless, muscular tails; anteaters have bushy tails.
Encaustic - The Ancient Painting Technique

Coming from the Greek word enkaustikos, which means "to heat" or "to burn", encaustic is the name both for a type of pigmented wax and for the process by which the colored wax is melted and applied to a variety of surfaces. Encaustic painting, also known as hot wax painting, uses heated beeswax to which colored pigments are added. The results include some of the most interesting and elaborate abstract paintings, but most famously the Fayum mummy portraits from Egypt (produced around 100-300 AD) and the icons of Greek art, dating as far back as the 5th century. Wax is an excellent preserver of materials, and it was from this application that the style of encaustic painting developed. The simplest mixture adds colored pigments to beeswax, but other recipes call for other waxes, damar resin, or linseed oil. Special metal instruments and brushes are used to shape the liquid or paste before it cools down, and today the process of melting the wax is made easier with the help of heat lamps, heat guns, irons, and other methods of applying heat to the canvas or prepared wooden surfaces. The mixture can be polished to a high gloss, but it can also be modeled, textured, sculpted and combined with collage materials.

The Origin of Encaustic

Encaustic painting was practiced by Greek painters as far back as the 5th century. Most of our knowledge about the origin of hot wax painting comes from the writings of the Roman scholar Pliny the Elder, who described the technique in his Natural History in the 1st century AD. It has been suggested that Pliny lacked practical knowledge of what it actually takes to work with wax and pigments in the artist's studio, but his accounts of the method are nonetheless the earliest recorded ones. From him we also learn about the variety of applications: the method of heating the wax was used in painting portraits and mythological scenes on panels, in coloring marble and terra cotta, and in work on ivory. Pliny even mentions coatings of wax and resin applied to waterproof ships and to decorate warships, and he refers to two painters who began their careers creating elaborate decorations as ship painters. Records also show that the process was used on the architecture and monuments of Greek antiquity, which are white today but were colored with wax in their glory days.

The Magic of the Fayum Portraits

Hot wax painting, as mentioned above, was notably used in the Fayum mummy portraits from Egypt. This modern term is given to a type of naturalistically painted portrait on wooden boards attached to mummies from the Coptic period. The images are part of a long tradition of panel painting that was regarded as the highest form of production in the Classical period. The painted portraits covered the faces of mummified bodies prepared for burial; today detached from the mummies, they represent examples of a highly regarded tradition that, in artistic terms, derives more from Graeco-Roman than from Egyptian sources. It is also important to point out that two categories of these portraits exist: those created in encaustic and those created in tempera. The wax method actually rivaled tempera.
Although tempera was a faster and cheaper process for producing the portraits, and also for portable easel paintings, wax allowed a better finish and more manipulation of the surface, and made the finished work startlingly life-like.

The Modern Day Encaustic

After the decline of the Roman Empire, encaustic fell into disuse. Some work, particularly the painting of icons, was carried on as late as the 12th century, but for the most part it became a lost art. A revival came in the 18th century, mostly among amateur painters trying to rediscover the technique of the ancient authors, while in the 19th century it helped solve the dampness problems that mural painters faced in northern climates. In the 20th century, Fritz Faiss, a student of Paul Klee and Wassily Kandinsky at the Bauhaus, together with Dr. Hans Schmid, rediscovered and improved the technique, raising the melting temperature from 60 to 100 °C. In early 20th-century modern art, experimentation with the painted surface brought the manipulation of wax, dust, sand, nails, cloth, and other materials into non-objective art. Today we see a rise of this technique among amateur artists and lovers of arts and crafts, as well as among various contemporary artists. Portable electric heating devices, a variety of tools, and the development of materials make the heating process much easier and to a certain degree more playful, even though the technique is not an easy one to master. Today there are also many examples beyond paintings on canvas or wooden surfaces that demonstrate the rising interest in this tradition: materials such as paper, card, and even pottery are decorated with complex patterns or images produced in encaustic. The medium's dimensional quality and luminous color have much to do with this appeal: "Its effects, its visual and physical properties, and its range of textural and color possibilities make it eminently suitable for adoption in several different contemporary styles of painting that are not adequately served by our traditional oil-painting process."

Famous Encaustic Artworks

Although encaustic painting is not often discussed in the literature dedicated to the story of art, it has nevertheless played an important role throughout its course. As a method it has not changed much, but it did evolve stylistically over the centuries, adapting to the needs of its time and surrounding culture. Between the Middle Ages and the 18th century, this form of production lost its presence and could only be traced in artworks remaining from the past; painters turned to other media and techniques, such as tempera and fresco, which were easier to implement and did not require the application of fire. In Constantinople and Russia during the medieval period, however, encaustic was apparently used for the painting of icons, and it is also believed that authors like Lucas Cranach and Andrea Mantegna employed it in a selection of their Renaissance artworks. The revival of encaustic started with the discovery of artifacts from Herculaneum and Pompeii (destroyed in AD 79) in the 18th century and of the Fayum portraits in the 19th century. Finally, it culminated in the practice of one great innovator of the last century – Jasper Johns. The little-known origin of encaustic wax painting can be followed through its most famous artworks, from ancient times to the most recent contemporary pieces.
The Fayum Mummy Portraits

Their discovery in the 19th century was a true revelation. Probably the best known of all encaustic artworks are the Fayum funeral portraits, dating back to the 1st and 2nd century AD. Attributed to Greek painters in Egypt, who arrived in the Faiyum basin after it was conquered by Alexander the Great, these portraits were painted while the subject was in the prime of life or after death. They were placed over the person's mummified body (also a practice adopted during that time) as a memorial, and the practice lasted as a funerary custom for about two centuries. Heavily influenced by Graeco-Roman rather than Egyptian painting, they usually show the head and upper chest of a person viewed frontally. When they were discovered, scientists were in awe of their almost perfect state: the color had remained as fresh as if it had been applied only recently.

Theotokos of Blachernae, 7th Century

Theotokos of Blachernae, or Our Lady of Blachernae, or simply Blachernitissa, is a 7th-century encaustic icon depicting the Virgin Mary and Child. It is quite unusual among Orthodox icons because it is not flat but formed in bas relief, using wax combined with the ashes of Christian martyrs killed in the 6th century. The work was lost between 1434 and the mid 17th century, and it is considered one of the most important portraits of its kind even today. Described by many as "outstanding" for its bodily form and remarkable relief, the painting has been restored several times, which has made it harder for experts to determine its exact age, although they are reasonably certain that it is "of ancient date". Today the Blachernitissa is held by Moscow's Tretyakov Gallery.

James Ensor – Fireworks, 1887

James Ensor was a major figure of the Belgian avant-garde of the late 19th century and a forerunner of the Expressionism movement. Although he was an atheist, he often explored religious narratives, political satire and carnivalesque imagery, of which his 1887 Fireworks is perhaps an example. Since this is a painting from the later 19th century, we can conclude that the artist was influenced by the reappearance of encaustic as a medium and began experimenting with it. This artwork is very likely his only piece in this technique, and it seems to have served the artist's intention to emphasize the brightness of fireworks, whose transient nature he managed to immortalize in a durable material such as wax.

Diego Rivera – Creation, 1922-23

This piece by Diego Rivera proves that encaustic could be used for mural painting as well. Created over more than a year, it covers approximately one thousand square feet of a wall of the Escuela Nacional Preparatoria of the San Ildefonso College in Mexico City. It was the very first government-commissioned artwork for the famous muralist, and it revealed his rebellious, communist side, which would remain one of his most emphasized traits for the rest of his life. The mural was done in encaustic and gold leaf, both of which he later abandoned in favor of fresco. Truth be told, Diego Rivera was never very fond of this particular work, describing it as "too Italian" – perhaps precisely because of the encaustic? In 1931, however, he used encaustic again, for a painting on canvas entitled Flower Festival: Feast of Santa Anita.
Jasper Johns – Flag, 1955

Perhaps it is because encaustic was quite a difficult and cumbersome painting material that it was never overly popular with creatives – yet Jasper Johns found it a unique experience. His foray into the technique seems to have begun in the early 1950s, and although Flag is his most famous artwork executed in wax painting, the 1954 Star is actually his first. A curious mind, Johns experimented with different painting methods and tried to beat the slow-drying nature of his oil paints by mixing beeswax with tube paint and melting it on a hot plate. For Flag, he created numerous layers of paint, encaustic, newspaper and collage, one overlaying the other: he first dipped strips of newspaper into wax, either pigmented or clean, and adhered them to the canvas with the same medium; then he applied strokes of wax with brushes and palette knives to create different textures on the surface, turning the medium into an integral part of the piece. In fact, critic Jonathan Jones came to describe Flag as "a monument: the flag mummified", citing the artist's inspiration from the aforementioned Fayum mummy portraits. Between 1954 and 1958, as well as during the 1980s, Jasper Johns continued using encaustic in his practice, often as a "mask" over his many compositions – it helped him simultaneously shield and reveal parts of his pieces.

Lynda Benglis – Tres Memoria, 1969-70

The American artist Lynda Benglis began experimenting with encaustic in the 1960s, in part inspired by Jasper Johns and his dedication to wax. She purchased the material from a lipstick company and mixed it with damar resin crystals and powdered pigments, possibly becoming the first to apply this particular formula. Between 1966 and 1975, she produced a group of paintings that became seminal among those celebrating a strong female sensibility, following early works that consisted of wax layered into sculptural forms on masonite. Tres Memoria is one of Benglis's smooth-surfaced encaustic paintings, executed on a narrow, vertical support with multicolored liquid wax in even brush strokes, which she then manipulated with a blowtorch to obtain a marbleized color effect. Part painting and part sculpture, these works all follow the same format, determined by the width of the brush and the length of the artist's arm.

Tony Scherman – The 1789: Napoleon and the French Revolution Series

Over the years, the contemporary artist Tony Scherman has proved to be one of the most successful practitioners of encaustic painting. His subject matter ranges from animals, food and flowers to, particularly, figures drawn from ancient stories, mythology, popular culture and literature. His most famous series to date is surely 1789: Napoleon and the French Revolution, for which he created "forensic portraits" of the French leader facing a mirror at various stages of his life. For Scherman, encaustic provides an opportunity to achieve a subtle yet gritty atmosphere, soaked in depth of both space and thought, with a certain lushness that adds even more quality to the whole picture. As such, his artworks recall the masterpieces of the Old Masters with a modern twist, something clearly present in his project featuring actress Gillian Anderson as the central figure in Gustave Flaubert's Madame Bovary.

The Encaustic Painting Technique

There are a number of guides on mastering this beautiful yet demanding medium.
Working with wax to produce a creative product is not easy, but we will try to bring you closer to the technical side of this method. Encaustic practice involves the application of hot wax to wood panels, typically in layers, to create more opaque or translucent effects. Each of these layers can be scraped, textured or polished for a variety of finishes. The wax itself is a very sensual medium to work with, and authors can bring their own style to it for some stunning results. Encaustic involves materials such as wood blocks, wax, an electric hot plate, scratching and scraping tools, coloured waxes, fusing tools such as a small heat gun or propane torch, natural bristle brushes, coloured encaustic materials, and any other media you would like to combine. You start by melting the wax on a hot plate until it is liquid. A heated palette is an essential element that provides a surface for heating and mixing encaustic paint and mediums, but other options include electric skillets, crock-pots or electric griddles. The simplest encaustic paint can be a basic mixture of beeswax and an earth pigment, but there are numerous types of waxes you could apply, each with its own transparency, heat curve and character. There is a wide range of coloured waxes you could use, but you can also add colour and pigments yourself to colourless ones for more nuanced results. There are also different resins you can work with to control final working qualities such as melting point, flexibility, hardness, adhesion and durability. Layers of melted wax are applied with natural bristle brushes only, as synthetic ones would certainly melt. To familiarize yourself with the method, you have to feel how it behaves on the surface and play with it. Each layer of paint is fused to adhere it to the surface; this can be done with an indirect method, such as a heat gun or a torch, or a direct one, such as a tacking iron, spatula, heated brush, plaster tools, paint knives, etc. Once you get some layers down, it is fun to play with the surface and texture it to your liking, applying various mark-making tools and methods such as etching and wood-carving tools, or dental, sculpture and clay-working tools. This is just the beginning; once you are accustomed to these basic steps you can add other methods and approaches to create stunning works that reflect your personal style.

Encaustic in Contemporary Art

In the past decades we have been witnessing an encaustic renaissance of sorts. The development and accessibility of modern heating tools, which have replaced a long and complicated heating process and made it much easier, remain important factors in the rising interest in this ancient technique. Contemporary authors have realized the full potential and creative possibilities of applying heat to transform the material from solid to liquid without losing anything, merely relaxing it into a workable state. A tremendously versatile medium that lends itself to all styles, it combines painting, printing, collaging and sculpting.

Contemporary Authors Working With Encaustic

Contemporary encaustic works exhibit a wide range of styles and imagery. Contemporary painters employ an exciting array of approaches and methods, exploring unexpected combinations of encaustic with a surprising variety of other materials.
During the 1950s and 1960s, Jasper Johns was one of the first creatives to make encaustic artworks that entered the mainstream. His flag series of encaustic paintings generated a great deal of attention and brought the medium closer to the forefront of the artistic community. Featuring simple schematic designs such as flags, maps or letters, he became one of the most widely exhibited painters of our time. Today, a significant number of acclaimed artists experiment with the medium. Works by Jean Lebreton are an excellent example of a remarkable combination of contemporary and ancient methods. Following a long career in photography, Lebreton discovered encaustic and began applying it to his photographs, previously developed and glued onto wood. Combining several applications, his works are transformed from common images into fantastic creations. The practice of the American artist Robin Cole Smith involves a different approach to this ancient technique: her drawings are transferred onto wax and suspended at intervals between layers of beeswax. Interestingly, no paper is involved in the final product. The Madrid-born artist José María Cano creates large-scale wax paintings based on newspaper cut-outs, photographs, and comics. He is most famous for his series The Wall Street 100, consisting of large-scale portraits of some of the most influential people from an economic viewpoint. The American artist Christopher Kier specializes in encaustic painting; his sensual, earthy paintings have a sculptural presence and evoke the mysterious qualities of old ruins or archaeological relics. The encaustic oeuvre of the Canadian artist Tony Scherman consists of stunning portraits of historical icons; imbued with both stoicism and emotion, his portraits explore the volatility of human experience. The painter Janise Yntema is another American artist working with the encaustic medium; her abstract works, which make references to figuration and landscape, are created from numerous layers of translucent, pigmented wax fused together into a smooth, glossy finish.

The Myriad of Possibilities

It is somewhat ironic that in a modern age characterized by the advancement of technology, an ancient and demanding painting technique such as encaustic is receiving such widespread interest. Today there are annual conferences, trade shows, and organizations focused on the vitality of this medium. The commercial manufacture of encaustic paint and adapted tools is growing, as is the knowledge of improved practices. Numerous books on the subject are being published, technical workshops are organized, and the medium is studied through various scholarly exhibitions. There are many reasons why contemporary painters are attracted to this unique medium, characterized by the exquisite beauty of its surfaces. Painters are drawn to it for its texture, malleability, quick drying time, stunning and long-lasting colours, and even its aroma. The luminous effect of light penetrating the transparent layers of wax is one of its most captivating aspects. With remarkable adhesive qualities, it is a natural collage medium, and it is perfectly suited to many other mixed-media applications such as photography, paper arts, and digital art.
Authors can mix their own colours, create three-dimensional textures or smooth surfaces, work in thin transparent glazes or heavy impasto, and more; the possibilities seem endless. The medium frees up the creative juices and leads to a continuous loop of inspiration. And since the melting process can be somewhat unpredictable and strange results can occur, it also forces the painter to be flexible and remain open to the countless possibilities this medium allows.
Obesity is a chronic condition defined by an excess of body fat. Body fat has several important functions in the body, such as storing energy and providing insulation. Excess body fat, however, may interfere with an individual's health and well-being, particularly if a patient becomes morbidly obese. Not only does obesity interfere with everyday activities, it also increases the risk of developing serious medical conditions, such as high blood pressure and diabetes. Obesity is a serious health issue presently reaching epidemic proportions in society, and it results in medical complications and early morbidity for a great many people. Other health conditions caused or exacerbated by obesity include heart disease, osteoarthritis, sleep apnea, high cholesterol and asthma. The good news is that obesity is a treatable ailment, and modern medicine provides more remedies for the condition than previously existed.

Causes of Obesity

The balance between calorie intake and energy expenditure determines a person's weight. If more calories are consumed than are expended through exercise and daily activities, a person will gain weight, since the body stores the excess calories. Obesity is, however, a complex problem: research has shown that it does not simply result from a lack of self-control. Its causes are varied and may include hereditary, social, psychological, environmental and metabolic factors, as well as psychological stress. Certain medical conditions, such as hypothyroidism and Cushing's syndrome, may also contribute to obesity.

Diagnosis of Obesity

Obesity is diagnosed not only by the number of excess pounds an individual carries, but by the individual's body mass index, or BMI. The BMI is calculated by dividing weight in kilograms by height in meters squared. Designations of normal and abnormal weight are as follows:

- Underweight: BMI below 18.5
- Normal weight: BMI 18.5 to 24.9
- Overweight: BMI 25 to 29.9
- Obese: BMI 30 to 39.9
- Morbidly obese: BMI 40 or higher

Since BMI doesn't directly measure body fat, it is possible for some people, such as extremely muscular individuals or geriatric patients, to be inaccurately categorized by this system. (A small code sketch of this calculation appears at the end of this article.)

Treatment of Obesity

Obesity is a serious problem, and overcoming it requires a commitment to lifestyle changes. There are several methods of treatment. A safe and effective long-term weight-loss diet must contain balanced, nutritious foods to avoid vitamin deficiencies and malnutrition. A healthy diet should:

- Be high in whole grains, fiber, fruits and vegetables
- Contain lean meat, fish or vegetable protein
- Be low in sweets, fats, and fried foods
- Eliminate consumption of fast food

Adopting good eating habits is essential to achieving a healthy weight. Regular exercise is an important part of a healthy life: physical activity and exercise help to burn calories, and the number of calories burned depends on the type, duration, and intensity of the activity. Treating obesity with exercise is most effective when combined with a healthy diet, since exercise alone has a limited effect on weight loss. Recommendations for healthy exercise habits include getting 30 minutes of moderate exercise 5 to 7 days a week.

Prescription Weight-loss Medication

Weight loss is best achieved through a healthy diet and regular exercise. For some individuals, weight-loss medications may be of help.
Such medications are not usually prescribed unless diet and exercise have been tried and have failed, and even with prescribed medications, lifestyle changes must be implemented for obesity to be treated successfully.

Counseling or Psychotherapy

In many cases, talking with a counselor is helpful to a patient suffering from obesity, almost always in combination with other methods of weight loss. Most people who are obese have psychological issues around food and may have a family history that has predisposed them to eating when under stress. While diet and exercise must always be part of weight-loss treatment, talking to a trained professional can also help.

Bariatric Surgery

When other methods of weight loss are unsuccessful, particularly when an individual suffers from obesity-related medical conditions, bariatric surgery may be considered. Under the proper conditions, bariatric surgery can be a lifesaving procedure, though the patient will still be required to make permanent lifestyle changes. Before and after weight-loss surgery, counseling is necessary to help the patient achieve and maintain lasting positive results. Too often, people think they can lose weight quickly through brief, strenuous dieting or sudden spurts of exercise. In fact, the great majority of people who lose weight rapidly regain it within 5 years. More methodical, ongoing treatment is usually more effective and longer-lasting.
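A brief technical aside, returning to body mass index from the diagnosis section above: here is a minimal Python sketch of the calculation and the category cut-offs listed there. The function names are ours, for illustration only, and, as noted earlier, BMI is a screening number rather than a direct measure of body fat.

    def bmi(weight_kg: float, height_m: float) -> float:
        """Body mass index: weight in kilograms divided by height in meters squared."""
        return weight_kg / height_m ** 2

    def bmi_category(value: float) -> str:
        """Map a BMI value to the designations listed in the diagnosis section."""
        if value < 18.5:
            return "Underweight"
        if value < 25:
            return "Normal weight"
        if value < 30:
            return "Overweight"
        if value < 40:
            return "Obese"
        return "Morbidly obese"

    # Example: 95 kg at 1.75 m gives a BMI of about 31.0, i.e. "Obese".
    print(round(bmi(95, 1.75), 1), bmi_category(bmi(95, 1.75)))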
Fetal alcohol syndrome

Fetal alcohol syndrome (FAS) or foetal alcohol syndrome is a pattern of mental and physical defects that can develop in a fetus in association with high levels of alcohol consumption during pregnancy. Alcohol crosses the placental barrier and can stunt fetal growth or weight, create distinctive facial stigmata, damage neurons and brain structures (which can result in intellectual disability and other psychological or behavioral problems), and cause other physical damage. The main effect of FAS is permanent central nervous system damage, especially to the brain. Developing brain cells and structures can be malformed or have their development interrupted by prenatal alcohol exposure; this can create an array of primary cognitive and functional disabilities (including poor memory, attention deficits, impulsive behavior, and poor cause-effect reasoning) as well as secondary disabilities (for example, predispositions to mental health problems and drug addiction). Alcohol exposure presents a risk of fetal brain damage at any point during a pregnancy, since brain development is ongoing throughout pregnancy. As of 1987, fetal alcohol exposure was the leading known cause of intellectual disability in the Western world. In the United States and Europe, the FAS prevalence rate is estimated to be between 0.2 and 2 in every 1000 live births. FAS should not be confused with Fetal Alcohol Spectrum Disorders (FASD), a term describing a continuum of permanent birth defects caused by maternal consumption of alcohol during pregnancy, which includes FAS as well as other disorders and affects about 1% of live births in the US (i.e., about 10 cases per 1000 live births). The lifetime medical and social costs of FAS are estimated to be as high as US$800,000 per child born with the disorder. Surveys have found that in the United States, 10-15% of pregnant women report having recently drunk alcohol, and up to 30% drink alcohol at some point during pregnancy. The current recommendation of the Surgeon General of the United States, the British Department of Health and the Australian Government National Health and Medical Research Council is to drink no alcohol at all during pregnancy.

Signs and symptoms

Growth deficiency is defined as below-average height, weight or both due to prenatal alcohol exposure, and can be assessed at any point in the lifespan. Growth measurements must be adjusted for parental height, gestational age (for a premature infant), and other postnatal insults (e.g., poor nutrition), although birth height and weight are the preferred measurements. Deficiencies are documented when height or weight falls at or below the 10th percentile of standardized growth charts appropriate to the patient's population. The CDC and Canadian guidelines use the 10th percentile as a cut-off to determine growth deficiency. The "4-Digit Diagnostic Code" (4-DDC) allows for mid-range gradations in growth deficiency (between the 3rd and 10th percentiles) and severe growth deficiency at or below the 3rd percentile. Growth deficiency (at severe, moderate, or mild levels) contributes to diagnoses of FAS and PFAS (Partial Fetal Alcohol Syndrome), but not ARND (Alcohol-Related Neurodevelopmental Disorder) or static encephalopathy.
Growth deficiency is ranked as follows by the 4-DDC:

- Severe — Height and weight at or below the 3rd percentile.
- Moderate — Either height or weight at or below the 3rd percentile, but not both.
- Mild — Both height and weight between the 3rd and 10th percentiles.
- None — Height and weight both above the 10th percentile.

Several characteristic craniofacial abnormalities are often visible in individuals with FAS. The presence of FAS facial features indicates brain damage, though brain damage may also exist in their absence. FAS facial features (and most other visible, but non-diagnostic, deformities) are believed to be caused mainly between the 10th and 20th weeks of gestation. Refinements in diagnostic criteria since 1975 have yielded three distinctive and diagnostically significant facial features known to result from prenatal alcohol exposure, which distinguish FAS from other disorders with partially overlapping characteristics. The three FAS facial features are:

- A smooth philtrum — The divot or groove between the nose and upper lip flattens with increased prenatal alcohol exposure.
- Thin vermilion — The upper lip thins with increased prenatal alcohol exposure.
- Small palpebral fissures — Eye width decreases with increased prenatal alcohol exposure.

Measurement of FAS facial features uses criteria developed by the University of Washington. The lip and philtrum are measured by a trained physician with the Lip-Philtrum Guide, a 5-point Likert scale with representative photographs of lip and philtrum combinations ranging from normal (ranked 1) to severe (ranked 5). Palpebral fissure length (PFL) is measured in millimeters with either calipers or a clear ruler and then compared to a PFL growth chart, also developed by the University of Washington. The facial features are ranked as follows:

- Severe — All three facial features ranked independently as severe (lip ranked at 4 or 5, philtrum ranked at 4 or 5, and PFL two or more standard deviations below average).
- Moderate — Two facial features ranked as severe and one feature ranked as moderate (lip or philtrum ranked at 3, or PFL between one and two standard deviations below average).
- Mild — A mild ranking of FAS facial features covers a broad range of facial feature combinations: two facial features ranked severe and one ranked within normal limits; one facial feature ranked severe and two ranked moderate; or one facial feature ranked severe, one ranked moderate and one ranked within normal limits.
- None — All three facial features ranked within normal limits.

These distinctive facial features correlate strongly with brain damage. Sterling Clarren of the University of Washington's Fetal Alcohol and Drug Unit told a conference in 2002: "I have never seen anybody with this whole face who doesn't have some brain damage. In fact in studies, as the face is more FAS-like, the brain is more likely to be abnormal. The only face that you would want to counsel people or predict the future about is the full FAS face. But the risk of brain damage increases as the eyes get smaller, as the philtrum gets flatter, and the lip gets thinner. The risk goes up but not the diagnosis." "At one-month gestation, the top end of your body is a brain, and at the very front end of that early brain, there is tissue that has been brain tissue. It stops being brain and gets ready to be your face ... Your eyeball is also brain tissue. It's an extension of the second part of the brain. It started as brain and 'popped out.'
So if you are going to look at parts of the brain from alcohol damage, or any kind of damage during pregnancy, eye malformations and midline facial malformations are going to be very actively related to the brain across syndromes ... and they certainly are with FAS."

Central nervous system

Central nervous system (CNS) damage is the primary feature of any Fetal Alcohol Spectrum Disorder (FASD) diagnosis. Prenatal exposure to alcohol — which is classified as a teratogen — can damage the brain across a continuum of gross to subtle impairments, depending on the amount, timing, and frequency of the exposure as well as genetic predispositions of the fetus and mother. While functional abnormalities are the behavioral and cognitive expressions of the FAS disability, CNS damage can be assessed in three areas: structural, neurological, and functional impairments. All four diagnostic systems allow for assessment of CNS damage in these areas, but criteria vary. The IOM system requires structural or neurological impairment for a diagnosis of FAS. The 4-DDC and CDC guidelines state that functional anomalies must measure at two standard deviations or worse in three or more functional domains for a diagnosis of FAS. The 4-DDC further elaborates the degree of CNS damage according to four ranks:

- Definite — Structural impairments or neurological impairments for FAS or static encephalopathy.
- Probable — Significant dysfunction of two standard deviations or worse in three or more functional domains.
- Possible — Mild to moderate dysfunction of two standard deviations or worse in one or two functional domains, or by judgment of the clinical evaluation team that CNS damage cannot be dismissed.
- Unlikely — No evidence of CNS damage.

Structural abnormalities of the brain are observable, physical damage to the brain or brain structures caused by prenatal alcohol exposure. Structural impairments may include microcephaly (small head size) of two or more standard deviations below the average, or other abnormalities in brain structure (e.g., agenesis of the corpus callosum, cerebellar hypoplasia). Microcephaly is determined by comparing head circumference (often called occipitofrontal circumference, or OFC) to appropriate OFC growth charts. Other structural impairments must be observed through medical imaging techniques by a trained physician. Because imaging procedures are expensive and relatively inaccessible to most patients, diagnosis of FAS is not frequently made via structural impairments, except for microcephaly.

During the first trimester of pregnancy, alcohol interferes with the migration and organization of brain cells, which can create structural deformities or deficits within the brain. During the third trimester, damage can be caused to the hippocampus, which plays a role in memory, learning, emotion, and encoding visual and auditory information, all of which can create neurological and functional CNS impairments as well.

As of 2002, there were 25 reports of autopsies on infants known to have FAS. The first was in 1973, on an infant who died shortly after birth. The examination revealed extensive brain damage, including microcephaly, migration anomalies, callosal dysgenesis, and a massive neuroglial, leptomeningeal heterotopia covering the left hemisphere. In 1977, Dr. Clarren described a second infant whose mother was a binge drinker. The infant died ten days after birth.
The autopsy showed severe hydrocephalus, abnormal neuronal migration, and a small corpus callosum (which connects the two brain hemispheres) and cerebellum. FAS has also been linked to brainstem and cerebellar changes, agenesis of the corpus callosum and anterior commissure, neuronal migration errors, absent olfactory bulbs, meningomyelocele, and porencephaly.

When structural impairments are not observable or do not exist, neurological impairments are assessed. In the context of FAS, neurological impairment means general damage to the central nervous system (CNS) and the peripheral nervous system (PNS) caused by prenatal alcohol exposure. A determination of a neurological problem must be made by a trained physician, and must not be due to a postnatal insult, such as a high fever, concussion, traumatic brain injury, etc. All four diagnostic systems show virtual agreement on their criteria for CNS damage at the neurological level: evidence of a CNS neurological impairment due to prenatal alcohol exposure will result in a diagnosis of FAS, and functional impairments are then highly likely. Neurological problems are expressed either as hard signs (diagnosable disorders, such as epilepsy or other seizure disorders) or as soft signs. Soft signs are broader, nonspecific neurological impairments or symptoms, such as impaired fine motor skills, neurosensory hearing loss, poor gait, clumsiness, and poor eye-hand coordination. Many soft signs have norm-referenced criteria, while others are determined through clinical judgment. "Clinical judgment" is only as good as the clinician, and soft signs should be assessed by either a pediatric neurologist, a pediatric neuropsychologist, or both.

When structural or neurological impairments are not observed, all four diagnostic systems allow CNS damage due to prenatal alcohol exposure to be assessed in terms of functional impairments. Functional impairments are deficits, problems, delays, or abnormalities due to prenatal alcohol exposure (rather than hereditary causes or postnatal insults) in observable and measurable domains related to daily functioning, often referred to as developmental disabilities. There is no consensus on a specific pattern of functional impairments due to prenatal alcohol exposure, and only the CDC guidelines label developmental delays as such, so criteria vary somewhat across diagnostic systems. The four diagnostic systems list various CNS domains that can qualify for functional impairment that can determine an FAS diagnosis:

- Evidence of a complex pattern of behavior or cognitive abnormalities inconsistent with developmental level in the following CNS domains — sufficient for a PFAS (partial fetal alcohol syndrome) or ARND (alcohol-related neurodevelopmental disorder) diagnosis using IOM guidelines
- Performance at two or more standard deviations on standardized testing in three or more of the following CNS domains — sufficient for an FAS, PFAS or static encephalopathy diagnosis using the 4-DDC
- General cognitive deficits (e.g., IQ) at or below the 3rd percentile on standardized testing — sufficient for an FAS diagnosis using CDC guidelines
- Performance at or below the 16th percentile on standardized testing in three or more of the following CNS domains — sufficient for an FAS diagnosis using CDC guidelines
- Performance at two or more standard deviations on standardized testing in three or more of the following CNS domains — sufficient for an FAS diagnosis using Canadian guidelines

Other conditions may commonly co-occur with FAS, stemming from prenatal alcohol exposure. However, these conditions are considered Alcohol-Related Birth Defects and not diagnostic criteria for FAS:

- Cardiac — A heart murmur that frequently disappears by one year of age; a ventricular septal defect is most commonly seen, followed by an atrial septal defect.
- Skeletal — Joint anomalies including abnormal position and function, altered palmar crease patterns, small distal phalanges, and small fifth fingernails.
- Renal — Horseshoe, aplastic, dysplastic, or hypoplastic kidneys.
- Ocular — Strabismus and optic nerve hypoplasia (which may cause light sensitivity, decreased visual acuity, or involuntary eye movements).
- Occasional abnormalities — Ptosis of the eyelid, microphthalmia, cleft lip with or without a cleft palate, webbed neck, short neck, tetralogy of Fallot, coarctation of the aorta, spina bifida, and hydrocephalus.

Cause

Prenatal alcohol exposure is the cause of fetal alcohol syndrome. A study of over 400,000 American women, all of whom had consumed alcohol during pregnancy, concluded that consumption of 15 drinks or more per week was associated with a reduction in birth weight. Though consumption of less than 15 drinks per week was not proven to cause FAS-related effects, the study authors recommended limiting consumption to no more than one standard drink per day. Threshold values, moreover, are based upon group averages, and it is not appropriate to conclude that exposure below a threshold is necessarily "safe", because of the significant individual variations in alcohol pharmacokinetics. An analysis of seven medical research studies involving over 130,000 pregnancies found that consuming 2 to 14 drinks per week did not significantly increase the risk of giving birth to a child with either malformations or fetal alcohol syndrome. Pregnant women who consume approximately 144 grams of pure alcohol per day have a 30–33% chance of having a baby with FAS. A number of studies have shown that light drinking (1–2 drinks per week) during pregnancy does not appear to pose a risk to the fetus, and a study of pregnancies in eight European countries found that consuming no more than one drink per day did not appear to have any effect on fetal growth. A follow-up of children at 18 months of age found that those born to women who drank during pregnancy, even two drinks per day, scored higher in several areas of development; in a different study, however, as little as one drink per day resulted in poorer spelling and reading abilities at age 6, and a linear dose-response relationship was seen between prenatal alcohol exposure and poorer arithmetic scores at the same age. Despite intense research efforts, it has not been possible to identify a single clear-cut mechanism for the development of FAS or FASD. On the contrary, clinical and animal studies have identified a broad spectrum of pathways through which maternal alcohol can negatively affect the outcome of a pregnancy.
Clear conclusions with universal validity are difficult to draw, since different ethnic groups show considerable genetic polymorphism for the hepatic enzymes responsible for ethanol detoxification. Several points, however, stand out:

- The placenta allows free entry of ethanol and toxic metabolites like acetaldehyde into the fetal compartment. The so-called placental barrier is no barrier with respect to ethanol.
- The developing fetal nervous system appears particularly sensitive to ethanol toxicity, which impacts negatively on proliferation, differentiation, neuronal migration, axonal outgrowth, integration and fine tuning of the synaptic network. In short, all major processes in the developing central nervous system appear compromised.
- Fetal tissues are quite different from adult tissues in function and purpose. For example, the main detoxicating organ in adults is the liver, whereas the fetal liver is incapable of detoxicating ethanol, as the ADH and ALDH enzymes are not yet expressed at this early stage. Up to term, fetal tissues do not have significant capacity for the detoxification of ethanol, and the fetus remains exposed to ethanol in the amniotic fluid for periods far longer than the decay time of ethanol in the maternal circulation. Generally, fetal tissues have far less antioxidant protection than adult tissues, as they express no significant quantities of ADH or ALDH and far lower levels of antioxidant enzymes like SOD, glutathione transferases or glutathione peroxidases.

Diagnosis

Several diagnostic systems have been developed in North America:

- The Institute of Medicine's guidelines for FAS, the first system to standardize diagnoses of individuals with prenatal alcohol exposure;
- The University of Washington's 4-DDC, which ranks the four key features of FASD on a Likert scale of one to four and yields 256 descriptive codes that can be categorized into 22 distinct clinical categories, ranging from FAS to no findings;
- The Centers for Disease Control's "Fetal Alcohol Syndrome: Guidelines for Referral and Diagnosis", which established general consensus on the diagnosis of FAS in the U.S. but deferred addressing other FASD conditions; and
- Canadian guidelines for FASD diagnosis, which established criteria for diagnosing FASD in Canada and harmonized most differences between the IOM and University of Washington systems.

Fetal alcohol syndrome is the only expression of FASD that has garnered consensus among experts to become an official ICD-9 and ICD-10 diagnosis. To make this diagnosis (or determine any FASD condition), a multi-disciplinary evaluation is necessary to assess each of the four key features. Generally, a trained physician will determine growth deficiency and FAS facial features. While a qualified physician may also assess central nervous system structural abnormalities and/or neurological problems, central nervous system damage is usually determined through psychological assessment; a pediatric neuropsychologist may assess all areas of functioning, including intellectual, language processing, and sensorimotor. Prenatal alcohol exposure risk may be assessed by a qualified physician or psychologist.
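Before itemizing the diagnostic criteria, a purely illustrative Python sketch may help make the decision structure explicit. The names are ours, the boolean inputs stand in for the specialist assessments just described, the exposure levels follow the 4-DDC ranks discussed further below, and this is in no way a clinical tool; the criteria themselves are itemized in the list that follows.

    from enum import Enum

    class ExposureRisk(Enum):
        HIGH = "High Risk"        # confirmed use at high blood alcohol levels, at least weekly in early pregnancy
        SOME = "Some Risk"        # confirmed use below that level, or unknown usage patterns
        UNKNOWN = "Unknown Risk"  # no reliable information about use during pregnancy
        NONE = "No Risk"          # confirmed absence of prenatal alcohol exposure

    def meets_fas_criteria(growth_deficiency: bool,
                           all_three_facial_features: bool,
                           cns_damage: bool,
                           exposure: ExposureRisk) -> bool:
        """Schematic gate over the four key FAS features.

        Confirmed absence of exposure rules out FAS; confirmed or unknown
        exposure keeps the diagnosis possible when the other three features
        are present at clinically significant levels.
        """
        if exposure is ExposureRisk.NONE:
            return False
        return growth_deficiency and all_three_facial_features and cns_damage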
The following four criteria must all be met for an FAS diagnosis:

- Growth deficiency — Prenatal or postnatal height or weight (or both) at or below the 10th percentile
- FAS facial features — All three FAS facial features present
- Central nervous system damage — Clinically significant structural, neurological, or functional impairment
- Prenatal alcohol exposure — Confirmed or unknown prenatal alcohol exposure

Alcohol intake is determined by interview of the biological mother or other family members knowledgeable of the mother's alcohol use during the pregnancy, prenatal health records, and review of available birth records, court records, chemical dependency treatment records, or other reliable sources. Exposure level is assessed as Confirmed Exposure, Unknown Exposure, or Confirmed Absence of Exposure by the IOM, CDC and Canadian diagnostic systems. The 4-DDC further distinguishes confirmed exposure as High Risk and Some Risk:

- High Risk — Confirmed use of alcohol during pregnancy known to be at high blood alcohol levels (100 mg/dL or greater) delivered at least weekly in early pregnancy.
- Some Risk — Confirmed use of alcohol during pregnancy with use less than High Risk or unknown usage patterns.
- Unknown Risk — Unknown use of alcohol during pregnancy.
- No Risk — Confirmed absence of prenatal alcohol exposure, which rules out an FAS diagnosis.

Amount, frequency, and timing of prenatal alcohol use can dramatically impact the other three key features of FAS. While consensus exists that alcohol is a teratogen, there is no clear consensus as to what level of exposure is toxic. The CDC guidelines are silent on these elements diagnostically. The IOM and Canadian guidelines explore this further, acknowledging the importance of significant alcohol exposure from regular or heavy episodic alcohol consumption in determining a diagnosis, but offer no standard. The Canadian guidelines discuss this lack of clarity and parenthetically point out that "heavy alcohol use" is defined by the National Institute on Alcohol Abuse and Alcoholism as five or more drinks per episode on five or more days during a 30-day period. The 4-DDC ranking system distinguishes between levels of prenatal alcohol exposure as High Risk and Some Risk. It operationalizes high-risk exposure as a blood alcohol concentration (BAC) greater than 100 mg/dL delivered at least weekly in early pregnancy. This BAC level is typically reached by a 55 kg (121 lb) woman drinking six to eight beers in one sitting.

For many adopted or adult patients and children in foster care, records or other reliable sources may not be available for review. Reporting alcohol use during pregnancy can also be stigmatizing to birth mothers, especially if alcohol use is ongoing. In these cases, all diagnostic systems use an unknown prenatal alcohol exposure designation. A diagnosis of FAS is still possible with an unknown exposure level if the other key features of FASD are present at clinical levels.

The CDC reviewed nine syndromes that have overlapping features with FAS; however, none of these syndromes include all three FAS facial features, and none are the result of prenatal alcohol exposure:

- Aarskog syndrome
- Williams syndrome
- Noonan syndrome
- Dubowitz syndrome
- Brachman-DeLange syndrome
- Toluene syndrome
- Fetal hydantoin syndrome
- Fetal valproate syndrome
- Maternal PKU fetal effects

Prevention

The only certain way to prevent FAS is simply to avoid drinking alcohol during pregnancy.
In the United States, the Surgeon General recommended in 1981, and again in 2005, that women abstain from alcohol use while pregnant or while planning a pregnancy, the latter to avoid damage even in the earliest stages (and weeks) of a pregnancy, when a woman may not yet be aware that she has conceived. In the United States, federal legislation has required warning labels on all alcoholic beverage containers since 1988 under the Alcoholic Beverage Labeling Act.

Treatment

There is no cure for FAS, because the CNS damage creates a permanent disability, but treatment is possible. Because CNS damage, symptoms, secondary disabilities, and needs vary widely by individual, there is no one treatment type that works for everyone.

Traditional behavioral interventions are predicated on learning theory, which is the basis for many parenting and professional strategies and interventions. Along with ordinary parenting styles, such strategies are frequently used by default for treating those with FAS, as diagnoses such as Oppositional Defiant Disorder (ODD), Conduct Disorder, and Reactive Attachment Disorder (RAD) often overlap with FAS (along with ADHD), and these are sometimes thought to benefit from behavioral interventions. Frequently, a patient's poor academic achievement results in special education services, which also utilize principles of learning theory, behavior modification, and outcome-based education. Because the "learning system" of a patient with FAS is damaged, however, behavioral interventions are not always successful, or not successful in the long run, especially because overlapping disorders frequently stem from or are exacerbated by FAS. Kohn (1999) suggests that a rewards-punishment system in general may work somewhat in the short term but is unsuccessful in the long term, because that approach fails to consider content (i.e., things "worth" learning), community (i.e., safe, cooperative learning environments), and choice (i.e., making choices versus following directions). While these elements are important to consider when working with FAS and have some usefulness in treatment, they are not alone sufficient to promote better outcomes. Kohn's minority challenge to behavioral interventions does illustrate the importance of factors beyond learning theory when trying to promote improved outcomes for FAS, and supports a more multi-modal approach, found in varying degrees in the advocacy model and the neurobehavioral approach.

Many books and handouts on FAS recommend a developmental approach based on developmental psychology, even though most do not specify it as such and provide little theoretical background. Optimal human development generally occurs in identifiable stages (e.g., Jean Piaget's theory of cognitive development, Erik Erikson's stages of psychosocial development, John Bowlby's attachment framework, and other developmental stage theories). FAS interferes with normal development, which may cause stages to be delayed, skipped, or immaturely developed. Over time, an unaffected child can negotiate the increasing demands of life by progressing through stages of development normally, but not so a child with FAS. By knowing what developmental stages and tasks children follow, treatment and interventions for FAS can be tailored to helping a patient meet developmental tasks and demands successfully.
If a patient is delayed in the adaptive behavior domain, for instance, then interventions would be recommended to target specific delays through additional education and practice (e.g., practiced instruction on tying shoelaces), reminders, or accommodations (e.g., using slip-on shoes) to support the desired functioning level. This approach is an advance over behavioral interventions because it takes the patient's developmental context into account when designing interventions.
The advocacy model takes the point of view that someone is needed to actively mediate between the environment and the person with FAS. Advocacy activities are conducted by an advocate (for example, a family member, friend, or case manager) and fall into three basic categories. An advocate for FAS: (1) interprets FAS and the disabilities that arise from it, explaining them to those in the patient's environment; (2) engenders change or accommodation on behalf of the patient; and (3) assists the patient in developing and reaching attainable goals. An understanding of the developmental framework would presumably inform and enhance the advocacy model, and advocacy also implies interventions at a systems level, such as educating schools, social workers, and so forth on best practices for FAS. Several organizations devoted to FAS also use the advocacy model at the community practice level.
The neurobehavioral approach focuses on the neurological underpinnings from which behaviors and cognitive processes arise. It is an integrative perspective that acknowledges and encourages a multi-modal array of treatment interventions drawn from all FAS treatment approaches. The neurobehavioral approach is a serious attempt to shift single-perspective treatment approaches into a new, coherent paradigm that addresses the complexities of problem behaviors and cognitions emanating from the CNS damage of FAS. The approach's main proponent is Diane Malbin, MSW, a recognized speaker and trainer in the FASD field, who first articulated the approach with respect to FASD and characterizes it as "trying differently rather than trying harder." Trying differently means trying different perspectives and intervention options based on the effects of the CNS damage and the particular needs of the patient, rather than trying harder at implementing behavioral interventions that have consistently failed to produce improved outcomes for a patient. This approach also encourages strength-based interventions, which allow a patient to develop positive outcomes by promoting success linked to the patient's strengths and interests.
Public health and policy
Treating FAS at the public health and public policy levels promotes FAS prevention and the diversion of public resources to assist those with FAS. It is related to the advocacy model but is promoted at a systems level (rather than with the individual or family), such as developing community education and supports, state- or province-level prevention efforts (e.g., screening for maternal alcohol use during OB/GYN or prenatal medical care visits), or national awareness programs. Several organizations and state agencies in the U.S. are dedicated to this type of intervention.
The primary disabilities of FAS are the functional difficulties with which the child is born as a result of CNS damage due to prenatal alcohol exposure.
Often, primary disabilities are mistaken for behavior problems, but the underlying CNS damage is the originating source of a functional difficulty, rather than a mental health condition, which is considered a secondary disability. The exact mechanisms behind the functional problems of primary disabilities are not always fully understood, but animal studies have begun to shed light on some correlates between functional problems and the brain structures damaged by prenatal alcohol exposure. Representative examples include:
- Learning impairments are associated with impaired dendrites of the hippocampus
- Impaired motor development and functioning are associated with reduced size of the cerebellum
- Hyperactivity is associated with decreased size of the corpus callosum
Functional difficulties may result from CNS damage in more than one domain. Common functional difficulties by domain include the following (not an exhaustive list):
- Achievement — Learning disabilities
- Adaptive behavior — Poor impulse control, poor personal boundaries, poor anger management, stubbornness, intrusive behavior, overfamiliarity with strangers, poor daily living skills, developmental delays
- Attention — Attention-Deficit/Hyperactivity Disorder (ADHD), poor attention or concentration, distractibility
- Cognition — Intellectual disability, confusion under pressure, poor abstract skills, difficulty distinguishing between fantasy and reality, slower cognitive processing
- Executive functioning — Poor judgment, information-processing disorder, difficulty perceiving patterns, poor cause-and-effect reasoning, inconsistency in linking words to actions, poor ability to generalize
- Language — Expressive or receptive language disorders, grasping parts but not whole concepts, lack of understanding of metaphor, idioms, or sarcasm
- Memory — Poor short-term memory, inconsistent memory and knowledge base
- Motor skills — Poor handwriting, poor fine motor skills, poor gross motor skills, delayed motor skill development (e.g., riding a bicycle at the appropriate age)
- Sensory processing and soft neurological problems — Sensory processing disorder, sensory defensiveness, undersensitivity to stimulation
- Social communication — Intruding into conversations, inability to read nonverbal or social cues, "chatty" but without substance
The secondary disabilities of FAS are those that arise later in life secondary to CNS damage. These disabilities often emerge over time due to a mismatch between the primary disabilities and environmental expectations; secondary disabilities can be ameliorated with early interventions and appropriate supportive services.
Six main secondary disabilities, along with two additional disabilities that apply to adults, were identified in a University of Washington research study of 473 subjects diagnosed with FAS, PFAS (partial fetal alcohol syndrome), and ARND (alcohol-related neurodevelopmental disorder):
- Mental health problems — Diagnosed with ADHD, clinical depression, or other mental illness; experienced by over 90% of the subjects
- Disrupted school experience — Suspended or expelled from school, or dropped out of school; experienced by 60% of the subjects (age 12 and older)
- Trouble with the law — Charged with or convicted of a crime; experienced by 60% of the subjects (age 12 and older)
- Confinement — Inpatient psychiatric care, inpatient chemical dependency care, or incarceration for a crime; experienced by about 50% of the subjects (age 12 and older)
- Inappropriate sexual behavior — Sexual advances, sexual touching, or promiscuity; experienced by about 50% of the subjects (age 12 and older)
- Alcohol and drug problems — Abuse or dependency; experienced by 35% of the subjects (age 12 and older)
- Dependent living — Group home, living with family or friends, or some sort of assisted living; experienced by 80% of the subjects (age 21 and older)
- Problems with employment — Required ongoing job training or coaching, could not keep a job, or unemployed; experienced by 80% of the subjects (age 21 and older)
Protective factors and strengths
The same study identified eight factors that protected against secondary disabilities:
- Living in a stable and nurturing home for over 73% of life
- Being diagnosed with FAS before age six
- Never having experienced violence
- Remaining in each living situation for at least 2.8 years
- Experiencing a "good quality home" (meeting 10 or more defined qualities) from age 8 to 12
- Having been found eligible for developmental disability (DD) services
- Having basic needs met for at least 13% of life
- Having a diagnosis of FAS (rather than another FASD condition)
Malbin (2002) has identified the following areas of interest and talent as strengths that often stand out for those with FASD and that should be utilized, like any strength, in treatment planning:
- Music, playing instruments, composing, singing, art, spelling, reading, computers, mechanics, woodworking, skilled vocations (welding, electrician, etc.), writing, poetry
- Participation in non-impact sport or physical fitness activities
Anecdotal accounts of prohibitions against maternal alcohol use from Biblical, ancient Greek, and ancient Roman sources imply a historical awareness of links between maternal alcohol use and negative child outcomes. In Gaelic Scotland, the mother and nurse were not allowed to consume ale during pregnancy and breastfeeding (Martin Martin). The earliest recorded observation of a possible link between maternal alcohol use and fetal damage was made in 1899 by Dr. William Sullivan, a Liverpool prison physician, who noted higher rates of stillbirth among 120 alcoholic female prisoners than among their sober female relatives; he suggested the causal agent to be alcohol use. This contradicted the predominating belief of the time, routinely endorsed by contemporary studies, that heredity caused intellectual disability, poverty, and criminal behavior. A case study by Henry H. Goddard of the Kallikak family — popular in the early 1900s — represents this earlier perspective, though later researchers have suggested that the Kallikaks almost certainly had FAS. General studies and discussions on alcoholism throughout the mid-1900s were typically based on a heredity argument.
Prior to fetal alcohol syndrome being specifically identified and named in 1973, a few studies had noted differences between the children of mothers who used alcohol during pregnancy or breast-feeding and those who did not, and had identified alcohol use, rather than heredity, as a possible contributing factor.
Recognition as a syndrome
Fetal Alcohol Syndrome was named in 1973 by two dysmorphologists, Drs. Kenneth Lyons Jones and David Weyhe Smith of the University of Washington Medical School in Seattle, United States. They identified a pattern of "craniofacial, limb, and cardiovascular defects associated with prenatal onset growth deficiency and developmental delay" in eight unrelated children of three ethnic groups, all born to mothers who were alcoholics. The pattern of malformations indicated that the damage was prenatal. News of the discovery shocked some, while others were skeptical of the findings.
Dr. Paul Lemoine of Nantes, France had already published a study in a French medical journal in 1968 about children with distinctive features whose mothers were alcoholics, and in the U.S., Christy Ulleland and colleagues at the University of Washington Medical School had conducted an 18-month study in 1968–1969 documenting the risk that maternal alcohol consumption posed to the offspring of 11 alcoholic mothers. The Washington and Nantes findings were confirmed by a research group in Gothenburg, Sweden in 1979. Researchers in France, Sweden, and the United States were struck by how similar these children looked, though they were not related, and by how they behaved in the same unfocused and hyperactive manner.
Within nine years of the Washington discovery, animal studies, including non-human monkey studies carried out at the University of Washington Primate Center by Dr. Sterling Clarren, had confirmed that alcohol was a teratogen. By 1978, 245 cases of FAS had been reported by medical researchers, and the syndrome began to be described as the most frequent known cause of intellectual disability.
While many syndromes are eponymous, i.e. named after the physician first reporting the association of symptoms, Dr. Smith named FAS after the causal agent of the symptoms. He reasoned that doing so would encourage prevention, believing that if people knew maternal alcohol consumption caused the syndrome, then abstinence during pregnancy would follow from patient education and public awareness. Nobody was aware of the full range of possible birth defects from FAS or its prevalence rate at that time, but admitting alcohol use during pregnancy can feel stigmatizing to birth mothers, which complicates diagnosis of a syndrome that carries its preventable cause in its name.
Over time, as subsequent research and clinical experience suggested that a range of effects (including physical, behavioral, and cognitive) could arise from prenatal alcohol exposure, the term Fetal Alcohol Spectrum Disorder (FASD) was developed to include FAS as well as other conditions resulting from prenatal alcohol exposure. Currently, FAS is the only expression of prenatal alcohol exposure defined by the International Statistical Classification of Diseases and Related Health Problems and assigned ICD-9 and ICD-10 diagnoses.
- Ulleland, C.N. (1972). The offspring of alcoholic mothers. Annals of the New York Academy of Sciences, 197, 167–169. PMID 4504588
- Lemoine, P., Harousseau, H., Borteyru, J.B., & Menuet, J.C. (1968). Les enfants de parents alcooliques. Anomalies observées, à propos de 127 cas. [Children of alcoholic parents: observed anomalies in 127 cases.] Ouest Médical, 21, 476–482. PMID 12657907
- Streissguth, A. (1997). Fetal Alcohol Syndrome: A Guide for Families and Communities. Baltimore: Brookes Publishing. ISBN 1-55766-283-5.
- Ethen, M.K., Ramadhani, T.A., Scheuerle, A.E., et al. (March 2008). "Alcohol consumption by women before and during pregnancy". Maternal and Child Health Journal, 13(2), 274–85. doi:10.1007/s10995-008-0328-2. PMID 18317893.
- Streissguth, A.P., Barr, H.M., Kogan, J., & Bookstein, F.L. (1996). Understanding the occurrence of secondary disabilities in clients with fetal alcohol syndrome (FAS) and fetal alcohol effects (FAE): Final report to the Centers for Disease Control and Prevention on Grant No. RO4/CCR008515 (Tech. Report No. 96-06). Seattle: University of Washington, Fetal Alcohol and Drug Unit.
- Guerri, C. (2002). Mechanisms involved in central nervous system dysfunctions induced by prenatal ethanol exposure. Neurotoxicity Research, 4(4), 327–335. PMID 12829422
- Abel, E.L., & Sokol, R.J. (1987). Incidence of fetal alcohol syndrome and economic impact of FAS-related anomalies. Drug and Alcohol Dependence, 19(1), 51–70. PMID 3545731
- Lancet. 1986 Nov 22;2(8517):1222. PMID 2877359
- Vaux, Keith K. "Fetal Alcohol Syndrome". Medscape Reference. Retrieved 28 March 2012.
- Sampson, et al. (1997). Teratology, 56(5), 317–326.
- Astley, S.J. (2004). Diagnostic Guide for Fetal Alcohol Spectrum Disorders: The 4-Digit Diagnostic Code. Seattle: University of Washington. PDF available at FAS Diagnostic and Prevention Network. Retrieved on 2007-04-11.
- Ratey, J.J. (2001). A User's Guide to the Brain: Perception, Attention, and the Four Theaters of the Brain. New York: Vintage Books. ISBN 0-375-70107-9.
- May, P.A., & Gossage, J.P. (2001). "Estimating the prevalence of fetal alcohol syndrome: A summary". Alcohol Research & Health, 25(3), 159–67. PMID 11810953.
- Bloss, G. (1994). "The economic cost of FAS". Alcohol Health & Research World, 18(1), 53–54.
- Havens, J.R., Simmons, L.A., Shannon, L.M., & Hansen, W.F. (September 2008). "Factors associated with substance use during pregnancy: Results from a national sample". Drug and Alcohol Dependence, 99(1–3), 89–95. doi:10.1016/j.drugalcdep.2008.07.010. PMID 18778900.
- Ebrahim, S.H., & Gfroerer, J. (February 2003). "Pregnancy-related substance use in the United States during 1996–1998". Obstetrics and Gynecology, 101(2), 374–9. doi:10.1016/S0029-7844(02)02588-7. PMID 12576263. Archived from the original on 2013-01-25.
- Advisory on Alcohol Use in Pregnancy. US Surgeon General and CDC. Press release (February 21, 2005). Retrieved on 2014-06-20.
- Can I drink alcohol if I'm pregnant? Retrieved on 2009-10-14.
- Institute of Medicine (IOM), Stratton, K.R., Howe, C.J., & Battaglia, F.C. (1996). Fetal Alcohol Syndrome: Diagnosis, Epidemiology, Prevention, and Treatment. Washington, DC: National Academy Press. ISBN 0-309-05292-0.
- "Australian Government National Health and Medical Research Council". Retrieved 4 November 2012.
- Clinical growth charts. National Center for Health Statistics. Retrieved on 2007-04-10.
- Fetal Alcohol Syndrome: Guidelines for Referral and Diagnosis (PDF). CDC (July 2004). Retrieved on 2007-04-11. Archived September 26, 2007 at the Wayback Machine.
- Chudley, A., Conry, J., Cook, J., et al. (2005). "Fetal alcohol spectrum disorder: Canadian guidelines for diagnosis". CMAJ, 172(5 Suppl), S1–S21. doi:10.1503/cmaj.1040302. PMC 557121. PMID 15738468. Retrieved 2007-04-10.
- Jones, K., & Smith, D. (1975). "The fetal alcohol syndrome".
Teratology, 12(1), 1–10. doi:10.1002/tera.1420120102. PMID 1162620.
- Renwick, J., & Asker, R. (1983). "Ethanol-sensitive times for the human conceptus". Early Human Development, 8(2), 99–111. doi:10.1016/0378-3782(83)90065-8. PMID 6884260.
- Astley, S.J., & Clarren, S.K. (1996). "A case definition and photographic screening tool for the facial phenotype of fetal alcohol syndrome". Journal of Pediatrics, 129(1), 33–41. PMID 8757560
- Astley, S.J., Stachowiak, J., Clarren, S.K., & Clausen, C. (2002). "Application of the fetal alcohol syndrome facial photographic screening tool in a foster care population". Journal of Pediatrics, 141(5), 712–717. PMID 12410204
- Lip-philtrum guides. FAS Diagnostic and Prevention Network, University of Washington. Retrieved on 2007-04-10.
- FAS facial features. FAS Diagnostic and Prevention Network, University of Washington. Retrieved on 2007-04-10.
- Astley, Susan. Backside of Lip-Philtrum Guides (2004) (PDF). University of Washington, Fetal Alcohol Syndrome Diagnostic and Prevention Network. Retrieved on 2007-04-11.
- Dr. Sterling Clarren's keynote address to the Yukon 2002 Prairie Northern Conference on Fetal Alcohol Syndrome. Retrieved on 2007-04-10.
- West, J.R. (Ed.) (1986). Alcohol and Brain Development. New York: Oxford University Press.
- Clarren, S., Alvord, E., Sumi, S., Streissguth, A., & Smith, D. (1978). "Brain malformations related to prenatal exposure to ethanol". Journal of Pediatrics, 92(1), 64–7. doi:10.1016/S0022-3476(78)80072-9. PMID 619080.
- Coles, C., Brown, R., Smith, I., Platzman, K., Erickson, S., & Falek, A. (1991). "Effects of prenatal alcohol exposure at school age. I. Physical and cognitive development". Neurotoxicology and Teratology, 13(4), 357–67. doi:10.1016/0892-0362(91)90084-A. PMID 1921915.
- Jones, K.L., & Smith, D.W. (1973). Recognition of the fetal alcohol syndrome in early infancy. Lancet, 2, 999–1001. PMID 4127281
- Mattson, S.N., & Riley, E.P. (2002). "Neurobehavioral and neuroanatomical effects of heavy prenatal exposure to alcohol", in Streissguth and Kanter (2002), p. 10.
- Strömland, K., & Pinazo-Durán, M. (2002). "Ophthalmic involvement in the fetal alcohol syndrome: clinical and animal model studies". Alcohol and Alcoholism, 37(1), 2–8. doi:10.1093/alcalc/37.1.2. PMID 11825849.
- Guerri, C., Riley, E., & Strömland, K. (July–August 1999). "Commentary on the recommendations of the Royal College of Obstetricians and Gynaecologists concerning alcohol consumption in pregnancy". Alcohol and Alcoholism, 34(4), 497–501. doi:10.1093/alcalc/34.4.497. PMID 10456576.
- Polygenis, D. (1998). "Moderate alcohol consumption during pregnancy and the incidence of fetal malformations: a meta-analysis". Neurotoxicology and Teratology, 20, 61–67. doi:10.1016/s0892-0362(97)00073-1. PMID 9511170.
- Kelly, Y., Sacker, A., Gray, R., Kelly, J., Wolke, D., & Quigley, M.A. (February 2009). "Light drinking in pregnancy, a risk for behavioural problems and cognitive deficits at 3 years of age?". International Journal of Epidemiology, 38(1), 129–40. doi:10.1093/ije/dyn230. PMID 18974425.
- Day, N.L. (1992). "The effects of prenatal exposure to alcohol". Alcohol Health and Research World, 16(2), 328–244.
- Streissguth, A.P., et al. (1994). "Prenatal alcohol and offspring development: the first fourteen years". Drug and Alcohol Dependence, 36(2), 89–99. doi:10.1016/0376-8716(94)90090-6. PMID 7851285
- Forrest, F., & du Florey, C. (1991). Reported social alcohol consumption during pregnancy and infants' development at 18 months. British Medical Journal, 303, 22–26.
- du Florey, D., et al.
A European concerted action: maternal alcohol consumption and its relation to the outcome of pregnancy and development at 18 months. International Journal of Epidemiology, 1992, 21 (Supplement No. 1).
- Goldschmidt, L., Richardson, G.A., Stoffer, D.S., Geva, D., & Day, N.L. (1996). "Prenatal alcohol exposure and academic achievement at age six: A nonlinear fit". Alcoholism, Clinical and Experimental Research, 20(4), 763–70. doi:10.1111/j.1530-0277.1996.tb01684.x. PMID 8800397.
- Warren, K., & Li, T.-K. (2005). Birth Defects Research Part A, 73, 195–203.
- Brien, J., et al. (1983). American Journal of Obstetrics and Gynecology, 146, 181–186.
- Nava-Ocampo, A., et al. (2004). Reproductive Toxicology, 18, 613–617.
- U.S. Department of Health and Human Services, National Institute on Alcohol Abuse and Alcoholism. (2000). Tenth special report to the U.S. Congress on alcohol and health: Highlights from current research. Washington, DC: The Institute.
- Buxton, B. (2005). Damaged Angels: An Adoptive Mother Discovers the Tragic Toll of Alcohol in Pregnancy. New York: Carroll & Graf. ISBN 0-7867-1550-2.
- Malbin, D. (2002). Fetal Alcohol Spectrum Disorders: Trying Differently Rather Than Harder. Portland, OR: FASCETS, Inc. ISBN 0-9729532-0-5.
- Kohn, A. (1999). Punished by Rewards: The Trouble with Gold Stars, Incentive Plans, A's, Praise, and Other Bribes. Boston: Houghton Mifflin. ISBN 0-618-00181-6.
- McCreight, B. (1997). Recognizing and Managing Children with Fetal Alcohol Syndrome/Fetal Alcohol Effects: A Guidebook. Washington, DC: CWLA. ISBN 0-87868-607-X.
- National Organization on Fetal Alcohol Syndrome, Minnesota Organization on Fetal Alcohol Syndrome. Retrieved on 2007-04-11.
- Understanding FASD (Fetal Alcohol Spectrum Disorders). Fetal Alcohol Syndrome Consultation, Education and Training Services, Inc. Retrieved on 2007-04-11.
- Malbin, D. (1993). Fetal Alcohol Syndrome, Fetal Alcohol Effects: Strategies for Professionals. Center City, MN: Hazelden. ISBN 0-89486-951-5.
- Abel, E.L., Jacobson, S., & Sherwin, B.T. (1983). "In utero alcohol exposure: Functional and structural brain damage". Neurobehavioral Toxicology and Teratology, 5, 363–366. PMID 6877477
- Meyer, L., Kotch, L., & Riley, E. (1990). "Neonatal ethanol exposure: functional alterations associated with cerebellar growth retardation". Neurotoxicology and Teratology, 12(1), 15–22. doi:10.1016/0892-0362(90)90107-N. PMID 2314357.
- Zimmerberg, B., & Mickus, L.A. (1990). "Sex differences in corpus callosum: Influence of prenatal alcohol exposure and maternal undernutrition". Brain Research, 537, 115–122. PMID 2085766
- Sullivan, W.C. (1899). A note on the influence of maternal inebriety on the offspring. Journal of Mental Science, 45, 489–503.
- Goddard, H.H. (1912). The Kallikak Family: A Study in the Heredity of Feeble-Mindedness. New York: Macmillan.
- Karp, R.J., Qazi, Q.H., Moller, K.A., Angelo, W.A., & Davis, J.M. (1995). Fetal alcohol syndrome at the turn of the century: An unexpected explanation of the Kallikak family. Archives of Pediatrics and Adolescent Medicine, 149(1), 45–48. PMID 7827659
- Haggard, H.W., & Jellinek, E.M. (1942). Alcohol Explored. New York: Doubleday.
- Jones, K.L., Smith, D.W., Ulleland, C.N., & Streissguth, A.P. (1973). Pattern of malformation in offspring of chronic alcoholic mothers. Lancet, 1, 1267–1271. PMID 4126070
- Streissguth, A.P. (2002). In A. Streissguth & J. Kanter (Eds.), The Challenge in Fetal Alcohol Syndrome: Overcoming Secondary Disabilities. Seattle: University of Washington Press. ISBN 0-295-97650-0.
- Olegard, R., Sabel, K.G., Aronsson, M.,
Sandin, B., Johannsson, P.R., Carlsson, C., Kyllerman, M., Iversen, K., & Hrbek, A. (1979). Effects on the child of alcohol abuse during pregnancy. Acta Paediatrica Scandinavica, 275, 112–121. PMID 291283
- Clarren, S.K. (2005). A thirty year journey from tragedy to hope. Foreword to Buxton, B. (2005), Damaged Angels: An Adoptive Mother Discovers the Tragic Toll of Alcohol in Pregnancy. New York: Carroll & Graf. ISBN 0-7867-1550-2.
- Clarren, S.K., & Smith, D.W. (1978). Fetal alcohol syndrome. New England Journal of Medicine, 298, 1063–1067. PMID 347295
- van Faassen, E., & Niemelä, O. (2011). Biochemistry of Prenatal Alcohol Exposure. New York: NOVA Biomedical Books. ISBN 978-1-61122-511-2. A monograph with a global overview of the recent scientific literature, in which the various mechanisms and biochemical pathways of FAS and FASD are discussed and compared.
- Hoffmann, J. (Ed.) (2011). Pregnancy and Alcohol Consumption. New York: NOVA Science Publishers. ISBN 978-1-61761-122-3. A collection covering many different aspects of the effects of parental alcohol consumption on fertility and fetal health.
- CDC's National Center on Birth Defects and Developmental Disabilities
- Canadian FASD resource — Motherisk
Washington, Aug. 6 (ANI): A new study has predicted that global warming from fossil fuel burning could be more intense and longer-lasting than previously thought. This prediction emerges from a new study by Richard Zeebe at the University of Hawaii, who uses insights from episodes of climate change in the geologic past to inform projections of future man-made climate change.
Humans keep adding large amounts of greenhouse gases to the atmosphere, among them carbon dioxide (CO2), the most important man-made greenhouse gas. Over the past 250 years, human activities such as fossil fuel burning have raised the atmospheric CO2 concentration by more than 40% over its preindustrial level of 280 ppm (parts per million). In May 2013, the CO2 concentration in Earth's atmosphere surpassed a milestone of 400 ppm for the first time in human history, a level that many scientists consider dangerous for its impact on Earth's climate. The globe is likely to become warmer in the near future, and probably a lot warmer in the distant future.
The study suggests that amplified and prolonged warming due to unabated fossil fuel burning raises the probability that large ice sheets will melt, leading to significant sea level rise. Zeebe used past climate episodes as analogs for the future; these suggest that so-called slow climate 'feedbacks' can boost climate sensitivity and amplify warming. (ANI)
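The "more than 40%" figure quoted above follows directly from the two concentrations the article gives; a minimal Python sketch of the arithmetic, using the article's round numbers:

```python
# Check the percentage increase implied by the concentrations quoted above:
# a rise from the preindustrial 280 ppm to the ~400 ppm reached in May 2013.
preindustrial_ppm = 280
may_2013_ppm = 400

increase = (may_2013_ppm - preindustrial_ppm) / preindustrial_ppm
print(f"{increase:.1%}")  # 42.9% -- consistent with "more than 40%"
```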
In 1981, eighty years after Karl Landsteiner phlebotomized his own blood to prove the existence of blood types, another self-experimenting physician, Dr. Jack Goldstein, advanced the field of blood typing. In doing so, he managed to expand the pool of available donors for people with type O blood in need of blood transfusions. This was an important moment in the field: although people with type O blood could give blood to anyone, they could only receive type O blood themselves. Goldstein discovered that an enzyme found in coffee, alpha-galactosidase, could render the antigens in type B blood harmless. This chemical reaction effectively transformed type B blood into something resembling type O blood; if such treated blood could be safely transfused into type O recipients, the pool of donors available to them would effectively expand to include type B donors as well. Since Goldstein had type O blood, he underwent a transfusion of type B red blood cells that had been treated with the enzyme, effectively rendering them type O. Having received the transfusion without an adverse reaction, Goldstein showed that the technique worked [source: Altman].
© Jim Moulton
By solving a logic puzzle, students will learn about the speciation of the Galápagos mockingbirds, a group of birds that heavily influenced Darwin's grand idea of evolution through natural selection. With all of the necessary evidence provided, your students can practice reading and constructing a branching diagram showing the evolutionary relationships among this group of birds.
By solving a logic puzzle, students will:
- be introduced to the family of mockingbirds specific to the Galápagos Islands.
- practice reading and constructing branching diagrams showing evolutionary relationships.
- learn that speciation is the process of forming new species.
Materials:
- Mapping Mockingbirds Student Worksheet (one 11 x 17 sheet per student pair)
- Mapping Mockingbirds Clues (one 8 x 11 sheet per student pair)
- Teacher Answer Sheet
- colored pencils or markers (4 colors per student pair)
- picture of the northern mockingbird
Using a tree from your textbook or another source (a simplified tree of the vertebrates is a classic example), pre-teach vocabulary related to tree structure: lineage, node, descendant, ancestor, diverge, species. It may also be necessary to introduce or review the difference between scientific and common names.
- Print out all necessary materials. For younger grades, or to make for a shorter activity, fill in a few answers before making enough copies for student pairs.
- If you'd like to visit the California Academy of Sciences to see specimens of the mockingbirds Darwin studied, visit before May 2015! Apply today.
Tell your students that when they visit the California Academy of Sciences, they will see an exhibit describing a famous group of tropical islands, the Galápagos. The patterns of life present on these islands have provided evidence for the formation of new species. Scientists study the organisms present today to create family trees showing how they are related to one another. Today, students will attempt to solve a puzzle about a group of famous birds called the Galápagos mockingbirds. Has anyone ever seen a mockingbird at their home, in school, or at the park?
- Show a picture of California's resident species, the northern mockingbird (Mimus polyglottos). This bird is famous for singing all day and imitating other noises, such as another bird's song or a car alarm. You can recognize it by the white patches that show on its wings in flight.
- Tell students that a few million years ago, an ancestor of this bird – which looked similar and lived a similar lifestyle – landed on the Galápagos Islands. Tell a brief story about how this ancestor diversified into many species, and pass out the worksheet.
- Before passing out the clues, review how to read and construct a family tree, ensuring students understand the symbols in the legend.
- Explain how the puzzle works, pass out clues and markers, and let students work in pairs.
- Review the answers as a class, working through the logic together.
- Keeping answers open-ended, discuss with the students:
» What do you notice about their geographical distribution?
» What can you conclude about their evolutionary relationships?
- Review the branching structure of the family tree to learn how lineages are related.
» In what order did the species branch off from an ancestor? Follow the nodes from left to right. Each signifies a divergence.
- Explain that the wind pattern flows from the southeast to the northwest. Notice the compass. Help students draw appropriate arrows to indicate the wind direction on their maps.
» How is the family tree consistent with this wind pattern? Bird species on the southeast islands are older and more closely related to each other. The wind caused dispersal to the islands in the northwest, resulting in the speciation of two new mockingbirds.
- Suppose you were leading a boat tour around the islands, and part of your job was to act as a field guide for the passengers.
» How would you learn to identify the four types of mockingbirds, so that you could correctly point them out to guests? Depending on which island you disembarked on, you would immediately know which mockingbird is likely to cross your path, since their ranges never overlap. Or, you could remember which physical characteristics distinguish each bird, such as facial markings.
For high school students, consider discussing the different kinds of evidence scientists use to construct cladograms (comparative anatomy, chromosomal DNA, mitochondrial DNA, etc.).
To receive a copy of the scientific article on which this lesson was based, send a request to [email protected].
If you plan to visit the Academy before May 2015, your students will see the very birds that helped scientists make this tree (visiting researchers used the DNA of birds in our collection). Because the bird specimens are treasured for their research value – and because they are over 100 years old! – the scientists have hidden them in the exhibit for their protection. Many guests might not realize how special they are. Express to students that these mockingbirds are as valuable to ornithologists (bird scientists) as the Mona Lisa is to art historians.
- The mockingbird display faces the outer windows, behind the finches, in the Islands of Evolution exhibit. Look for the yellow lid with a handle at waist height. By keeping the birds covered, we minimize the damage that would occur from light hitting their skin and feathers every day.
- Allow student groups to take turns viewing the four precious mockingbirds as the rest of the class explores the interactive exhibits in the hall. Consider stationing a chaperone there to ask a mystery question or two.
» Do you recognize the scientific names on the specimen tags? They match the four mockingbirds you studied!
» When were the birds collected? 1906, as shown on their tags.
» What part of the specimens seems to be missing? The eyeballs, since they don't preserve well. A scientist has replaced them with a ball of cotton.
» Name one thing you observed about the specimens. Answers will vary: feather color, bill shape, feet shape, size.
» If another guest comes by to look at the specimens, be the expert! Encourage students to teach visitors about the birds.
» What do mockingbirds eat? They are omnivorous, eating animal foods such as eggs, lizards, and insects, as well as plant foods such as fruit.
» What famous scientist wrote a book that discussed the mockingbirds? Charles Darwin, who wrote a book about natural selection, describing how he thought evolution worked.
ancestor: an earlier organism from which others are derived; a relative from the past
descendant: an organism that derives or descends from an earlier form; future offspring
diverge: to branch off in two directions
endemic: naturally occurring in a certain geographic area, and not found anywhere else
lineage: a continuous line of descent from a particular ancestor
node: the point where a single lineage diverges, or branches off, into two distinct lineages
species: a group of organisms that resemble one another and can produce viable offspring
speciation: the evolutionary formation of new biological species by the branching of one species into two or more distinct ones
The Galápagos Islands are an archipelago consisting of sixteen volcanic islands located 600 miles west of Ecuador in the Pacific Ocean. They formed about 4 million years ago when a series of underwater volcanoes erupted, spewing up magma that cooled to form the cone-shaped islands. When the islands first formed they were devoid of life, but over time animal and plant species colonized them, producing the ecological communities that exist there today. If you traveled to the islands today, you would certainly hear about the Galápagos mockingbirds, renowned for influencing Charles Darwin's conception of the theory of natural selection. There are four species in the Galápagos mockingbird genus (Nesomimus). All are endemic to the Galápagos Islands, meaning they are native to these islands and found nowhere else in the world.
Image: San Cristóbal mockingbird
Whereas natural selection is one mechanism that works to change the characteristics of a certain lineage over time, speciation is the change that results in a lineage actually diversifying into two or more distinct lines. With speciation, changes have occurred to such an extent that the populations are no longer interbreeding, and we can distinguish these organisms as species distinct from each other. Speciation requires genetic variation, just as natural selection does, but often occurs because populations become isolated from one another. Populations may be isolated ecologically, such as those that occupy different niches, competing for different resources at different times in different places. A bird that eats seeds on the ground during the day is not likely to cross paths with one that forages for insects in the treetops at dusk, so they will seldom meet to swap genes. Populations may also be isolated geographically, via separation by landforms, vegetation, or bodies of water. In an archipelago, meeting individuals on your own island to reproduce is less risky than flying between islands each mating season.
Current scientific studies suggest that a single ancestor in the broader mockingbird family (Mimus) colonized the islands several million years ago. Over many generations, groups of mockingbirds either flew or were dispersed by wind to other islands. Subjected to new environments and competing with others for particular resources, the immigrants adapted via natural selection to their new homes. Eventually, the distance between mockingbird populations helped solidify the speciation process. It has been suggested that the prevailing winds in the region, which blow from the southeast to the northwest, influenced mockingbird diversification by dispersing birds to the northwest. Evidence from the branching diagram supports this argument: the birds on the southeast islands are older than those currently inhabiting the northwest areas, as shown by a node positioned farther back in time.
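For teachers comfortable with a little programming, the branching diagrams in this lesson can also be modeled in code. The sketch below is a minimal Python representation of a two-way branching tree; the species names and topology are deliberately generic placeholders (so as not to give away the puzzle), not the actual mockingbird answer.

```python
# A minimal sketch of a branching diagram (evolutionary tree). Each internal
# node is a divergence; each leaf is a living species. The topology below is
# illustrative only.

class Node:
    """A node in a branching diagram: a living species (leaf) or a
    divergence point (internal node with two descendants)."""
    def __init__(self, name, left=None, right=None):
        self.name = name
        self.left = left
        self.right = right

    def is_leaf(self):
        return self.left is None and self.right is None

def list_species(node):
    """Collect the leaf names, i.e. the living species on the tree."""
    if node.is_leaf():
        return [node.name]
    return list_species(node.left) + list_species(node.right)

def print_tree(node, depth=0):
    """Print the tree sideways; each indentation level is one divergence."""
    print("  " * depth + node.name)
    if not node.is_leaf():
        print_tree(node.left, depth + 1)
        print_tree(node.right, depth + 1)

# Hypothetical topology: species A branches off first, then B, then C and D
# split last (so C and D are each other's closest relatives).
tree = Node("ancestor",
            Node("species A"),
            Node("divergence 2",
                 Node("species B"),
                 Node("divergence 3",
                      Node("species C"),
                      Node("species D"))))

print(list_species(tree))  # ['species A', 'species B', 'species C', 'species D']
print_tree(tree)
```

Reading the nodes from the root outward reproduces the "order of branching" exercise in the lesson: the deeper a divergence sits, the more recently the two descendant lineages split.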
To brush up on how to read an evolutionary tree, visit the Understanding Evolution website.
- 3d. Students know how to construct a simple branching diagram to classify living groups of organisms by shared derived characteristics and how to expand the diagram to include fossil organisms.
Investigation and Experimentation
- 7d. Construct scale models, maps, and appropriately labeled diagrams to communicate scientific knowledge (e.g., motion of Earth's plates and cell structure).
- 1.1 Analyze problems by identifying relationships, distinguishing relevant from irrelevant information, identifying missing information, sequencing and prioritizing information, and observing patterns.
- 2.4 Make and test conjectures by using both inductive and deductive reasoning.
- 2.5 Use a variety of methods, such as words, numbers, symbols, charts, graphs, tables, diagrams, and models, to explain mathematical reasoning.
Grades Nine and Ten
- 8d. Students know reproductive or geographic isolation affects speciation.
Investigation and Experimentation
- 1d. Formulate explanations by using logic and evidence.
References:
- Arbogast, B.S., Drovetski, S.V., Curry, R.L., Boag, P.T., Seutin, G., Grant, P.R., et al. (2006). The Origin and Diversification of Galapagos Mockingbirds. Evolution, 60(2), 370–382.
- The University of California Museum of Paleontology, Berkeley. (2008). Reading Trees: a quick overview. Understanding Evolution. Berkeley, California. Retrieved April 16, 2008, from http://evolution.berkeley.edu/evolibrary/article/phylogenetics_02.
J. Harlen Bretz published an article in the Journal of Geology in 1923 that described a catastrophic flood that swept across eastern Washington state near the end of the Ice Age. He claimed that it eroded massive channels through solid rock and flooded the Columbia River Gorge to nearly 400 feet in depth as far downstream as Portland, Oregon. Bretz theorized that a lobe of ice from the ice sheet in Canada blocked the Clark Fork River in western Montana, creating a lake that extended as far upstream as Missoula, and that walls of water were released across eastern Washington when the ice dam was breached. Most geologists rejected this theory until late in Bretz's life; he died in 1981. His observations were ignored for so many years because the floods in eastern Washington required ice lobes to block the valleys, sudden releases of the ice dams, and massive flows of water hundreds of feet deep over thousands of square miles. These events smacked of similar stories found in the Bible. However, the evidence of rapid, catastrophic erosion carved in the rocks eventually overwhelmed the ridicule of the conventional geological community and is widely accepted today. The Lake Missoula Flood is only one of many events that have led to the development of a rapid, high-energy explanation for geological processes called neo-catastrophism.
Rapid Ice Age processes similar to those associated with the ice lobe that caused the Lake Missoula Flood have also begun to be recognized in the formation, movement, and melting of ice sheets in the upper Midwest. During the Ice Age, large ice lobes surged from the Laurentide ice sheet in central Canada southward into the Dakotas, Minnesota, and Iowa. If the Ice Age was a relatively short event of only a few thousand years, as implied by biblical constraints, then these ice lobes must have moved rapidly. Yet, until recently it has been commonly assumed that ice moves relatively slowly.
Mark Horstemeyer and Philip Gullet reported at the 5th International Conference on Creationism (ICC) on their one-dimensional finite element simulations of ice sheets.1 They studied the rate at which steep edges could move and deform under heavy accumulations of snow and found that rapid movement of ice and multiple surges were plausible during a short Ice Age on the order of 500 years. Jesse Sherburn and associates reported at the 6th ICC in the summer of 2008 even more detailed simulations, in three dimensions, specifically for the Des Moines ice lobe in Iowa.2 They considered deformable till under the ice lobe; various porosity and crack levels; and various temperatures, slopes, and load angles. They agreed with previous simulations that surging could reach peak velocities of approximately 6.5 km/year and that the movement of ice lobes could in fact fit within a biblical time frame.
Even the conventional glaciology and paleoclimatology communities have come to believe that ice sheets several thousand feet thick in Canada may have melted in just a few hundred years. A major event during the deglaciation of the ice sheets, called the Younger Dryas, is now thought to have occurred in as little as a few decades. So fewer and fewer pieces of evidence seem to justify hundreds of thousands of years for the Ice Age.
Several of the articles presented at the ICC in the summer of 2008, including the one on the Des Moines ice lobe, may be found on the ICR website at www.icr.org/research.
- Horstemeyer, M. and P. Gullet. 2003. Will Mechanics Allow a Rapid Ice Age Following the Flood?
Paper presented at the Fifth International Conference on Creationism, August 4-8, in Pittsburgh, PA.
- Sherburn, J. A., M. F. Horstemeyer and K. Solanki. 2008. Simulation Analysis of Glacial Surging in the Des Moines Ice Lobe. Paper presented at the Sixth International Conference on Creationism, August 3-7, in Pittsburgh, PA.
Image: Painting of Glacial Lake Missoula by Byron Pickering, www.pickeringstudio.com. Used by permission.
* Dr. Vardiman is Chair of the Department of Astro/Geophysics.
Cite this article: Vardiman, L. 2008. Rapid Surging of Glacial Ice Lobes. Acts & Facts. 37 (12): 6.
Sigmund Freud completely revolutionised how the Western world thinks of the mind and human behaviour, and he was the first European to investigate the concept of the unconscious. Because he used and developed techniques such as dream interpretation and free association, Freud is rightly called the founding father of psychoanalysis, a term he first used in 1896. This therapy is still widely used today.
From 1882, Freud worked in psychiatric medicine. Over the course of his life, he investigated and documented our actions in childhood as a possible explanation for our behaviour in our adult lives. He has been criticised for being unscientific: the majority of his concepts have not stood up to the scientific rigours of the laboratory. Further criticism has arisen from suggestions that his work is fundamentally sexist or simply wrong. Indeed, disagreements began from the moment Freud surrounded himself with collaborators. Few figures have inspired such sustained controversy and intense debate.
But we cannot deny the influence Freud has had upon thinking in the 20th and 21st centuries. It has spread throughout Western culture and into the international creative arts. His thoughts can be observed in art, literature, cinema and the stage. Notions of identity, memory, childhood, sexuality, and of meaning have been shaped in relation to, and often in opposition to, Freud's work. No doubt this influence will continue into the future.
Freud's concept of the mind
Freud's primary interest was in understanding how influential the mind may be in shaping our personalities and behaviours. His fundamental belief was that the mind is the most powerful influence on an individual's actions. Although this could not be studied in an objective and scientific way, he propounded the concept that the mind has three components:
- The conscious: the part of the mind that deals with our everyday actions in the present moment.
- The pre-conscious: the part of the mind that stores easily accessible memories and past events.
- The unconscious: the part of the mind that stores all our experiences, especially those of a traumatic or unpleasant nature.
Freud believed that it is the unconscious that exerts the most influence upon our behaviour. Moreover, he maintained that all the answers to our behaviour and actions lie in this hidden, inaccessible area, which he held makes up four fifths of the mind.
The American Society of Safety Engineers' Dictionary of Terms Used in the Safety Profession defines safety as "A general term denoting an acceptable level of risk of, relative freedom from, and a low probability of harm." A hazard is defined as "a potential condition or set of conditions, either internal and/or external to a system, product, facility or operation which, when activated by a stimulus, transforms the hazard into a real condition, or series of events, culminating in a loss — an accident." A simpler and more fundamental definition of a hazard is a potential to do harm.
In addition, hazards are classified into various levels according to:
- the severity of the accident in which the hazard would result; and
- the probability or estimated certainty with which the hazard will lead to an accident.
The most dangerous type of hazard is one which, if an accident occurred, would cause death or severe injury to a person or seriously damage a system. A minimal code sketch of this severity-and-probability classification follows the project list below.
Project Illustrations Include:
- Conducted OSHA Standards surveys/inspections at National Aeronautics and Space Administration (NASA) field installations
- Researched and developed injury and illness prevention programs and safety programs for companies in construction, transportation, warehousing, and the oil exploration and production industries
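Here is that sketch: a toy Python encoding of the severity/probability ranking idea described above. The category names, numeric weights, and cutoffs are illustrative assumptions loosely patterned on common hazard-analysis risk matrices (e.g., MIL-STD-882 style); they are not definitions taken from the ASSE dictionary.

```python
# A minimal sketch of hazard classification by severity and probability.
# All names and thresholds here are illustrative assumptions.

SEVERITY = {"catastrophic": 4, "critical": 3, "marginal": 2, "negligible": 1}
PROBABILITY = {"frequent": 5, "probable": 4, "occasional": 3,
               "remote": 2, "improbable": 1}

def risk_level(severity: str, probability: str) -> str:
    """Combine severity and probability into a qualitative risk level."""
    score = SEVERITY[severity] * PROBABILITY[probability]
    if score >= 15:
        return "high"      # e.g., a frequent, catastrophic hazard
    if score >= 8:
        return "serious"
    if score >= 4:
        return "medium"
    return "low"

# The most dangerous hazards combine severe outcomes with high likelihood:
print(risk_level("catastrophic", "frequent"))   # high
print(risk_level("negligible", "improbable"))   # low
```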
Sixth Grade Language Arts Standards (Prior)
6th Grade Language Arts Skills
Prior Standards Implementation
The standards listed below have been replaced by a newer set of standards.
Reading - Links to prior reading skills standards (e.g. using root words and common text features to make meaning of text; identifying patterns of rhyme and rhythm).
Comprehension - Links to prior reading standards (e.g. determining fact or fiction, predicting future events from a passage, drawing inferences from text).
Writing - Links to prior writing skills standards (e.g. selecting appropriate titles, identifying the purpose and audience of a text, rearranging sentences to make a coherent paragraph).
Elements of Language - Links to prior language skills standards (e.g. identifying correct use of nouns, verbs, adverbs, adjectives).
Great Sites for all Ages:
Literature Learning Ladders - This site encourages active reading through book-technology connections by exploring online resources related to literacy, themes, literature circles, technology, and learning. Learning Ladders features WebQuests, Newberrys and the Net, Caldecott Connections e-books, graphic novels and popular children's literature.
Guide to Grammar and Writing - Writing resources for words, sentences, paragraphs, essays and research papers. Also contains "Ask Grammar", quizzes, and PowerPoints to help explain the rules of grammar.
Internet4classrooms is a collaborative effort by Susan Brooks and Bill Byles.
The transfer of spin from one charge to the next along a wire, which constitutes current, also causes the charges in the wire to acquire spin in the plane perpendicular to the wire, resulting in a magnetic field. This spin, and the resulting magnetic fields of the charges, has the same orientation for all the charges along the wire because these fields can be thought of as "stacked," somewhat like a stack of records on a turntable or a set of clutch plates, so that if one of the fields rotates, the rest of the fields in the stack are dragged into rotation with the same orientation. Current in the wire causes spin in the perpendicular magnetic planes of the charges because the charges have a somewhat staggered arrangement with respect to each other along the wire. They assume such a staggered arrangement because it allows charges with like spins, which repel, to distance themselves from each other to a greater extent. This staggered arrangement of charges along the wire allows spin in their magnetic planes to be created by the "bevel and miter" mechanism described above.
Identifying and Reducing Tropical Fish Aggression
Aggression is often the first social problem new aquarists encounter. Weaker fish will be stressed, damaged, and more prone to disease.
Neale Monks, Ph.D.
Why fish are aggressive
Social fish may not hold territories, but they do jockey for position in the group, and a certain amount of chasing and aggressive display is normal. But if the group is sufficiently large and contains a good balance of males and females, schooling fish shouldn't cause one another serious harm.
Cichlids are the classic examples of territorial fish. Territories may be used to control access to food, to impress females, or as safe places to rear young. Problems occur when there isn't enough space for both territory holders and the other fish in the community. Unable to leave the aggressive fish's territory, the other fish end up being harassed, damaged, even killed.
Identifying aggressive behavior
The aquarist must recognize early signs of aggression so problems can be nipped in the bud. Typically the aggressor opens with a display, such as flaring his fins or gill covers. Threat displays are meant to give the interloper a moment to decide whether to stay and fight or swim away to safety. But in a freshwater aquarium, space may be so limited that the interloper cannot leave the aggressor's territory. The aggressor responds to this as if challenged, and begins chasing or attacking the interloper.
If the aggressor is the stronger tropical fish, the interloper will often end up hiding at the top of the fish aquarium or behind the filter, as far away from the aggressor as it can get, breathing heavily, and displaying muted or dark colors (often resembling those of juvenile or female fish). If the aquarist doesn't fix things, the weaker fish is likely to end up damaged. Torn fins are common, as is damage to the mouth and eyes. Secondary infections can quickly set in, so it's a good idea to preemptively treat with an antibiotic to avoid problems such as fin rot and popeye. Stress can also weaken the fish's immune system and cause it to stop eating fish food.
Aggression problems within schools can be fixed by adding further specimens. In the case of tiger barbs, for example, fin-nipping amongst themselves and toward other tropical fish in the freshwater aquarium is most common when they're kept in groups smaller than 10. Sex ratios are important too, and among species like mollies, where aggression between males is common, it's important to have at least twice as many females as males. In extreme cases, if a schooling species isn't kept in adequate numbers, the dominant fish ends up harassing or killing all its companions; piranhas are notorious for this, but it can happen with tropical fish such as discus fish and Chromis too.
Introducing new fish
Schooling fish are best introduced as a group, or as a succession of reasonably large groups of similar-sized individuals, when new fish aquariums are being stocked. So an aquarist might add half a dozen tetras immediately after the fish aquarium has been cycled, and then another half dozen of the same species a couple of weeks later. Territorial fish are best added to communities last of all.
In fish aquariums where the tropical fish are all territorial, but some are more aggressive than others, the least aggressive should be added first, the more aggressive species a few weeks later, and the most aggressive a few weeks after them. This gives each batch of fish a chance to claim a territory.
Rearing fish together
Aquarists wanting a mated pair of aggressive tropical fish like cichlids may find it difficult to introduce two sexually mature adults to one another. The best approach with such fish is to rear a group of them together (typically six or more) and then let them pair off naturally. It should be obvious which fish are a pair, and the rest can be removed and housed elsewhere.
Removing the aggressor
Giving an aggressive tropical fish a 'time out' can sometimes work, especially if the rockwork or aquatic plants in the fish aquarium are rearranged. The aggressor is placed in a covered bucket for a half hour or so, and then returned to the rearranged fish aquarium, the hope being that it'll think it's in a new part of the river, lake or sea. It won't stop being territorial of course, but it may define new territorial boundaries, and with luck, it'll accept the fish already in the aquarium as part of the scenery rather than trespassers.
Removing weaker fish
Obviously removing a weaker fish is a good idea if it is being bullied, but what often happens is that the aggressor now starts picking on whichever is the weakest fish among those that remain. Think about why the aggressor is causing trouble. It may be that adding more tropical fish, rather than removing fish, will make things better. If the stocking density is so high that no single fish can claim a stable territory, aggression tends to subside. This is the classic solution to aggression among mbuna cichlids, but it also works well with fish such as mudskippers, damselfish and Ameca splendens. The problem is that overstocking massively increases the amount of filtration and maintenance work required for good water quality.
Readers will have noticed that these solutions and workarounds don't come with guarantees! In truth, fixing aggression problems in improperly stocked fish aquariums is very difficult. This is why proper research is so important. Good aquarium books will usually give some indication of the social behavior and space requirements of each species. Two of the numeric rules of thumb from this article are gathered into a small code sketch below.
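Here is that sketch: a toy Python encoding of two stocking heuristics mentioned in this article (tiger barbs in groups of at least 10; at least twice as many female mollies as males). The function names, the rule table, and the generic floor of six for unlisted schooling species are illustrative assumptions, not a standard from the aquarium literature.

```python
# Toy encoding of two stocking rules of thumb from the article above.
# Names, the rule table, and the generic floor of six are assumptions.

MIN_SCHOOL_SIZE = {"tiger barb": 10}  # groups under 10 tend toward fin-nipping

def school_size_ok(species: str, count: int) -> bool:
    """True if the school is large enough to diffuse aggression."""
    return count >= MIN_SCHOOL_SIZE.get(species, 6)  # assumed generic floor

def sex_ratio_ok(females: int, males: int) -> bool:
    """True if there are at least two females per male (e.g., for mollies)."""
    return males == 0 or females >= 2 * males

print(school_size_ok("tiger barb", 8))   # False -- expect fin-nipping
print(sex_ratio_ok(females=4, males=2))  # True
```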
Writing Mechanics & Grammar
Learning grammar rules and the mechanics of writing are critical components of learning to write. Having strong skills in writing and grammar allows writers to get their message or story to their readers in a clear and understandable way. It is important to know the rules of grammar and how to use them properly.
Time4Writing.com is a useful site for finding resources to help students improve their familiarity with writing and grammar. You'll find free writing resources covering capitalization, parts of speech, and punctuation. The articles on each topic provide additional guidance, and students can practice their skills using activities that include video lessons, printable worksheets and quizzes, standardized test prep materials, and interactive games. For a more in-depth look at the mechanics of writing, eight-week courses are available. Parents and educators can use these resources to motivate students and reinforce skills. Students can gain a better understanding of writing and grammar as well as boost their confidence and expand their skills with online practice.
Free Writing Resources: Printables, Videos, Presentations, and Games
Knowing the parts of speech, using them correctly, and understanding how they relate to one another is an important early step in creating strong writing skills. From nouns and verbs to prepositions and conjunctions, each part of speech plays a key role in sentence structure and clarity of thought.
The question of subject-verb agreement highlights a writer's need to make sentences clear and understandable. Having plural subjects with singular verbs, or the reverse, results in nobody being quite sure who is doing what. This becomes particularly important when long phrases separate the subject from the verb. Learning about and understanding subject-verb agreement helps writers create clear sentences that the reader will understand.
In a world of lowercase texting, learning proper capitalization takes on a whole new meaning. From learning to distinguish between "capitonyms" (a turkey in Turkey, a march in March) to learning the basic rules of capitalization, students have much to gain from mastering this area of writing mechanics.
Punctuation marks are signposts used by writers to give directions to their readers about which way a sentence is going. Using punctuation properly is one of the most crucial elements in making the meaning of a sentence absolutely clear. Take our favorite example: "Let's eat Grandma!" becomes considerably less worrisome when a single comma is added: "Let's eat, Grandma!"
Some of the most interesting words in English are homophones, homonyms, and homographs. However, intrigue can quickly give way to confusion when dealing with sound-alikes and look-alikes! Learning the distinction between identical spellings with two different pronunciations, or two different spellings with identical pronunciation, is not just confusing but potentially frustrating. Still, with the proper approach, students can be brought to appreciate homophones, homonyms, and homographs.
DINOSAURS AND BIRDS
Birds probably evolved from the maniraptors, a branch of bird-like dinosaurs. This idea has been hotly debated for over a hundred years, but new fossil evidence is reinforcing the theory, which is now accepted by most scientists. To determine what animals birds evolved from, scientists use fossil evidence to trace the emergence of bird-like traits. Many bird-like creatures from the Mesozoic Era have been found, some of which are clearly dinosaurs. There are many similarities between birds and theropod dinosaurs, including the number of openings in the skull (both are diapsids), secondary palate structure, leg and foot structure and proportions, upright stance, oviparous reproduction (laying eggs), bone structure (bones interlaced with vessels), and, in some instances, feathers. Recently, scientists have reorganized the groups in which many animals are classified using a system called cladistics. Since birds are descended from dinosaurs, they are placed in the same group, Dinosauria. So the national symbol of the United States, the bald eagle, is actually a dinosaur.

FEATHERED, BIRD-LIKE DINOSAURS
In the last few years, many fossils of feathered dinosaurs have been found near Yixian, in Liaoning Province, China. Two new Chinese feathered dinosaurs dating from between 145 and 125 million years ago (the late Jurassic and early Cretaceous periods) have been found: Protarchaeopteryx robusta and Caudipteryx zoui. Their features are more dinosaur-like than bird-like, and they are considered to be theropod dinosaurs. Their feathers were symmetrical, which indicates that they could not fly (flightless birds have symmetrical feathers, while those that fly have asymmetrical ones). These finds, along with the feathered dinosaur Sinosauropteryx, found a few years earlier in the same region of China, and the bird-like Unenlagia from Argentina, reinforce the theory that birds are descended from dinosaurs.

THE OLDEST-KNOWN BIRDS
Archaeopteryx is one of the most famous and oldest-known fossil birds, dating from the late Jurassic period (about 150 million years ago). It is now extinct. Although it had feathers and could fly, it had similarities to dinosaurs, including its teeth, skull, and certain bone structures. Some paleontologists think that Archaeopteryx was a dead end in evolution and that the maniraptors led to the birds. The first Archaeopteryx fossil, a feather impression, was found in 1860 in a limestone quarry in Germany. A year later, a much more complete fossilized Archaeopteryx was found at the same quarry; impressions of its feathers and bone structure were quite clear. Many more have been found since, for a total of seven. In 1868, Thomas Henry Huxley interpreted the Archaeopteryx fossil as a transitional bird having many reptilian features. Using the fossils of Archaeopteryx and Compsognathus, a bird-sized and bird-like dinosaur, Huxley argued that birds and reptiles were descended from common ancestors. Huxley's ideas later fell out of favor, only to be reconsidered (after much research and ado) in the 1970s. In 1986, J. A. Gauthier looked at over 100 characteristics of birds and dinosaurs and showed that birds belonged to the clade of coelurosaurian dinosaurs. [Gauthier, J.A., 1986.
Saurischian monophyly and the origin of birds, in The Origin of Birds and the Evolution of Flight, California Academy of Sciences Memoir No. 8.]

Bird fossils are rare because bird bones are hollow and fragile, but Jurassic, Cretaceous, Eocene and Miocene-Pliocene bird fossils have been found. In the chain of creatures leading from dromaeosaurid dinosaurs (advanced theropods) to birds, Sinosauropteryx is the earliest bird-like dinosaur. For now, the bird-like animals include (in chronological order):
- Protoavis (meaning "first bird") is an extinct diapsid from the late Triassic period (80 million years earlier than Archaeopteryx). Its partly toothless jaw and keel-like breast bone were like those of birds. It also had a tail, dinosaur-like rear legs, and hollow bones. There is some dispute about whether this animal was a bird or a dinosaur; the answer depends partly on whether the Protoavis fossil belongs to one genus or two. Fossils have been found in Texas, USA.
- Archaeopteryx - The oldest-known bird had asymmetrical feathers; it could probably fly short distances and was the size of a crow. This bird was probably an evolutionary dead end. (From Germany, 150 million years ago.)
- Sinosauropteryx - Sinosauropteryx had a coat of downy, feather-like fibers that are perhaps the forerunner of feathers. This ground-dwelling dinosaur had short arms, hollow bones, a three-fingered hand, and was about the size of a turkey. (From China, 121-135 mya.)
- Protarchaeopteryx - Long, symmetrical feathers on the arms and tail, but it probably could not fly. It was the size of a turkey. (From China, 121-135 mya.)
- Caudipteryx - A small, very fast runner covered with primitive (symmetrical and therefore flightless) feathers on the arms and tail, with especially long ones on the tail. It was about the size of a turkey. (From China, 121-135 mya.)
- Iberomesornis (meaning "Iberian intermediate bird") was a small, early, toothed bird that lived during the early Cretaceous period. It was capable of powered flight. It had tiny, spiky teeth in its beak and was the size of a sparrow. Its hip was primitive compared to modern birds; its ilium, ischium, and pubis were all parallel and directed backward. Iberomesornis was named by paleontologists Sanz and Bonaparte in 1992. Fossils were found in Spain. The type species is I. romeralli.
- Unenlagia - A much larger ground-dwelling theropod, about 4 feet (1.2 m) tall and 8 feet (2.4 m) long. It had flexible arm movement (up-and-down movements were possible, like those a bird uses in flying). (From Argentina, 90 mya.)
- Patagonykus (meaning "Patagonia claw") was a lightly built meat-eater with a single clawed finger on each hand. It was about 6.5 ft (2 m) long, with long legs, a long tail, and short arms. Patagonykus lived during the late Cretaceous period, about 90 million years ago. It was either a bird-like dinosaur (an advanced theropod) or a primitive bird; it possessed qualities of both groups of animals, and there is much scientific debate over which it is. Patagonykus was similar to Mononykus. Fossils were found in Patagonia, a region of southern Argentina. The type species is P. puertai. Patagonykus was named by paleontologist F. Novas in 1996.
- Velociraptor - A larger, ground-dwelling carnivore with a swiveling wrist bone (this type of joint is also found in birds and is necessary for flight). About 3 feet (1 m) tall. (From Mongolia, 85-80 mya.)
- Mononykus (meaning "single claw") was a small, insect-eater from the Late Cretaceous period, about 72 million years ago. Mononykus was either a bird-like dinosaur (an advanced theropod, or a primitive bird; it possessed qualities of both groups of animals, and there is much scientific debate over which it is. Mononykus had short arms with one long, thick clawed finger on each hand (hence its name). It was lightly built, had long, thin legs, and a long tail. Mononykus was roughly 28 inches (70 cm long). A fossil was found in SW Mongolia in 1923 (and originally called Mononychus). Mononykus was named by Perle, Norell, Chiappe, and Clark in 1993. The type species is M. olecranus. - Hesperornis (meaning "western bird") was an early, flightless bird that lived during the late Cretaceous period. This diving bird was about 3 feet (1 m) long and had webbed feet, a long, toothed beak, and strong legs. Although it couldn't fly, Hesperornis was probably a strong swimmer and likely lived near coastlines and ate fish. Fossils have been found in North America . - Ichthyornis (meaning "fish bird") were 8 inch (20 cm) long, toothed, tern-like, extinct bird that date from the late Cretaceous period. It had a large head and beak. This powerful flyer is the oldest-known bird that had a keeled breastbone (sternum) similar to that of modern birds. It lived in flocks nesting on shorelines, and hunted for fish over the seas. Ichthyornis was originally found in 1872 in Kansas, USA, by a member of paleontologist Othniel C. Marsh's Yale University expedition. Fossils have been found in Kansas and Texas, USA and Alberta, Canada. (Subclass Odontornithes, Order Ichthyornithiformes) - Eoalulavis (from Spain) - the earliest bird that had good maneuverability while flying, even at low speeds (this extra flight control is obtained from a tuft of feathers on the thumb called the alula - it also helps in takeoffs and landings). Over 35,000 Web Pages Sample Pages for Prospective Subscribers, or click below Enchanted Learning Search Search the Enchanted Learning website for: EnchantedLearning.com ------ How to cite a web page
Clay deposits on Mars have been seen as evidence that the planet once had a warm, wet climate, but a new study suggests the clay could have volcanic origins. The study found that the types of clay found on Mars do not necessarily require Earthlike aquatic conditions. Since water is thought to be essential for all life, the Martian clay findings complicate the question of whether early Mars was likely to have been hospitable to life.

The fossilized gigantic footprints detected in the Arabian desert belong to a herd of elephants, scientists say. The seven-million-year-old discovery marks the world's oldest evidence of how these ancient mammals lived.

The common ancestors of humans, apes, and monkeys might have originally arisen in Asia, a new fossil discovery in Libya suggests.
When tracing an ancestry it is common to encounter records filled with obsolete, archaic, or legal terms that can be difficult to interpret. Misinterpreting these terms can make the difference between linking persons to the right generation, parents, spouse or children and getting it wrong. Understanding exactly what is stated in any record is vital before attempting to move to the next generation. Inexperienced or impatient genealogists undermine the quality of their research by applying present-day definitions to documents created in an earlier century. Take the time to use the glossaries provided here, along with other good dictionaries, genealogical reference books and encyclopedias, to interpret documents correctly. Includes the following:
- Abbreviations: Those most commonly used in genealogical records. It is not unusual to find different variations used within the pages of one record, but care should be taken to ensure that in these instances it is a variation and not meant to indicate something else.
- Censuses: Describes what is listed on the census forms in each of the census years. Few, if any, records reveal as many details about individuals and families as do government census records. Substitute records can be used when the official census is unavailable.
- Illnesses: Describes the old-time illnesses and diseases that you will find in old documents and medical records, or listed as causes of death on old death certificates or in old family Bibles.
- Occupations: The following list describes various old occupations, many of which are archaic. These are useful to genealogists since surnames often originated from someone's occupation. Ships' passenger lists, census returns and other documents used in genealogy may give an ancestor's occupation; this list gives more modern interpretations of those terms. They are also useful to historians in general. The list is by no means complete.
- Terms: Defines the genealogical terms you will find in documents used in genealogical research.
- Nickname Meanings
- Worldwide Epidemics
- Tombstone Symbols
This is in addition to another Encyclopedia of Genealogy (eogen) by Dick Eastman.
A risk factor is something that increases your chance of getting a disease or condition. It is possible to develop esophageal cancer with or without the risk factors listed below. However, the more risk factors you have, the greater your likelihood of developing esophageal cancer. If you have a number of risk factors, ask your doctor about reducing your risk.

Some factors cannot be altered, such as age or gender. Esophageal cancer is over 3 times more common in men than in women. Though esophageal cancer can occur at any age, the risk increases with age. Adenocarcinoma incidence is highest in people aged 50-60 years old, while squamous cell carcinoma is more likely to be found in people aged 60-70 years old. Other factors that may increase your chance of esophageal cancer include:

Tobacco smoke and chewing tobacco contain cancer-causing agents (carcinogens) that are absorbed through the surface of the esophagus, causing irritation and cellular changes. The risk of cancer increases with the amount of tobacco used and the number of years as a tobacco user. All forms of tobacco are strongly and directly associated with esophageal cancer, especially squamous cell carcinoma. The risk drops once tobacco use is stopped.

Alcohol itself is not considered a carcinogen, but a by-product of alcohol may create a highly toxic agent that irritates the esophagus. As with tobacco, prolonged alcohol use is directly associated with an increased risk of esophageal cancer, especially squamous cell carcinoma.

Alcohol and Tobacco Combined
The combined effect of alcohol and tobacco use has been shown to substantially multiply the risk of esophageal cancer. The risk of esophageal cancer may increase 3-fold in people who use both alcohol and tobacco compared to using either one alone.

Diets high in red meat are associated with an increased risk of esophageal cancer. Processed meats may also increase risk, but a clear link has not been established. Squamous cell carcinoma risk is higher in those who drink very hot liquids without allowing time for them to cool down; repeated exposure to high temperatures may affect the cellular structure of the esophagus.

Exposure to certain chemicals through work, accidents, or lifestyle habits can harm the esophagus and increase the risk of cancer. These may include:
- Harsh chemicals like drain cleaners or lye, which can burn or damage cells that line the esophagus. Damage can result in scar tissue that narrows the esophagus, making it difficult for food to pass from the throat to the stomach.
- Certain occupations that increase exposure to harmful chemicals. Inhaled chemicals may injure the esophagus; risk may be higher in people who are exposed to solvents in dry cleaning.
- Radiation therapy aimed at the abdomen or chest, which may damage the esophagus.

Current or past medical conditions that may increase the risk of esophageal cancer include:
- Barrett's esophagus: a change in the cells of the lower esophagus when they are exposed to acid from the stomach. The acid reflux causes the cells to change from the normal squamous cells to the columnar cells normally found in the intestine.
- Gastroesophageal reflux disease (GERD): gastric contents, including acid, chronically reflux from the stomach into the esophagus, causing irritation and discomfort.
- Obesity: obesity is associated with Barrett's esophagus and GERD. Excess weight puts more pressure on the lower esophageal sphincter (LES), which contributes to acid reflux.
- Achalasia: the LES does not open properly, so food and liquids have a hard time moving into the stomach. The delay can cause irritation to the cells of the esophagus.
- Human papillomavirus (HPV) infection: HPV can cause normal cells to become abnormal. Persistent HPV infection has been linked to increased risk of several cancers.
- Nutrient deficiencies: being deficient in folic acid, vitamins A and C, riboflavin, molybdenum, or selenium increases the risk of esophageal cancer.

Reviewer: Mohei Abouzied, MD; Review Date: 12/2016; Update Date: 12/08/2015
Reflux in infants
Most babies spit up once in a while, but some do it a lot. This is called reflux, which is short for gastroesophageal reflux or GER. Reflux happens when food in the stomach comes back up during or after a feeding. It often happens to babies who were born early, and most babies outgrow the condition in a few months. Most babies don't seem to be upset by reflux. If your baby had reflux in the NICU, the nurses may have shown you how to feed and position your baby to minimize spit-up. These tips may help:
- Hold your baby upright during feeding.
- Try smaller, more frequent feedings.
- Burp your baby often, especially if you are feeding him with a bottle.
- Try a different nipple on your baby's bottle so he swallows less air.
- Keep your baby still after feeding.
These symptoms may mean that your baby has other problems digesting food:
- The spit-up is bright yellow or green.
- There is a large amount of spit-up.
- Your baby arches his back or cries during feeding.
- Your baby vomits with great force (projectile vomiting).
Last reviewed August 2014

Frequently Asked Questions
How do I calculate adjusted age for preemies? Chronological age is the age of a baby from the day of birth. Adjusted age is the age of the baby based on his due date. To calculate adjusted age, take your baby's chronological age (for example, 20 weeks) and subtract the number of weeks premature the baby was (6 weeks). This baby's adjusted age (20 - 6) is 14 weeks. Health care providers may use this age when they evaluate the baby's growth and development. Most premature babies catch up to their peers developmentally in 2 to 3 years. After that, differences in size or development are most likely due to individual differences, rather than to premature birth. Some very small babies take longer to catch up.

What does it mean if a baby is born "late preterm"? Late preterm means that a baby is born after 34 weeks but before 37 weeks of pregnancy. It's important to try to have your baby as close to 39 weeks of pregnancy as possible. In the last few weeks of pregnancy, your baby's organs, like his brain, lungs and liver, are still growing. Waiting until you're at least 39 weeks also gives your baby time to gain more weight and makes him less likely to have vision and hearing problems after birth. Your baby will also be better able to suck and swallow and stay awake long enough to eat after he's born. Babies born early sometimes can't do these things.
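The adjusted-age arithmetic in the FAQ above is simple enough to capture in a few lines. The sketch below is purely illustrative; the function name is ours, not from the article.

# Illustrative sketch of the adjusted-age calculation described above.
def adjusted_age_weeks(chronological_weeks, weeks_premature):
    """Adjusted age = chronological age minus the number of weeks premature."""
    return chronological_weeks - weeks_premature

print(adjusted_age_weeks(20, 6))  # prints 14, matching the article's example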
Teaching hearing-impaired and visually impaired pupils
Hearing-impaired students vary widely in their needs. Many use note-takers in class because it is difficult to speechread and take notes at the same time. A deaf or hard-of-hearing student benefits from good teaching strategies and classroom accommodations: seat the student with a clear view of the teacher, the projector or board, and as much of the class as possible, and pause before resuming class so the student can catch up. E-learning environments, such as virtual classrooms and animation environments designed for deaf and hard-of-hearing pupils, can also support learning. A common challenge is that hearing devices are not always available, and academic structures and the attitudes of staff and other pupils can limit participation.

Similar considerations apply to students with visual impairments. The mission of teaching these students is to address and encompass all aspects of educating pupils who are blind or visually impaired. Arrange the classroom so that a student with impaired vision can move around it safely, and take classroom-management needs into account when accommodating a blind or visually impaired student. With thoughtful strategies and supportive classroom management, both hearing-impaired and visually impaired pupils can participate fully in class.
As you learned from the "Hollow Penny" activity, pennies minted before 1982 are pure copper. Newer pennies are actually almost entirely composed of zinc, but the thin coating of copper on the outside makes new pennies look very much like they are made of copper. Copper and zinc are different elements and therefore have different density values. By determining the density of each type of penny, the composition of the metal can be confirmed: older copper pennies should have a different density than zinc pennies.

Use the internet to find the theoretical density of zinc and copper. Look for units of g/cm3. Use this information to make a hypothesis for the experiment.
Density of copper = ____________ g/cm3
Density of zinc = ____________ g/cm3

1. Using the mint dates, separate the pennies into a copper pile and a zinc pile. You will need 15 pennies of each type.
2. Place 50.0 mL of water into a graduated cylinder. Record the initial water level as 50.0 mL.
3. Put the cylinder on the balance. Record the initial mass of the cylinder and water.
4. Add 3 copper pennies to the cylinder. Notice that the water level rises. Record the final water level. The volume of the pennies can be determined by water displacement (i.e. by taking the difference between the volumes).
5. Put the cylinder on the balance. Record the mass of the cylinder, water and pennies. Find the mass of the coins by subtraction.
6. Add three more pennies, so that there is a total of 6 coins in the cylinder. Record the volume and the mass.
7. Keep adding pennies, in groups of 3, until you have put all 15 copper pennies into the water.
8. When finished with the copper pennies, repeat the process using the zinc pennies.

Record your data in two tables: one for the copper pennies and one for the zinc pennies.

Calculate the density for each trial. Since you have five density values for each metal, find the average density for each metal. Compare the theoretical density to the average experimental density by calculating the % error. Using your graphing calculator or LoggerPro, create a graph of mass (y-axis) versus volume (x-axis) for each metal, plotting the five data points for each. Calculate the slope of each line; the slope represents mass/volume, which is the density of the metal. Both lines can be plotted on the same graph so that the results can be easily compared. Print out a copy of the graph to include in your lab report, and be sure to write the slope of each line on the graph.
- State your results. What is the average experimental density for each metal?
- State the theoretical value.
- State the % error.
- Think about and suggest at least two valid sources of error. Suggest at least two ways to improve the experiment.
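If you would like to check your calculations, the short program below mirrors the analysis steps: density per trial, average density, % error against the theoretical value, and a least-squares slope for the mass-versus-volume line. This is an illustrative sketch only; the sample numbers are placeholders for your own measurements.

# Sketch of the lab's analysis. Replace the placeholder data with your
# own cumulative volumes (mL) and masses (g) for 3, 6, 9, 12 and 15 pennies.
THEORETICAL = {"copper": 8.96, "zinc": 7.14}  # g/cm^3, literature values

volumes = [1.0, 2.0, 3.0, 4.0, 5.0]     # from water displacement
masses = [9.0, 17.9, 26.9, 35.8, 44.8]  # from balance readings

densities = [m / v for m, v in zip(masses, volumes)]
avg_density = sum(densities) / len(densities)

metal = "copper"
pct_error = abs(THEORETICAL[metal] - avg_density) / THEORETICAL[metal] * 100

# Least-squares slope of mass vs. volume; the slope is the density.
n = len(volumes)
mean_v, mean_m = sum(volumes) / n, sum(masses) / n
slope = (sum((v - mean_v) * (m - mean_m) for v, m in zip(volumes, masses))
         / sum((v - mean_v) ** 2 for v in volumes))

print(f"average density = {avg_density:.2f} g/cm^3")
print(f"% error vs. {metal} = {pct_error:.1f}%")
print(f"graph slope (density) = {slope:.2f} g/cm^3")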
The first step after writing a program is to enter it into the computer: these files are known as the source code. Fortran systems do not usually come with an editor of their own: the source files can be generated using any convenient text editor or word processor. Many text editors have options which ease the drudgery of entering Fortran statements. On some you can define a single key-stroke to skip directly to the start of the statement field at column 7 (but if the source files are to conform to the standard this should work by inserting a suitable number of spaces and not a tab character). An even more useful feature is a warning when you cross the right-margin of the statement field at column 72. Most text editors make it easy to delete and insert whole words, where a word is anything delimited by spaces. It helps with later editing, therefore, to put spaces between items in Fortran statements. This also makes the program more readable. Most programs will consist of several program units: these may go on separate files, all on one file, or any combination. On most systems it is not necessary for the main program unit to come first. When first keying in the program it may seem simpler to put the whole program on one file, but during program development it is usually more convenient to have each program unit on a separate file so that they can be edited and compiled independently. It minimises confusion if each source file has the same name as the (first) program unit that it contains.
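As a concrete illustration of these layout rules (not from the original text), the short utility below scans a fixed-form source file and warns about tab characters, text beyond column 72, and non-numeric label fields. The file handling and message wording are our own.

#!/usr/bin/env python3
# Illustrative checker for fixed-form Fortran layout: labels in columns 1-5,
# continuation marks in column 6, statements in columns 7-72. Lines starting
# with 'C', 'c' or '*' are comments. This is a sketch, not a full parser.
import sys

def check_fixed_form(path):
    with open(path) as src:
        for lineno, line in enumerate(src, start=1):
            line = line.rstrip("\n")
            if "\t" in line:
                print(f"{path}:{lineno}: tab character (standard fixed form expects spaces)")
            if len(line) > 72:
                print(f"{path}:{lineno}: text beyond column 72 will be ignored by the compiler")
            if line[:1] in ("C", "c", "*"):
                continue  # comment line
            label = line[:5].strip()
            if label and not label.isdigit():
                print(f"{path}:{lineno}: columns 1-5 should hold a numeric label or blanks")

if __name__ == "__main__":
    for source_file in sys.argv[1:]:
        check_fixed_form(source_file)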
A new archaeological study attempts to uncover the mysteries that shroud one of the world's most famous paintings: the Mona Lisa. Researchers have long suspected that the model in the artwork is Lisa Gherardini del Giocondo, whose husband commissioned Leonardo da Vinci to make the portrait. In the new study, a team of Italian archaeologists claims to have discovered bone remains of the woman who has been the subject of awe in the art world for quite some time now.

Born in 1479, Lisa belonged to the famous Gherardini family of Florence. After her marriage to a silk merchant named Francesco, she became Lisa del Giocondo. While very little is actually known about her life, the few existing records show that, as a widow, Lisa spent her last years in the convent of Sant'Orsola in Florence, where she died at the age of 63. Over 480 years after her death, a group of researchers has unearthed pieces of bone that could have belonged to the woman whose portrait now sits in Paris's Louvre Museum.

In an attempt to establish the identity of the Mona Lisa, the archaeologists exhumed the bodies buried in the convent complex. After four years of digging, the team has managed to uncover nearly a dozen skeletons, of which only one seems to be a match. Carbon dating revealed that around eight of these samples date back to a period much earlier than Lisa Gherardini's lifetime. The remaining four were discovered inside a common tomb that remained in use until 1545. Upon further analysis, the researchers found one of the specimens, containing only tiny fragments of femur, shinbone and ankle, to belong to the time when Gherardini was alive. Speaking about the find, Silvano Vinceti, the team's lead historian, said:

[It is] a coming together of elements, from anthropological exams to historic documents, which allow us to conclude that the remains probably belong to Lisa Gherardini.

The extremely humid conditions of the burial site have resulted in irreversible deterioration of the skeleton, which is sadly unsuitable for DNA testing. Furthermore, the lack of skull remains renders forensic facial reconstruction nearly impossible. While the discovery does not confirm the Mona Lisa's identity, it leaves room for other hypotheses, such as the belief among certain art sleuths that the painting is actually da Vinci's self-portrait.

Via: The Independent
The education system in the United States has always been arranged so that each state sets its own standards for students to achieve. However, there has long been a move toward a more organized, fixed set of federal standards for all states. President George H.W. Bush called for national education standards in his America 2000 plan, but the states eventually rebuffed the standards and accused the federal government of meddling in what they considered a state responsibility. President Bill Clinton established a federal education program called Goals 2000, which was approved by Congress in 1994. It established a National Education Standards and Improvement Council, whose role was to certify standards developed by educational associations. The Department of Education funded such programs in an effort to raise the educational level of American children. The goal of the program was for standardized tests to be established in all states. Students had to pass these tests in order to graduate and advance from the 4th, 8th, 10th and 12th grades. Under the standards, students needed to reach a certain level in math, science, writing and mastery of language. In addition, students were also required to achieve a certain level of analytical decision making, which was assessed by their choice of certain answers as better than others.

Later, under President George W. Bush, the No Child Left Behind law was introduced, which called for testing and progress toward proficiency, but the specific classroom standards and test criteria had to be determined by the states. According to some critics, as a result some states artificially lowered their standards so that their students could appear higher-performing on standardized tests. President Barack Obama included national education standards as one of five pillars of his education reform.

In March 2010, new national standards for K-12 education (which encompasses primary and secondary education) were proposed by state governors and education officials. The effort differed from previous ones in that the states took the lead, with forty-eight states (all except Alaska and Texas) agreeing to participate in the creation of the core standards. The voluntary guidelines, called the Common Core State Standards, call on states to teach specific topics at each grade level and seek to replace the guidelines that vary from state to state. The guidelines were developed by the Council of Chief State School Officers (CCSSO) and the National Governors Association in collaboration with various stakeholders such as content experts, states, teachers, school administrators and parents. After a period when educators, students and members of the public were able to give feedback on the draft, the set of state-led education standards was released in June 2010. The states then began adopting and implementing the standards following their own procedures and processes for adoption.

The core standards lay out detailed, high-achieving goals for language, math and history at every grade level. For example, students in the seventh grade need to be able to "use ideas about distance and angles, how they behave under dilations, translations, rotations and reflections." In reading, they are expected to be able to "analyze how particular lines of dialogue or specific incidents in a story or drama propel the action, reveal aspects of a character or provoke a discussion."
Under the standards, a reading list including many classics has been included, but according to its authors it is meant as a guideline for the appropriate complexity of texts at different grades, not as a required list. The standards define the knowledge and skills students need to acquire within their K-12 education in order to be fully prepared for college and careers when they graduate high school. The standards are aligned with college and work expectations and include rigorous content and the application of knowledge through higher-order skills. They also build on the strengths and lessons of existing state standards, while being evidence- and research-based. The standards are also informed by those of other top-performing countries and aim at preparing students for success in the global economy and society. According to the CCSSO, the language used in the standards is clearer, and the jargon that had confused parents and students in the past was avoided. However, it was stressed that the standards were not intended to prescribe a specific teaching method, because there are different teaching styles in different classrooms and every teacher needs to understand how to get the students to meet the standards.
Teaching and Classroom Resources
To extend our services to you, the math and science teachers, we have assembled a list of resources for classroom activities and lessons focused on math and science. The resources we have located are all available free of charge on the internet. We have tried the links, randomly selected different lessons from the sites, and were satisfied that the information provided was accurate and usable. If you find any link that is not working, please let us know so we may remove it from Newton Network.

Lessons and Lesson Plans
- Discovery Education - A variety of physical science and mathematics lesson plans with hands-on activities, many with weblinks to additional information. Excellent search capability, with search by subject and grade level. Produced by Discovery Communications Educational Outreach.
- Teachnology - Online access to thousands of lesson plans, worksheets, teaching tips, webquests, education games and downloads. Maintained by Teachnology.com.
- PBS Teachers - A very extensive collection of lesson plans and source material searchable by subject area, grade level and topic of interest. Maintained by the Public Broadcasting System.
- Microsoft Lesson Plans - These lesson plans are well written and contain all resources needed to teach the lesson. Maintained by Microsoft Corp.
- Lesson Plans, Inc. - Lesson Plans Inc. is a high school science education resource; science curricula for middle school and elementary school science teachers are also included. Lesson Plans Inc. offers life and physical science teachers curricula aligned to state and national standards.
- Merlot.org - Multimedia Educational Resource for Learning and Online Teaching (MERLOT) is a collection of peer-reviewed and selected higher education, online training materials. Cataloged by subject, material type and learning community, the ability to search Merlot.org makes finding resources you can use in your class very efficient. With a database containing more than 20,000 items, you should be able to find what you need for your class. The best part is that membership to Merlot is free.
- TeachersFirst.com - TeachersFirst is a rich collection of lessons, units, and web resources designed to save teachers time by delivering just what they need in a practical, user-friendly, and ad-free format. We offer our own professional and classroom-ready content along with thousands of reviewed web resources, including practical ideas for classroom use and safe classroom use of Web 2.0.
- Thinkfinity - Verizon Thinkfinity offers comprehensive teaching and learning resources created by our content partners, the most respected organizations in each academic subject and literacy. The easy-to-navigate K-12 resources are grade-specific and are aligned with state standards.

Class Activities and Projects
- Newton's Apple - An excellent collection of more than 300 video clips with accompanying text and teachers' guides, organized by science category. Don't miss the "Try This at Home" section for hands-on activities. Produced by Twin Cities Public Television.
- Science and Arts Gateway for Education (SAGE) - Links to resources in math and science for students and teachers in grades 9-12, with many projects and experiments easily used in the classroom. Produced by Cornell University.
- NASA Education - Many of the resources necessary to bring space into the classroom. Content changes frequently, so check back often. Maintained by NASA.
- Ask Dr. Math - Get the answer to a math question from an expert. Questions and answers are categorized by grade level and subject, and teacher services are provided. Produced by Drexel University.
- Nevada Mining Association - A collection of classroom activities to illustrate principles found in geology and earth sciences. Activities range from asphalt cookies to time travel. All activities are in lesson plan format based on NDOE science standards. Maintained by the Nevada Mining Association with contributions from teachers.
- Mineral Information Institute - Materials and classroom activities to explore natural resources and how they are used. Maintained by MII.
- 42eXplore - Thematic pathfinders for all ages. Multiple links to hundreds of topics in many subjects. Well maintained and easy to access. Sponsored by eduScapes.com.
- Science Spot - An award-winning site filled with science projects, information links and classroom activities targeted at middle school. Developed and maintained by Tracy Trimpe.
- SciJinks - An interactive website sponsored by NASA and filled with weather experiments and activities.
- Exploratorium - Located in San Francisco, the Exploratorium offers the best of hands-on activities. The Exploratorium website extends this experience to your classroom.
- SHODOR - (SHOrt and DORky) An internet resource dedicated to providing students and educators with materials and instruction related to computational science through scientific, interactive computing.
- Siemens Science Day - Access videos, tools and revealing hands-on activities designed for your 4th-6th grade students that will help reinvent science class. You'll find new, original experiments with intuitive directions, materials lists and home extensions.
- EduHound.com - The EduHound site sets are a collection of topic-based online educational resources. Also includes free clipart and educational templates.

Web 2.0 Teaching Tools
- Classtools.net - Create free educational games, quizzes, activities and diagrams in seconds! Host them on your own blog, website or intranet!
- Twiducate - Social networking for schools. This tool allows you to set up a safe environment for your class to use to interact, share ideas, post discussions and collaborate on work. It can even be used to keep parents informed.
- ChartTool - Online graphs and charts. This tool allows you or your students to create, print and share graphs and charts. The tool provides step-by-step instructions for creating many different types of graphs, entry of chart data and formatting of graph output. The tool is simple, easy to use and doesn't require any additional software.
- Skype in the Classroom - Skype in the classroom is a free community to help teachers everywhere use Skype to help their students learn. It's a place for teachers to connect with each other, find partner classes and share inspiration. This is a global initiative that was created in response to the growing number of teachers using Skype in their classrooms.
- Prezi - Prezi provides a method for creating dynamic presentations without using MS PowerPoint. Prezi.com provides free accounts to educators and students and online tutorials to get you started.
- VUE - The Visual Understanding Environment (VUE) is an Open Source project based at Tufts University. The VUE project is focused on creating flexible tools for managing and integrating digital resources in support of teaching, learning and research. VUE provides a flexible visual environment for structuring, presenting, and sharing digital information.
- Edmodo - Edmodo is an online social learning environment for your classroom. Edmodo was created specifically for teachers and schools, giving teachers control over who is part of their online classroom. Assignments and classroom resources can be posted to the classroom, and teachers have the option of grading the assignments on the site itself. Edmodo is completely free.
- BrainFlips - BrainFlips provides the world's best tools for creating, sharing and studying flashcards! Make flashcards on any subject and share them with your friends and classmates. BrainFlips flashcards can incorporate text, images, audio and video to learn any subject. BrainFlips is free but does require registration.
- PBworks - PBworks Education is an online classroom workspace made for teachers and students. Teachers can create their own workspaces to reach their students and parents outside the classroom. Teachers can publish notes, lectures, videos and more to their workspace. PBworks Education also allows teachers to create group projects and find other classrooms to interact with. The basic edition is free for individual teachers.
- CoolTools - We are not highlighting a specific Web 2.0 tool here, but rather a collection of tools. Cool Tools for Schools is a website dedicated to finding Web 2.0 tools that can be used by all teachers. Their collection is organized by topic and includes useful apps for phones and tablets.
- National Geographic Maps - National Geographic Maps are a great tool you can use in your classroom. Maps are available for every part of the world and come with interactive lesson plans as well. You can design your own map using their interactive map kit, making your maps as large or small as you need them. All maps are also printable and free.
- Zunal - Zunal is a website that hosts webquests on every subject. Users can create their own webquests using the templates provided, or search the thousands of free webquests available. Videos, games, quizzes, and more can be added to any webquest. Zunal requires teachers to register, but registration is free.
- WebQuest.Org - This is the heart and soul of webquests. If you are not familiar with what a webquest is or how to use one, then this is the starting point. The site provides a searchable database of webquests and templates for creating a webquest.
- Teachnology WebQuests - A goldmine of webquests indexed by subject, with webquests for math and science as well as many other subjects. Webquests are submitted by teachers and reviewed by Teachnology.

Podcasts
- NASA Podcasts - Informative podcasts in the fields of astronomy and earth sciences. Programs are directed at grades 5-12 but could be used for all grade levels. A full transcript is provided with each podcast.
- NPR Science Friday - Archived broadcasts from NPR Science Friday, hosted by Ira Flatow, covering news in science, technology, health and environmental issues. Most appropriate for older students.
- This Week in Science - Weekly broadcast of science news and discussion. Also includes archives of broadcasts.
- Science Friday Kids - Science Friday for grades 6-8, indexed by subject and topic.
- The Math Factor - A variety of math topics discussed by C. Goodman-Strauss, professor of mathematics at the University of Arkansas. The site also has math puzzles, games and a section on math weirdness.
- Video Math Tutor - A video series of basic math lessons designed as a review or refresher for grades 8-12, covering basic algebra.
- Is All About Math - Video podcasts covering advanced subjects in mathematics.

Local and Regional Lesson Plans
- Trout in the Classroom - Looking for a get-your-hands-wet project to interest your students? The Nevada Department of Wildlife sponsors the Trout in the Classroom project and provides the tank and eggs to get started. Nevada Division of Wildlife and Trout Unlimited.
- Learn the Great Basin - This site was designed and developed by students and teachers to provide student activities for learning about the Great Basin, and contains student-created video clips explaining types of rocks.
Light and dark reactions in photosynthesis
Photosynthesis is divided into two parts: 1. light-dependent reactions (light reactions) and 2. light-independent reactions (dark reactions). Light reactions need light to produce organic energy molecules (ATP and NADPH). They are initiated by colored pigments, mainly the green chlorophylls. Dark reactions make use of these organic energy molecules (ATP and NADPH). This reaction cycle is also called the Calvin-Benson cycle, and it occurs in the stroma. ATP provides the energy, while NADPH provides the electrons, required to fix CO2 (carbon dioxide) into carbohydrates. This means the dark reactions will fail to continue if the plants are deprived of light for too long, since they use the output of the initial light-dependent reactions.
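Taken together, the two stages carry out the familiar net reaction of photosynthesis (a standard equation, added here for reference, not part of the original text):

% Net (overall) photosynthesis reaction
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \;\xrightarrow{\text{light}}\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}

The light reactions capture the light energy that drives this conversion, while the Calvin-Benson cycle performs the actual carbon fixation on the left-hand side.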
The latest news from academia, regulators, research labs and other things of interest
Posted: July 24, 2008

Revolutionary materials reflect ancient forms
(Nanowerk News) Although order is pleasing to the eye, it can quickly become boring. In Islamic architecture, therefore, decoration often follows a strict yet aperiodic pattern. Similar structures also form in certain materials, called quasicrystals. Physicists from the University of Stuttgart and the Max Planck Institute of Metals Research have succeeded in trapping a monolayer of colloidal particles, tiny plastic spheres, in a laser lattice with an aperiodic structure. How the particles arrange themselves in this lattice depends strongly on the laser power. At high intensity, the particles form a quasicrystalline pattern; at low intensities, the particles instead settle into a periodic crystalline arrangement. The researchers were particularly surprised by what they observed at intermediate laser intensities: an Archimedean-like tiling combining both crystalline and quasicrystalline elements. As there are significant differences in the physical and chemical behaviour of quasicrystals and crystals, this combination of both in a single material can be expected to possess interesting and previously undiscovered characteristics ("Archimedean-like tiling on decagonal quasicrystalline surfaces").

Tiles for kitchens and bathrooms are usually square or rectangular. There is a good reason for this: anyone trying to tile a bathroom with five-sided tiles will not be able to cover the wall without gaps. This is only possible with three-, four-, or six-sided tiles. For a long time it seemed that Nature also adhered to this principle. In 1984, however, the Israeli physicist Dan Shechtman reported the first crystals whose surfaces are indeed described by tiles having pentagonal and other shapes, as imaginative as Islamic decoration. Now physicists at the University of Stuttgart and the Max Planck Institute of Metals Research have discovered structures that combine crystalline and quasicrystalline structural elements. They created a light lattice with a quasicrystalline structure by overlapping five laser beams. In the optical potential wells of this lattice they trapped a single layer of 3 micrometer plastic spheres floating in water, which could easily be observed through a microscope. At high laser intensities and deep wells, the light lattice forced the spheres into a quasicrystalline arrangement with pentagon-, star-, and diamond-shaped basic elements. At low intensities, however, the negatively charged particles were barely influenced by the light lattice. Under these conditions they arrange periodically, with each particle surrounded by six neighbours at equal distance, behaving as the scientists had expected.

Particles in quasicrystalline light field: Stuttgart-based physicists superimposed five laser beams to create a quasicrystalline structure. Upon variation of the laser intensity the small plastic spheres arrange differently. At certain light intensities they form a pattern, shown here with red lines, similar to an ancient model - an Archimedean tiling. (Image: Ingrid Schofron/MPI of Metals Research & Jules Mikhael/University of Stuttgart)

"What was actually intriguing is the structure we observed at intermediate intensities," says Clemens Bechinger, head of the 2nd Physical Institute at the University of Stuttgart and fellow at the Max Planck Institute for Metals Research.
"In this case, the plastic spheres arrange strictly periodically in one direction, as in a crystal, however, perpendicular to this direction, the particles order not like in a crystal, but in a quasicrystal," explains Jules Mikhael, a doctoral student of Lebanese decent working on the project. Evidently, the competition between the mutual particle´s interaction and that with the light field results in an intermediate structure which exhibits both crystalline and quasicrystalline properties. There are clearly recognizable bands of squares that are separated by randomly arranged single and double rows of equilateral triangles in an aperiodic rhythm. This structure is similar to a specific Archimedean tiling, as first mentioned by Archimedes and fully characterized by Johannes Kepler in 1619. Archimedean tiles meet two conditions: first, their sides are all of the same length, irrespective of whether they have three, four or more angles. Second, the points where tiles meet must be identical. Using these structural principles, one can construct exactly eleven different tiling patterns that can fully cover a surface. In one of them, rows of squares alternate with rows of equilateral triangles. "At short distances, the intermediate pattern we found is identical to this tiling pattern. At larger scales, however, we observe characteristic disruptions, as the strictly periodic Archimedean pattern would not fit into the quasiperiodic structure of the light lattice," explains Clemens Bechinger. Because crystals and quasicrystals comprise different material classes with differing physical and chemical properties, the observed intermediate structure is striking. "The combination of crystalline and quasicrystalline structural elements will likely lead to novel material properties", says Clemens Bechinger. Because colloids - in contrast to atoms - can be directly observed with optical techniques and their pair interactions can be tailored over a large range, the knowledge gained by the Stuttgart physicists from their experiments with colloids will help to explore the conditions where similar structures form in atomic systems. In this respect, colloidal systems can be regarded as a model system as they reveal a great deal about the conditions under which particles arrange on quasicrystalline surfaces in the manner of Archimedean tilings.
Definition - What does Slip System mean?
In metallurgy, a slip system refers to the slip planes and slip directions along which dislocation motion (slip) occurs, leading to plastic deformation. Slip is a vital mode of deformation in crystals and the most important deformation mechanism in metals and metallic alloys. To begin plastic deformation, a critical resolved shear stress is needed. Different metals have different slip systems due to their different crystallographic structures.

Corrosionpedia explains Slip System
A slip system is composed of a slip plane and a slip direction. An external force causes parts of the crystal lattice to glide along each other, altering the material's geometry. Different types of lattice give rise to different slip systems in the material. Both the slip planes and the slip directions in a crystal have distinct crystallographic forms. The slip planes are the planes with the highest density of atoms, and the slip direction is the direction within the slip plane that corresponds to one of the shortest lattice translation vectors. There are three main families of slip systems:
- Face-centered cubic (FCC) slip occurs along the close-packed planes. The number of slip systems in FCC crystals is 12. FCC metals include copper, aluminum, nickel and silver.
- Body-centered cubic (BCC) slip occurs along the plane containing the shortest Burgers vector. BCC structures are not truly close-packed like FCC, so slip requires heat to activate. Some BCC materials (such as α-Fe) can contain up to 48 slip systems.
- Hexagonal close-packed (HCP) slip is much more limited than in BCC and FCC crystal structures. Generally, HCP crystal structures allow slip only on the densely packed basal planes.
Two types of dislocations are important in crystals: edge dislocations and screw dislocations. Which type occurs depends largely on factors such as the direction of the applied stress.
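Whether a given slip system activates is usually judged with Schmid's law, which resolves the applied stress onto the slip plane and slip direction. This standard relation is added here for reference and is not part of the original definition above:

% Schmid's law: shear stress resolved onto a slip system
\tau_R = \sigma \cos\phi \cos\lambda
% \sigma  : applied uniaxial stress
% \phi    : angle between the loading axis and the slip-plane normal
% \lambda : angle between the loading axis and the slip direction
% Slip begins on the system once \tau_R reaches the critical resolved
% shear stress \tau_{\mathrm{CRSS}}.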
Botnet is a jargon term for a collection of software robots, or bots, which run autonomously. A botnet's originator can control the group remotely, usually through a means such as IRC, and usually for nefarious purposes. A botnet can comprise a collection of cracked machines running programs (usually referred to as worms, Trojan horses, or backdoors) under a common command-and-control infrastructure. Individual programs manifest as IRC "bots". Often the command and control takes place via an IRC server or a specific channel on a public IRC network. A bot typically runs hidden and complies with the RFC 1459 standard. Generally, the perpetrator of the botnet has compromised a series of systems using various tools (exploits, buffer overflows, as well as others; see also RPC). Newer bots can automatically scan their environment and propagate themselves using vulnerabilities and weak passwords. Generally, the more vulnerabilities a bot can scan and propagate through, the more valuable it becomes to a botnet owner community.

Botnets have become a significant part of the Internet, albeit increasingly hidden. Because most conventional IRC networks take measures to block access to previously hosted botnets, owners must now find their own servers. Often a botnet will include a variety of connections: dial-up, DSL, cable, educational, and corporate. Sometimes an owner will hide an IRC server installation on an educational or corporate site, where high-speed connections can support a large number of other bots. Exploitation of this method of using a bot to host other bots has proliferated only recently, as most script kiddies do not have the knowledge to take advantage of it.

Botnets serve various purposes, including denial-of-service attacks, creation or misuse of SMTP mail relays for spam, click fraud, and the theft of application serial numbers, login IDs, and financial information such as credit card numbers. The botnet owner community features a constant and continuous struggle over who has the most bots, the highest overall bandwidth, and the largest number of "high-quality" infected machines (commonly university, corporate, and even government machines). Botnet servers will often liaise with other botnet servers, such that a group may contain 20 or more individual cracked high-speed machines as servers, linked together for greater redundancy. Actual botnet communities usually consist of one or several owners who consider themselves to have legitimate access (note the irony) to a group of bots. Such owners rarely have highly developed command hierarchies between themselves; they rely on individual friend-to-friend relationships. Conflicts often occur between owners as to who owns the rights to which machines, and what sorts of actions they may or may not permit.

Types of Attacks
If a machine receives a denial-of-service attack from a botnet, few choices exist. Given the general geographic dispersal of botnets, it becomes difficult to identify a pattern of offending machines, and the sheer volume of IP addresses does not lend itself to the filtering of individual cases. Passive OS fingerprinting can identify attacks originating from a botnet: network administrators can configure newer firewall equipment to take action on a botnet attack by using information obtained from passive OS fingerprinting.
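As a rough illustration of volume-based detection, a simpler cousin of the fingerprinting approach just described and not taken from the original article, the sketch below tallies TCP SYN packets per source address in a capture file and flags sources that exceed a threshold. It assumes the third-party scapy library; the file name and threshold are arbitrary choices.

# Illustrative sketch: flag source IPs sending unusually many TCP SYNs
# in a packet capture. Threshold and file name are arbitrary.
from collections import Counter
from scapy.all import rdpcap, IP, TCP  # requires the scapy package

SYN_THRESHOLD = 1000  # tune to your traffic baseline

def suspicious_sources(pcap_path):
    syn_counts = Counter()
    for pkt in rdpcap(pcap_path):
        if pkt.haslayer(IP) and pkt.haslayer(TCP):
            # 0x02 is the SYN bit in the TCP flags field.
            if pkt[TCP].flags & 0x02:
                syn_counts[pkt[IP].src] += 1
    return {ip: n for ip, n in syn_counts.items() if n > SYN_THRESHOLD}

if __name__ == "__main__":
    for ip, count in suspicious_sources("capture.pcap").items():
        print(f"{ip}: {count} SYNs")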
Botnets typically use free DNS hosting services such as DynDNS.org, No-IP.com, and Afraid.org to point a subdomain towards an IRC server that will harbor the bots. While these free DNS services do not themselves host attacks, they provide reference points, often hard-coded into the botnet executable. Removing such services can cripple an entire botnet, and these companies have recently undertaken efforts to purge their domains of these subdomains. The botnet community refers to such efforts as "nullrouting", because the DNS hosting services usually redirect the offending subdomains to an inaccessible IP address.

The botnet server structure described above has inherent vulnerabilities and problems. For example, if one server with one botnet channel is found, all other servers, as well as the bots themselves, will often be revealed. If a botnet server structure lacks redundancy, the disconnection of one server will cause the entire botnet to collapse, at least until the owner(s) decides on a new hosting space. However, more recent IRC server software includes features to mask other connected servers and bots, so that a discovery of one channel will not lead to much harm.

The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Understanding how circadian rhythms work is essential in controlling and preventing problems. Scientists have been able to show that when bright light enters the eye, it stimulates photoreceptors in the periphery of the retina. These photoreceptor cells are connected to the body's master clock, or suprachiasmatic nucleus, via a nerve pathway called the retinohypothalamic tract. At the right time of day, light triggers the body clock into resetting its daily rhythms. The circadian pathway breaks down when the photoreceptors in the eye can't perceive these light zeitgebers. Scientists have learned that very bright light can restore the circadian pathway, but they only recently discovered which photoreceptors are responsible. We now know what causes circadian rhythm problems, and it's not what we thought. For over a hundred years, scientists believed that the rod and cone cells of the eye were responsible for our body's reaction to light. We now realize that a newly discovered photoreceptor, called melanopsin, is responsible for activating the circadian pathway, and it does not respond to the same light as rod and cone cells. This discovery may hold the key to why we suffer from sleep and other body clock problems, and it's changing the way we treat circadian rhythm disorders. What is Melanopsin? Melanopsin is a light-sensitive protein that lies in the periphery of the eye's retina. Unlike rod and cone cells, melanopsin detects intensity, or changing levels of light; when it gets brighter in the morning, for example, melanopsin becomes very active and triggers the brain's suprachiasmatic nucleus, or body clock, into shifting to an active day pattern. Under normal light conditions, melanopsin doesn't respond, but in bright light, like sunshine, melanopsin cells become very active. The melanopsin cells are 'responsible for telling our bodies that it is daytime – daylight is always bright light,' according to Dr Rob Lucas at Imperial College London. Melanopsin cells help regulate a healthy body clock and, among other things, help keep us active and alert in bright sunshine. Those of us with fewer melanopsin cells don't recognize changing light signals, and so we don't 'wake up' like we're supposed to. That is why we struggle through the day, feeling down or gloomy, and why we may not sleep well at night. Melanopsin also causes our pupils to constrict and dilate, and pupils don't work as well in people with low melanopsin levels. Apparently, those with a melanopsin deficiency have pupils that are visibly different from those of normal subjects. Scientists haven't quite determined the relationship yet, but in the future, doctors may be able to tell if you're susceptible to circadian rhythm disorders simply by looking at your pupils' reaction to changes in light. Why we get Circadian Rhythm Disorders Apparently those of us who have body clock problems may have fewer melanopsin cells in our eyes. Those with few melanopsin cells don't react to brighter light, and so their body clocks can't tell the difference between bright daytime light and darkness. This may be why very bright light, applied for a longer duration, is necessary for sufferers to feel normal: because they have fewer melanopsin cells, it takes more light for their body clocks to work properly. We were wrong! For decades researchers assumed that the rod and cone cells were responsible for mediating light, and so they created light boxes designed to stimulate rod and cone photoreceptors.
But melanopsin doesn't respond to the same light that rod and cone cells do, and that discovery has dramatically changed the way we deliver light therapy. Blue not white While rod and cone cells respond best to white, full spectrum light, melanopsin cells do not. As a matter of fact, they respond only to a specific bandwidth of blue light, in the range of 446-477 nm (nanometers). This discovery is critical for circadian rhythm disorder sufferers, because it means they will respond much more strongly and quickly to this effective bandwidth of light than to full spectrum. Indeed, studies at Thomas Jefferson and Harvard suggest that this narrow band of light is not only safer, but more effective as well. "Our results imply that shorter wavelengths may be more effective and energy-efficient compared to higher energy polychromatic white light for phase-shifting the human circadian pacemaker… Exposure to the optimum balance of light wavelengths may also reduce the undesirable side-effects associated with therapeutic use of light exposure such as glare, visual discomfort, headaches and nausea." – Steven W. Lockley, MD, June 2003, J. of Endocrinology & Metabolism. Apollo has worked with researchers at TJU to develop this new light technology, called BLUEWAVE®. Because this technology is patented, only BLUEWAVE® products produce 100% of the effective bandwidth of light without the unnecessary full spectrum light. Why white is wrong Researchers used to think the response operated through the visual spectrum, known as the photopic response curve, and light boxes were manufactured accordingly. Now we know that the photoreceptors responsible for circadian rhythm problems do not respond along the visual response curve, but rather to a very narrow slice, or bandwidth, of blue light, from 446-477 nm (nanometers). When stimulated by this bandwidth, melanopsin triggers the suprachiasmatic nucleus, or body clock, to reset its circadian rhythms and produce the hormones that support energy and activity. The problem with 10,000 lux, full spectrum light This new discovery may also give us some insight into why some people experience side effects with 10,000 lux, full spectrum light. In traditional light therapy, bright light at 10,000 lux intensity is used, because full spectrum light is inefficient at producing enough blue light on its own. But by increasing full spectrum light enough to stimulate the melanopsin photoreceptors, we may also be over-stimulating the rod and cone cells and eye muscles, which can result in headaches, eyestrain, excessive glare, nausea, etc. Although BLUEWAVE® is bright, it is only 1/25th as bright as full spectrum light (roughly 400 lux versus 10,000 lux) and is much easier on the eyes. Why We Have Circadian Rhythm Sleep Problems Researchers now believe that circadian rhythm disorders are caused by a melanopsin deficiency. In addition to responding to blue light, melanopsin cells are responsible for detecting intensity changes. If the eye doesn't have enough melanopsin receptors, the body clock can't distinguish daylight signals and can't regulate circadian rhythms and energy, mood and sleep hormones. Those with sleep, SAD or similar circadian rhythm disorders have fewer melanopsin receptors and are more dependent on blue light. Why Current technology falls short The problem with current lighting technology is that it doesn't naturally produce the effective wavelength of light.
Even at 10,000 lux, full spectrum and other white light is inefficient at treating sleep, SAD and related circadian rhythm disorders, because it does not produce enough of the necessary bandwidth. BLUEWAVE® is Changing Light therapy In 2001 Apollo began working with researchers to create a new light source that could deliver the necessary blue light without the over-stimulation problems inherent in current technology. The resulting BLUEWAVE® technology was developed through a National Institutes of Health grant. Clinical testing and use in tens of thousands of products support the effectiveness and increased safety of BLUEWAVE®.
G. H. Hardy (Feb. 7, 1877 – Dec. 1, 1947) was an English mathematician known for his work in number theory and mathematical analysis. Although Hardy considered himself a pure mathematician, he nevertheless worked in applied mathematics when he formulated a law that describes how the proportions of dominant and recessive genetic traits propagate in a large population (1908). Hardy considered it unimportant, but it has proved of major importance in the study of blood group distribution. Because it was also independently discovered by Weinberg, it is known as the Hardy-Weinberg principle. The Hardy-Weinberg equation
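In its usual form the principle states that, for a gene with two alleles at frequencies p and q (where p + q = 1), the genotype frequencies in a large, randomly mating population settle at p² + 2pq + q² = 1. A minimal sketch of the calculation (the allele frequency below is an arbitrary example value):

```python
p = 0.6          # frequency of the dominant allele (example value)
q = 1.0 - p      # frequency of the recessive allele

homozygous_dominant = p ** 2        # AA
heterozygous = 2 * p * q            # Aa
homozygous_recessive = q ** 2       # aa

# The three genotype frequencies always sum to (p + q)^2 = 1.
print(f"AA: {homozygous_dominant:.2f}, Aa: {heterozygous:.2f}, aa: {homozygous_recessive:.2f}")
print(f"sum: {homozygous_dominant + heterozygous + homozygous_recessive:.2f}")
```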
Scientists have discovered hot hydrogen (H) atoms in the upper layer of Earth's atmosphere known as the thermosphere. This finding is of significant importance, as it may change the current understanding of hydrogen distribution and its interaction with other atmospheric constituents, researchers believe. Because H atoms are very light, they can easily overcome a planet's gravitational pull and permanently escape into interplanetary space. The escape of H atoms is one reason why Mars has lost the majority of its water, researchers said. H atoms play a major role in the physics governing the Earth's upper atmosphere, which in turn serves as an important shield for societies' technological assets. "Hot H atoms had been theorised to exist at very high altitudes, above several thousand kilometres, but our discovery that they exist as low as 250 kilometres was truly surprising," said Lara Waldrop, Assistant Professor at the University of Illinois' Coordinated Science Laboratory in the US. "This result suggests that current atmospheric models are missing some key physics that impacts many different studies, ranging from atmospheric escape to the thermal structure of the upper atmosphere," said Waldrop. The prevailing explanation of upper atmospheric physics did not allow for the presence of hot H atoms at such low altitudes. The finding was published in the journal Nature Communications.
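To see why light H atoms are the first to escape, it helps to compare a typical thermal speed with Earth's escape velocity. This is a back-of-the-envelope illustration only; the temperature is an assumed round number, not a figure from the study:

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
m_H = 1.6735e-27        # mass of a hydrogen atom, kg
T = 1000.0              # assumed thermospheric temperature, K
v_escape = 11_200.0     # Earth's escape velocity, m/s

# Root-mean-square thermal speed: v_rms = sqrt(3 k_B T / m)
v_rms = math.sqrt(3 * k_B * T / m_H)

print(f"v_rms of H at {T:.0f} K: {v_rms / 1000:.1f} km/s")
print(f"escape velocity:        {v_escape / 1000:.1f} km/s")
# Even though v_rms sits below escape velocity, the fast tail of the
# Maxwell-Boltzmann distribution lets a steady trickle of H escape;
# an oxygen atom, 16x heavier, moves about 4x slower and escapes far less.
```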
Wisdom teeth, or third molars, normally begin to erupt between the ages of 17 and 21. Some people may notice them trying to come in after the age of 21; their late arrival may be because the structure of the mouth has slowed their progression. The size of the mouth and the teeth differs for every individual. Many people have more than enough room for their wisdom teeth to grow in naturally. Others have much less space at the back of their mouth. This can be due to the size of their teeth or the length of their jaw bone. Wisdom teeth are normally larger than the rest of the teeth in the mouth and may grow in at an angle if there is not enough room for them to move freely into place. If the entire surface of the wisdom tooth is able to erupt through the surface of the gum, then surgery is not needed to remove it. The dentist will measure the area to determine if there is enough room for the tooth to fit comfortably into its space without disrupting the tooth next to it; if there is, the dentist will allow it to remain in place to protect the structure and integrity of the mouth. If space is limited and the dentist can still get a firm grip on the tooth, he or she may be able to pull it without cutting the tissues. This allows the other teeth in the area to rest comfortably in their original position rather than forcing them to be bunched together. When a tooth is impacted, it means it is coming in at an angle or sideways and is pushing against the back of the tooth next to it, often because it has been forced out of its original location. When this occurs, there is no way for the dentist to grip the tooth and pull it. In this situation, surgery is required to remove both the crown of the tooth and the roots that extend back or downward into the jaw bone. Because of the tooth's size, the dentist may have to break it into smaller sections in order to remove it safely. This allows them to remove all pieces of the tooth, even those located near the jaw bone.
Language is an inseparable part of human life and society which paved the way for civilization. It is an arbitrary system of sounds produced by some articulators by human beings with the purpose of interaction and communication with each other. According to Aristotle, language stands for speech that humans produce for exchanging their experiences, resulting in ideas and emotions. Another great linguist, Saussure, says that language is an arbitrary system of signs constituted of the signifier and signified. So it is very obvious to say that language is a symbolic, vocal, and articulatory system of communication that is related to society and culture. It is notable that human language is different from other animals' communication. Language has some major characteristics which give further clarification about it. These are described below: Human language is arbitrary because there is no inherent relationship between the words of the language and the meaning or the idea conveyed by them. For example, there is no inherent relationship between the word 'tree' and the actual thing called a 'tree'. Also, the same thing has different names in different languages. So it is clear that every language is arbitrary. Again, if language were not arbitrary, there would be only one language in the world. Language is necessary for socialization. It exists in society to fulfill different purposes like developing culture, civilization and human relationships. Being a means of communication, language is for interaction. There would be no use for language if everyone lived alone. So language is highly social. On the basic level, language consists of some sounds and symbols. These symbols are arbitrary, but they are universally accepted and agreed upon collectively. So any word of any language needs to be meaningful. The skill of language depends on interpreting the symbols correctly. If anyone fails to interpret the signs properly, the language bears no significance or meaning to him. That is why it is said that language is symbolic. As discussed previously, language depends on some symbols. But these symbols need to be systematically arranged to provide meaning. Every language has its own system of arrangement. It has phonological and grammatical rules, including morphology, syntactic systems, etc., so it is evident that language is not just a haphazard way of communication but systematic and organized. Non-instinctive and conventional: Language is not predetermined. It is the result of evolution and convention. It is passed down from generation to generation. It is non-instinctive because it must be acquired by human beings. It is not a matter of heritage. Language is acquired by a human simply because he has the ability to learn a language. Productive and creative: Every language in the world is highly productive and creative. Any human can utter novel utterances which he has not heard before. This is because language has the capacity for creativity, and this is how language becomes productive as well. By producing novel utterances, a language becomes enriched and productive. In this way it becomes able to fulfill its function more precisely. These are the major characteristics or properties of a language. These characteristics make a language the best medium to exchange thoughts, ideas, and emotions.
Essential Idea: Genes may be linked or unlinked and are inherited accordingly. - Outline answer to each objective statement for topic 10.2 (coming soon) - Quizlet study set for this topic (coming soon) Statements & Objectives: 10.2.U1 Unlinked genes segregate independently as a result of meiosis. - State the difference between independent assortment of genes and segregation of alleles. - Describe segregation of alleles and independent assortment of unlinked genes in meiosis. 10.2.U2 Gene loci are said to be linked if on the same chromosome. - Define autosome and sex chromosome. - Describe what makes genes “linked.” 10.2.U3 Variations can be discrete or continuous. - Contrast discrete with continuous variation. - State an example of a discrete variation. - State an example of a continuous variation. 10.2.U4 The phenotypes of polygenic characteristics tend to show continuous variation. - Explain polygenic inheritance using an example of a two-gene cross with codominant alleles. - Outline the use of Pascal’s triangle to determine phenotype frequencies that result from polygenic crosses. - State that a normal distribution of variation is often the result of polygenic inheritance. - State example human characteristics that are associated with polygenic inheritance. 10.2.U5 Chi-squared tests are used to determine whether the difference between an observed and expected frequency distribution is statistically significant. - State the two possible hypotheses of a statistical test. - Calculate the chi-square value to determine the significance of differences between the observed and expected results of a genetic cross. - Determine the degrees of freedom and critical value for the chi-square test. - Draw a conclusion of significance by comparing the calculated and critical chi-square values. 10.2.A1 Completion and analysis of Punnett squares for dihybrid traits. - Determine possible allele combinations in gametes for crosses involving two genes. - Use correct notation to depict a dihybrid cross between two unlinked genes. - Construct a Punnett square to show the possible genotype and phenotype outcomes in a dihybrid cross. 10.2.A2 Morgan’s discovery of non-Mendelian ratios in Drosophila. - Describe how Morgan discovered the relationship between eye color and sex in Drosophila. 10.2.A3 Polygenic traits such as human height may be influenced by environmental factors. - Outline two example environmental factors that can influence phenotypes. - Compare continuous to discrete variation. 10.2.S1 Calculation of the predicted genotypic and phenotypic ratio of offspring of dihybrid crosses involving unlinked autosomal genes. - Determine the predicted genotype and phenotype ratios of F1 and F2 offspring of dihybrid crosses. 10.2.S2 Identification of recombinants in crosses involving two linked genes. - Use correct notation to show alleles of linked genes. - Construct a Punnett square to show the possible genotype and phenotype outcomes in a dihybrid cross involving linked genes. - Explain how crossing over between linked genes can lead to genetic recombinants. 10.2.S3 Use of chi-squared test on data from dihybrid crosses. - Calculate a chi-square value to compare observed and expected results of a dihybrid genetic cross. - Using the df and critical chi-square value, determine if there is a significant difference between observed and expected results of a dihybrid cross.
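To make objective 10.2.S3 concrete, here is a minimal sketch of a chi-squared test on a dihybrid cross, using Mendel's classic 315:108:101:32 pea counts as the observed data (any observed counts could be substituted):

```python
from scipy.stats import chisquare, chi2

# Observed F2 phenotype counts for a dihybrid cross
# (Mendel's round/yellow pea data, used here as an example).
observed = [315, 108, 101, 32]
total = sum(observed)

# Expected counts under the Mendelian 9:3:3:1 ratio for unlinked genes.
ratio = [9, 3, 3, 1]
expected = [total * r / 16 for r in ratio]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)

df = len(observed) - 1                 # 4 phenotype classes -> 3 degrees of freedom
critical = chi2.ppf(0.95, df)          # critical value at p = 0.05

print(f"chi-square = {stat:.3f}, critical (df={df}) = {critical:.3f}, p = {p_value:.3f}")
if stat < critical:
    print("no significant difference: data consistent with 9:3:3:1")
else:
    print("significant difference: reject the 9:3:3:1 hypothesis")
```

For these counts the statistic is about 0.47, well below the critical value of 7.815, so the observed ratios are consistent with independent assortment.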
10.2.NOS Looking for patterns, trends and discrepancies - Mendel used observations of the natural world to find and explain patterns and trends. Since then, scientists have looked for discrepancies and asked questions based on further observations to show exceptions to the rules. For example, Morgan discovered non-Mendelian ratios in his experiments with Drosophila. - Describe the trends and discrepancies that led Morgan to propose the idea of linked genes.
The centimeter (symbol: cm) is a unit of length in the metric system. It is also the base unit in the centimeter-gram-second system of units. The centimeter is a practical unit of length for many everyday measurements. A centimeter is equal to 0.01 (or 1E-2) meter. A foot (symbol: ft) is a unit of length equal to 0.3048 m, used in the imperial system of units and United States customary units. The unit of foot derived from the human foot. It is subdivided into 12 inches. An inch (symbol: in) is a unit of length defined as 1⁄12 of a foot and 1⁄36 of a yard. Though traditional standards for the exact length of an inch have varied, it is now equal to exactly 25.4 mm. The inch is a popularly used customary unit of length in the United States, Canada, and the United Kingdom. - How many feet and inches are in 26.94 centimeters? - 26.94 centimeters is equal to how many feet and inches? - How to convert 26.94 centimeters to feet and inches? - What is 26.94 centimeters in feet and inches? - How many is 26.94 centimeters in feet and inches?
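All of the questions above reduce to a couple of lines of arithmetic, since an inch is exactly 2.54 cm and a foot is 12 inches; a minimal sketch:

```python
cm = 26.94

total_inches = cm / 2.54          # 1 inch is exactly 25.4 mm, i.e. 2.54 cm
feet = int(total_inches // 12)    # 12 inches per foot
inches = total_inches - feet * 12

print(f"{cm} cm = {feet} ft {inches:.2f} in")   # 26.94 cm = 0 ft 10.61 in
```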
Independence skills can be supported and worked on in a variety of contexts, including at home. Working on independence skills will look different for every student but may include areas such as: - Personal care - Taking part in household chores/carrying out jobs - Making choices - Expressing preferences - Assertiveness skills - Taking responsibility for self and belongings - Asking for help Below are some top tips and visual supports. - We suggest you start by looking at what your child can currently do and working towards the next step on from this. - Break tasks down: being asked to complete a task from start to finish or being given general instructions such as ‘tidy your room’ can be overwhelming. Breaking activities down bit by bit, e.g. first put your clothes in the basket, then put your toys in the box, will help children understand what is being asked. - Modelling: show your child what you would like them to do so it is clear. This may need to happen more than once. - Allow processing time: it may take your child time to understand an instruction. Giving short instructions and allowing them time to think can help. - Support visually where possible (please see visual supports below). - Make activities fun!
Any profits not paid out as dividends are shown in the retained profit column on the balance sheet. The amount shown as cash or at the bank under current assets on the balance sheet will be determined in part by the income and expenses recorded in the P&L. The profit function equation is made up of two primary functions: the revenue function and the cost function. If x represents the number of units sold, we will name these two functions as follows: R(x) = the revenue function; C(x) = the cost function. The profit function is then P(x) = R(x) − C(x). How do you calculate profit or loss? The profit or gain is equal to the selling price minus the cost price. Loss is equal to cost price minus selling price. What is a sentence for "profit"? How to use profit in a sentence: I turned a good profit on that piece of real estate. Land for farming purposes is expensive, and wages are high, leaving small profit, unless it happens that a man, with his family to assist him, works his own land. Is profit an asset? For instance, the investments via which profit or income is generated are typically put under the category of assets, whereas the losses incurred or expenses paid or to be paid are considered to be a liability. What is the difference between profit and cash? Understanding the difference between profit and cash is very important in the finance industry. Profit is defined as revenue less all the expenses of a company in a certain period, while cash flow is cash that flows in and out of a business throughout a certain period of time. Is revenue a profit or loss? Profit is the amount left after deducting the expenses from the revenue. Revenue is the blanket term of income, or the superset of income; profit is the subset of revenue. The company’s lifeline is the revenue earned; otherwise, the company will be under loss. What is the difference between interest and profit? In short, interest is income that lenders (usually banks) make on loans, whereas profit is the net result of a company’s income (after all charges are accounted for), whether that company is a bank or not. What is profit loss? The concept of ‘loss of profit’ is used in a broad sense, defined as any difference between the actual profits generated by an undertaking and the profits it would have generated in the absence of an infringement. What is standard profit? Standard profit is the difference between sales and standard costs. Standard profit margin is the ratio of standard profit to sales, and it tells the analyst how much profit the business will make after paying for standard cost. Why is profit a debit? Retained earnings increase when there is a profit, which appears as a credit. Therefore, net income is debited when there is a profit in order to balance the increase in retained earnings.
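A minimal sketch of the profit function described above; the revenue and cost functions are invented linear examples, not figures from the text:

```python
def revenue(x):
    """R(x): revenue from selling x units (example: $15 per unit)."""
    return 15.0 * x

def cost(x):
    """C(x): cost of producing x units (example: $2,000 fixed + $7 per unit)."""
    return 2000.0 + 7.0 * x

def profit(x):
    """P(x) = R(x) - C(x); a negative value is a loss."""
    return revenue(x) - cost(x)

for units in (100, 250, 500):
    p = profit(units)
    label = "profit" if p >= 0 else "loss"
    print(f"{units} units: {label} of ${abs(p):,.2f}")
```

With these example functions the business breaks even at 250 units, which is exactly where R(x) = C(x).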
In the previous example we saw that quantum mechanics allows us to calculate experimentally verifiable probabilities. In this post we will further build our theory by taking another simple example and considering the quantum mechanical description of that system. Let's consider a particle trapped in a box. Its classical description comprises the position and momentum of the particle at some instant of time. From this initial data, and using the classical equations of motion, we can calculate the future behavior of the particle for all times. Now let's look at the quantum description of the system. How to write the state vector for the particle. From our previous discussion we know that quantum mechanics associates a vector with the state of the particle. So our first task is to work out how to write this vector. For this we consider some experimentally measurable observable — for example the energy of the particle. Let's assume that the various energy measurements of the particle result in values $E_1, E_2, \ldots$ We can associate a vector with each of these energy values. Now, since the count of such vectors will be infinite (for infinite possible values of the energy), we cannot easily represent these vectors as column matrices as we did in our previous example. So we need to introduce a new compact notation. A new notation We adopt a new notation as follows. To each energy value $E_n$, we associate a vector denoted as $|E_n\rangle$. The vectors $|E_n\rangle$ span a space called the state space of the particle, and all possible states of the particle are some linear combination of the vectors $|E_n\rangle$. We will come to this point shortly. Let's first consider some properties of the vectors $|E_n\rangle$. These vectors satisfy the following relation: $\langle E_m | E_n \rangle = 0$ for $m \neq n$. The left side of the above equation represents the scalar (or inner) product of the two vectors $|E_m\rangle$ and $|E_n\rangle$. The value 0 on the right side tells us that the two vectors are orthogonal to each other. This means that the two vectors are linearly independent, and that one cannot be written in terms of the other. Now if we consider the scalar product of $|E_n\rangle$ with itself, we get $\langle E_n | E_n \rangle = 1$. In this case the scalar product on the left-hand side is a measure of the norm of the vector (it is the square of the norm of the vector $|E_n\rangle$). From the above equation we see that the vectors are normalized. The last two relations can be written in the more compact form $\langle E_m | E_n \rangle = \delta_{mn}$, where $\delta_{mn}$ is called the Kronecker delta. Its value is 1 if the two indices in the subscript are the same; else the value is 0. More generally, the scalar product $\langle \phi | \psi \rangle$ of two arbitrary vectors is a complex number. It quantifies how much the vector $|\psi\rangle$ resembles $|\phi\rangle$, and it satisfies the following properties: $\langle \phi | \psi \rangle = \langle \psi | \phi \rangle^*$ and $\langle \phi | \big( a|\psi_1\rangle + b|\psi_2\rangle \big) = a\langle \phi|\psi_1\rangle + b\langle \phi|\psi_2\rangle$. Here $a$ and $b$ are complex numbers. The outer product $|E_m\rangle\langle E_n|$ of the vectors is an operator that can transform a vector into another. For example, consider $\big( |E_n\rangle\langle E_n| \big)|\psi\rangle = \langle E_n|\psi\rangle \, |E_n\rangle$. The operator $|E_n\rangle\langle E_n|$ in this case is a projector that projects any arbitrary vector $|\psi\rangle$ onto the vector $|E_n\rangle$. One interesting property of the vectors $|E_n\rangle$ is that the sum of all the projectors they give rise to equals the identity operator. Mathematically it can be written as $\sum_n |E_n\rangle\langle E_n| = I$. This is called the completeness relation. It allows us to treat the vectors $|E_n\rangle$ as base vectors, and use these base vectors to write any vector, operator, or scalar product. As an example, $|\psi\rangle = \sum_n c_n |E_n\rangle$, where the $c_n = \langle E_n | \psi \rangle$ are called the expansion coefficients for the expansion of $|\psi\rangle$ in the base vectors $|E_n\rangle$. Finally, the vectors satisfy the equation $H |E_n\rangle = E_n |E_n\rangle$. In this case we see that the vector $|E_n\rangle$ is not changed by the action of the operator $H$, and is merely multiplied by a number $E_n$. This is called an eigenvalue equation.
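The abstract relations above can be checked numerically. In the sketch below the basis kets are modelled as columns of the identity matrix; this is a finite truncation for illustration only, since the real state space is infinite-dimensional:

```python
import numpy as np

N = 4                        # truncate to the first N energy eigenvectors
basis = np.eye(N)            # column n models the ket |E_n>

# Orthonormality: <E_m|E_n> = delta_mn
gram = basis.T @ basis
assert np.allclose(gram, np.eye(N))

# Completeness: sum_n |E_n><E_n| = identity
completeness = sum(np.outer(basis[:, n], basis[:, n]) for n in range(N))
assert np.allclose(completeness, np.eye(N))

# The projector |E_1><E_1| picks out the |E_1> component of any vector.
psi = np.array([0.5, 0.5, 0.5, 0.5])
projector = np.outer(basis[:, 1], basis[:, 1])
print(projector @ psi)       # -> [0, 0.5, 0, 0]: only the |E_1> part survives
```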
The operator $H$ on the left-hand side is called the Hamiltonian. It represents the total energy of the system. The above equation tells us that our base vectors are the eigenvectors of the Hamiltonian, and the eigenvalues associated with these vectors are the experimentally observable energies of the system (from where we started our discussion). Let's go back to our particle in a box, and to our question of writing the state vector for the particle. The state of the particle in the box at some time $t$ is given by $|\psi(t)\rangle$. We can expand it in the eigenstates of the Hamiltonian as $|\psi(t)\rangle = \sum_n c_n(t) |E_n\rangle$, where $c_n(t) = \langle E_n | \psi(t) \rangle$ is the overlap, or the component of $|\psi(t)\rangle$ along the base vector $|E_n\rangle$. The time evolution of the system is given by the famous Schrödinger equation $i\hbar \frac{d}{dt}|\psi(t)\rangle = H |\psi(t)\rangle$, where $H$ is again the Hamiltonian of the system, representing its total energy. The solution of this equation (in our case) is $|\psi(t)\rangle = e^{-iHt/\hbar} |\psi(0)\rangle$. Using the eigenvalue equation for the Hamiltonian, and the expansion of the state vector in the eigenvectors of the Hamiltonian, we can write the solution as $|\psi(t)\rangle = \sum_n c_n(0) \, e^{-iE_n t/\hbar} |E_n\rangle$. This is the complete description of the particle in the box for all time. Let's now see what information we can extract from this description. First let's ask the question: what is the energy of the particle? For this we will have to perform an experiment where we measure this energy. If we perform such an experiment, keeping in mind that quantum mechanics allows us to calculate the probability of random events happening in nature, we get the following answer: - The result of the energy measurement will be a random value among all the eigenvalues of the Hamiltonian. - The probability of obtaining a given eigenvalue (say $E_n$) is given by $|c_n|^2$. This is the modulus square of the amplitude $c_n$ associated with the state $|E_n\rangle$. In our next post we will see how to extract more information from the state vector of the particle.
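A short numerical sketch of the final result: given expansion coefficients $c_n$ and energies $E_n$ (both invented for the example, though the 1, 4, 9 pattern mirrors the particle-in-a-box spectrum, where $E_n \propto n^2$), each coefficient just picks up a phase $e^{-iE_n t/\hbar}$, so the measurement probabilities $|c_n|^2$ stay constant in time:

```python
import numpy as np

hbar = 1.0                               # work in natural units
E = np.array([1.0, 4.0, 9.0])            # example eigenvalues E_n
c = np.array([0.8, 0.5, 0.3], dtype=complex)
c = c / np.linalg.norm(c)                # normalise so probabilities sum to 1

for t in (0.0, 1.0, 2.5):
    c_t = c * np.exp(-1j * E * t / hbar)  # c_n(t) = c_n e^{-i E_n t / hbar}
    probs = np.abs(c_t) ** 2              # P(E_n) = |c_n(t)|^2
    print(f"t={t}: probabilities {np.round(probs, 3)} (sum {probs.sum():.3f})")
```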
Choose the correct answer: Something that is ___________ is short and clear, like a summary or even a definition, expressing what needs to be said and covering much in few words, without unnecessary words. __________ is a word used to describe a short, often funny story, especially about something someone has done; usually narrated or shared orally. When you are drawing conclusions about something based on how it seems and not on proof, it is called a __________. To __________ something is to control or limit something that is not wanted; this word is most commonly used in describing the reduction or limitation of negative forces such as violence, corruption, or pollution. Do you know which one is a synonym for LIGHT? - cancel out, make ineffective or invalid, deny - restrain, keep within close bounds, confine - reckon, make a mathematical calculation - randomly chosen, determined by chance or impulse, and not by reason or principle - put force upon, force, constrain, compel, put in motion or action by violence
The Montessori environment integrates all aspects the child needs to be successful. Mastery over the environment begins when the child becomes aware of their actions in and on the environment; for some, this may be their first experience outside of the home. The classroom supports all of what's to come and is the physical, psychological, and social foundation for growth. The materials in the classroom become the basis of the child's activities, giving them an opportunity for movement that is directed by the mind with purpose, resulting in concentration, independence, and control and coordination of movement. The child may not do the work as well as an adult, but the work itself gives satisfaction to the child. The process is the most important aspect of the work, not the product. When you observe in a Montessori classroom you might not understand what's happening at first glance. Children are walking around and working with everything from bead stringing (which builds hand-eye coordination) to memorizing math facts (one of the first abstract experiences with math). Here are some things to keep in mind when observing a Montessori classroom: The children are working toward independence, concentration, and coordination, and all of the materials meet those needs, depending on where the children are developmentally. The materials are all placed on the shelves from left to right and top to bottom (an indirect preparation for reading and writing) and from simple (few steps, not easy) to complex (many steps). The materials are self-correcting. For example, if someone is scrubbing a table and there is water everywhere, the child learns not to use so much water next time; or if they are counting a number of objects and get to the end and don't have enough or have too many, they learn something was miscounted. During your visit you may notice: The activities are self-chosen. On occasion, a child might need some ideas about what they might like to practice next, but they are never forced. How the children interact with each other. The value of the multiage classroom really shines through with the beautiful things they say to each other or how they assist one another. The adults sitting and observing, letting the children figure out their own problems. When offering support, do the adults give answers or simply guide them to find the answers for themselves? Observing can be tricky in a Montessori classroom. Our goal as adults is to be a "fly on the wall." But the children are curious when we have visitors and may gather around you to ask, "Who are you? Why are you here?" They are adorable, and it's really hard to ignore them. It's OK to say "hello, my name is ______." But after that, your line is: "I'm here to observe your classroom, so I'd love to see what you are going to choose next." Lastly, please remember that you are only seeing a very small part of our day. Children's moods change, and every day and every hour is different. It's what makes my job so fun!
Buses are connection systems for electronic and electrical components. In terms of topology, a bus is always a physical medium to which the individual components are connected and which is terminated at both ends. Transmission on a bus can be byte-serial or byte-parallel, as with the PC bus, or bit-serial, as with networks in bus topology or with the fieldbus. - Bus topology in networks is a serially operating bus over which the data is transmitted serially along with additional control information. The best known and most widely used network bus concept is Ethernet. - In automation, production engineering and automotive technology, the so-called fieldbuses connect field devices such as sensors, actuators, control devices and host computers with each other. Typical fieldbuses are Profibus, CAN bus, Interbus and MOST. The European Installation Bus (EIB) also belongs to the group of fieldbuses. - A PC bus is the interconnection system consisting of several parallel lines through which all computer components such as the central processing unit (CPU), the I/O controller, main memory, hard disks, etc. can communicate with each other. It is the interface for bit-parallel transmission of data between the microprocessor, memory, graphics cards, communication devices and interfaces for peripheral devices. Address lines, data lines, control lines and supply lines are routed via a bus. The width of the data bus is determined by the concept and the microprocessor used. Bus concepts used in personal computers (PCs) include the ISA bus, the PCI bus and the AGP (Accelerated Graphics Port) bus. There are two fundamentally different bus systems in personal computers: the system bus and the I/O bus.
A new measurement moves scientists closer to revamping how we keep time. After scientists redefined the unit of mass, the kilogram, in 2019, they set their sights on overhauling the fundamental unit of time, the second (SN: 5/20/19). Now, comparisons between three atomic clocks mark an important step toward that goal. Since the 1960s, the second has been defined by atomic clocks made of cesium atoms, which absorb and emit light at a particular frequency that determines the length of a second. But “there have been a lot of improvements in atomic clocks since then,” says physicist David Hume of the National Institute of Standards and Technology in Boulder, Colo. Improved timepieces called optical atomic clocks (SN: 2/21/14) could be used to more precisely define the second. But first, scientists must ensure they fully understand the new clocks, for example by comparing the frequencies of light from different timepieces. Now, scientists with the Boulder Atomic Clock Optical Network, or BACON, have made such comparisons, measuring the ratios of frequencies of three atomic clocks, one made of ytterbium atoms, one of strontium atoms and one made with a single electrically charged aluminum atom (SN: 10/5/17). The results are the most precise clock comparisons yet, with uncertainties less than a quadrillionth of a percent, the researchers report in the March 25 Nature. Because the three clocks were in different locations — two at NIST and the other 1.5 kilometers away at the research institute JILA — the team compared the clocks by sending information across an optical fiber and through an open-air link. This ability to compare distant optical atomic clocks is a step toward clock networks that could be used to make precise measurements such as characterizing Earth’s gravity and testing fundamental physics.
Before the American Declaration of Independence, the moral legitimacy of governments was based on a concept called the divine right of kings: God granted power to kings and governments, and the government was free to use people as it wanted. The American Revolution was based on the radical and revolutionary idea that government's legitimacy was derived not from a king but from the people. The American founders boldly proclaimed that the people had rights, and that it was the government's power that should be limited. As Jefferson said, "where the people fear the government there is tyranny—where the government fears the people there is liberty." The 56 signers of the Declaration had committed treason and would be hunted down, drawn, and quartered. This is why John Hancock's signature is so big: he wanted the British to have no doubt about his commitment. The signers were among the richest and most privileged men in the colonies, yet they risked ALL their material wealth for the cause of freedom. Five of them were captured and tortured by the British. Nine were killed in the war. Twelve had their homes destroyed. The majority died broke because they had pledged their lives, fortunes, and sacred honor for the cause of freedom. The victory of Washington's army over the British was nothing short of miraculous. It would be like the St. Olaf baseball team beating the Minnesota Twins. An act of divine providence.
Open Educational Practices/Wikis This lesson introduces creating and editing open educational resources using wikis. Objectives and Skills Objectives and skills for this lesson include: - Edit and create open content - Wikipedia: Wikipedia:Education program - Wikibooks: Using Wikibooks/Class Project Guidelines - Wikiversity: Wikiversity teachers - YouTube: Wikipedia in Education 1 of 12 Why do you teach Wikipedia? - YouTube: Wikipedia in Education 4 of 12 Assignments - YouTube: Wikipedia in Education 8 of 12 Work with the Wikipedia community - YouTube: Wikipedia Book Creator - YouTube: Wikibooks Overview - Edit and create open content. - Select a Wikipedia article, Wikibooks chapter, or Wikiversity lesson for something in which you are a content expert. Review the article, chapter, or lesson and identify opportunities for improvement. - Review the resource's talk page and edit history to understand current context and any open issues related to the resource. - Review Wikipedia:Wikipedia:VisualEditor. Enable visual editing on your selected platform (Wikipedia, Wikibooks, or Wikiversity) before continuing. On Wikiversity, the options to enable visual editing are at Special:Preferences#mw-prefsection-betafeatures. - Enhance the wiki resource, being careful to include references for any added content. If appropriate, engage in discussion on the resource’s Talk: page regarding any open issues which you are comfortable in addressing. - Access the Piazza web service at Piazza: Open Educational Practices to join the course discussion forums and review existing posts. Create a new post or respond to existing posts to address one or more of the following questions: - Which platform and resource did you choose to edit, and why? What enhancements did you make? - What difficulties did you encounter in editing wiki content? Were your contributions accepted or were they rejected? - Did you engage in discussion with other wiki editors on-wiki? Were your discussions productive? - What concerns do you have in using wikis for student assignments? How might these concerns be addressed? - Edit this page. - Review Wikiversity:Be bold. Wikis only work if people are bold. - Review your notes of new concepts or key terms from this lesson and compare them to the Lesson Summary and Key Terms listed below. - Be bold by improving this course wiki page using the Edit tab. For the Lesson Summary and Key Terms, include references for any content you add. If the Lesson Summary and Key Terms sections seem complete to you, review the Readings and Multimedia links for opportunities for improvement. But note, improving a wiki does not always mean adding to the wiki. Consider how much content you, yourself, are willing to view. Add, edit, update, delete, replace with links to better resources, etc. Your guide should always be to leave the wiki better than you found it. - Reflect on open educational practices. - Reflect on what you learned in this introduction to creating and editing open educational resources using wikis. What surprised you? What have you learned so far that you can apply to your own learning environment(s)? Post your reflection in the Piazza discussion forum, sharing it with either the entire class or one or more of the available discussion groups. - Review other reflection posts and respond to at least two that interest you. Post any questions you have that you would like others to address. 
Additional items will be contributed by course participants - Participants will edit and create open content within areas - Wikipedia articles, Wikibooks chapters, or Wikiversity lessons - where they are content experts. - Wikipedia welcomes student editors through the Wikipedia Education Program and offers instructor training through the Wiki Education Dashboard. - Wikibooks welcomes group collaborative projects writing textbooks and manuals. It also allows readers to participate through a discussion feature. - Wikiversity is a learning environment where both teachers and students can simultaneously contribute to knowledge creation. It welcomes learning projects and communities around existing and new materials. - A note about interaction between these areas: Wikibooks can be created based on pages found on Wikipedia that the user has chosen to include in the book. Once the book is created, its "content can be copied, modified, and redistributed if and only if the copied version is made available on the same terms to others and acknowledgement of the work used is included." Additional items will be contributed by course participants - citation - A citation is a reference to a published or unpublished source and is used to uphold intellectual honesty (avoiding plagiarism). - citation needed - The tag is used by Wikipedia editors to request verification of claims on the site. This is a strategy to increase the reliability of the resource. - copyright - A form of intellectual property that grants the creator of an original creative work an exclusive legal right to determine whether and under what conditions this original work may be copied and used by others. - five pillars - The fundamental principles of Wikipedia may be summarized in five "pillars". - wiki - A knowledge base website on which users collaboratively modify content and structure directly from the web browser, where text is typically written using a simplified markup language and often edited with the help of a rich-text editor. - Wikipedia - An open online encyclopedia that relies on open collaboration. - Wikiversity - A site for the creation and use of free learning materials and activities. - Wikipedia:Wikipedia:Education program - WikiEdu: Instructor Orientation Modules - Wikibooks:Using Wikibooks/Class Project Guidelines - WikiEducator: Communication and Interaction - "Wikibooks:Copyrights - Wikibooks, open books for an open world". en.wikibooks.org. Retrieved 2019-06-20. - Wikipedia: Citation - "Wikipedia:Citation needed" (in en). Wikipedia. 2019-06-08. https://en.wikipedia.org/w/index.php?title=Wikipedia:Citation_needed&oldid=900952801. - Wikipedia: Copyright - Wikipedia: Five pillars - Wikipedia: Wiki - "Wikipedia" (in en). Wikipedia. 2019-06-20. https://en.wikipedia.org/w/index.php?title=Wikipedia&oldid=902607855. - "Introduction to Wikiversity - Wikiversity". en.wikiversity.org. Retrieved 2019-06-20.
A standard is a document that provides a set of agreed-upon rules, guidelines or characteristics for activities or their results. Standards establish accepted practices, technical requirements, and terminologies for diverse fields. Most standards aim to achieve an optimum degree of order in a given context. Because they are easy to recognize and reference, standards enable organizations to ensure that their products or services can be manufactured, implemented and sold around the world. Standards can be either voluntary or mandatory: - Standards are voluntary when organizations are not legally required to follow them. Organizations may choose to follow them to meet customer or industry demands. - Standards are mandatory when they are enforced by laws or regulations, often for health or safety reasons. A standard is distinct from an Act, a regulation or a code: - An Act is a statute that establishes control or directives based on legal authority. - A regulation is a statutory instrument made by exercising a legislative power conferred by an Act of Parliament. Regulations have binding legal effects. If a voluntary standard is referenced in a regulation, it becomes mandatory. - A code is broad in scope and is intended to carry the force of law when adopted by a provincial, territorial or municipal authority. A code may include any number of referenced standards. There are many types of standards: - Performance standards test products by simulating their performance under actual service conditions. - Prescriptive standards identify product characteristics, such as material thickness, type, and dimension. - Design standards identify specific design or technical characteristics of a product. - Management system standards define and establish an organization’s quality policy and objective. Service standards specify the requirements that are to be fulfilled by a service and establish its fitness for purpose. Service standards may be prepared in fields such as laundering, hotel-keeping, transportation, car-servicing, telecommunications, trading, and insurance and banking.
The importance of vaccinations has long been contested by those who claim they are unnecessary or unsafe. While reports of negative side effects for individuals vary, the impact of vaccinations on a community cannot be denied. When a large portion of a community is vaccinated, the chances of disease spreading among neighbors and family members are significantly diminished. This large-scale effort to immunize the community is regularly referred to as herd immunity. Here's why it's so important: Rejecting vaccines puts the "herd" in danger. In recent years, more and more parents have chosen to forego vaccinations for their children. The root of this hesitation can most likely be traced back to anecdotal evidence of vaccine-induced side effects such as autism. While this would be cause for worry, research suggests that vaccines are only likely to result in mild side effects like fatigue, headache and joint pain. Vaccines serve to protect against the spread of infectious and often dangerous diseases, and if enough people refuse the inoculations, their neighbors are unfortunately left vulnerable. It blocks the spread of disease. The basic reproduction number refers to how many people one infected person can spread a disease to. The more contagious the disease is, the more people need to get vaccinated to ensure safety. When vaccination rates fall, the number of new cases increases. Did you know that measles is one of the most contagious diseases and kills around 160,000 people globally each year? In order to be protected against infectious pathogens like this, at least 95 percent of the community needs to be vaccinated. It protects those who can't be vaccinated. The great thing about herd immunity is that not every single member of the community needs to be vaccinated in order to be protected. Children who are too young to be vaccinated, or individuals who can't be vaccinated due to age, health conditions or immune system problems, become "immune" because the disease is contained by the other 90 to 95 percent who have chosen to get vaccinated. Vaccines help you avoid additional medical costs. Getting vaccinated now will help you avoid crippling hospital bills, doctor visits and medications down the road. Besides being a personal burden, not getting vaccinated could have detrimental effects on your community and loved ones. In 2011, two hospitals in Arizona had to spend over $750,000 to treat a 14-person outbreak and contain the disease. This is just one example of how rejecting vaccinations can impact your community. To protect yourself, your family and your community, consider getting vaccinated. The side effects are minimal, and the positive effects can be enough to stop an entire population from falling ill and to prevent deaths. As with any medical decision, contact your health professional for more information on important vaccines and to determine which ones are right for you.
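The 95 percent figure for measles follows from the basic reproduction number via the standard herd-immunity threshold formula, threshold = 1 − 1/R0. A minimal sketch; the R0 values below are commonly cited approximate ranges, not figures from this article:

```python
# Herd-immunity threshold: the share of a population that must be immune
# so that each case infects, on average, fewer than one new person.
def herd_immunity_threshold(r0: float) -> float:
    return 1.0 - 1.0 / r0

# Commonly cited basic reproduction numbers (approximate midpoints).
diseases = {"measles": 15.0, "polio": 6.0, "seasonal influenza": 1.5}

for name, r0 in diseases.items():
    pct = herd_immunity_threshold(r0) * 100
    print(f"{name} (R0 ~ {r0}): about {pct:.0f}% of the community must be immune")
```

For measles, an R0 in the range of 12 to 18 yields thresholds around 92 to 94 percent, which is why public health targets for measles vaccination sit near 95 percent.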
On August 20, 2013 at 4:24 am EDT, the sun erupted with an Earth-directed coronal mass ejection or CME, a solar phenomenon which can send billions of tons of particles into space that can reach Earth one to three days later. These particles cannot travel through the atmosphere to harm humans on Earth, but they can affect electronic systems in satellites and on the ground. Experimental NASA research models, based on observations from NASA’s Solar Terrestrial Relations Observatory show that the CME left the sun at speeds of around 570 miles per second, which is a fairly typical speed for CMEs. Earth-directed CMEs can cause a space weather phenomenon called a geomagnetic storm, which occurs when they funnel energy into Earth's magnetic envelope, the magnetosphere, for an extended period of time. The CME’s magnetic fields peel back the outermost layers of Earth's fields changing their very shape. In the past, geomagnetic storms caused by CMEs of this strength have usually been mild. Magnetic storms can degrade communication signals and cause unexpected electrical surges in power grids. They also can cause aurora. NOAA's Space Weather Prediction Center (http://swpc.noaa.gov) is the U.S. government's official source for space weather forecasts, alerts, watches and warnings. Updates will be provided if needed.
School Bus Safety School buses transport our most cherished passengers: children. As parents and caregivers, we must remember to teach our children proper bus etiquette and proper behavior at the bus stop. As motorists, we must always be aware that there are children everywhere waiting for school buses. Remember, when you see the flashing red lights and the stop arm, you must stop and allow children to get on or off the bus. Here are some other tips to remember: - School buses are among the safest vehicles on the road. - Most school bus incidents happen off the bus, not on the bus. - If you have to cross the street in front of the bus, walk on the sidewalk or along the side of the road to a point at least five giant steps (10 feet) ahead of the bus before you cross. Be sure that the bus driver can see you, and you can see the bus driver. - When the bus approaches, stand at least five giant steps (10 feet) away from the curb, and line up away from the street. - School buses don't have seat belts in them because they have a built-in occupant protection system known as "compartmentalization," which is a system of seat height, seat length and padding, among other requirements. Compartmentalization is like an egg carton protecting a child.
Educate Parents. If you suspect your child is being bullied, it's natural to want to confront other parents. But we all know parents who completely deny their kid's behavior. Education for coaches and parents is important in preventing hurtful and harmful behavior. Identify Negative Behavior. Bullying isn't always obvious and can be quite subtle. "A team's star player may tease a new, talented teammate. While this might be subtle at first, other children may join in, simply because they want to fit in with their peers." What began as seemingly innocent name-calling escalates into a kid being targeted by the team. Recognizing negative behavior and stopping it before it starts is key to controlling bullying. Learn the Risk Factors. Anyone can be a bully. Anyone can be a victim. No one is immune, but studies show there are some common risk factors. Bullies tend to be negative and use aggression or intimidation to solve their problems. Did you know that bullying usually starts at home? Again we see children mimicking the behavior of their parents, as we have talked about before, which is why it's critical that parents set a positive example. Encourage Open Communication. Preventing bullying is hard, and many kids go along with a bully's behavior out of fear or a desire to fit in. Be proactive and put a stop to it before it gets out of hand. Kids who are bullied feel alone and may be scared or embarrassed to talk to anyone about what's going on. So encourage open and honest communication with your kids. The more you listen, really listen, the more they are likely to talk, and that's when you can teach them how to respond to bullying. Bullies need support from their peers, so bullying can stop before it starts if others refuse to participate. Ask your child's coach or the organization to implement zero-tolerance rules around unacceptable conduct, which can help children to stand up against bullies. Give Kids Better Options. "Coaches who teach kids to work together by rewarding them for positive performance in group-based activities can unify teams and reduce negative incidents. Coaches and parents can help children overcome bullying by working together to discourage ongoing teasing and establishing a culture of cooperation."
Tell Me About It! Writing Opinion Essays Students will be able to write short essays that include opinions and reasons that support them. - Distribute copies of the What is an Opinion sheet. - Ask the students to help you develop a definition for the word Opinion. - Have them write down the definition on their practice sheet and complete the second portion on their own. - Have students take out paper and pencils. Each student will now write a two-paragraph essay stating one of their opinions, as well as three reasons why they hold that opinion. - Distribute the Conclusion worksheets. Have students use them to summarize their stated opinions and reasons. Explicit Instruction/Teacher modeling (25 minutes) - The remaining two worksheets (My Topic and Connectors) will be used for group instruction. - Work as a group to understand the parts of a persuasive essay and establish expectations for the final writing assignment. - Have students select a topic to write on. - Hand out copies of the My Topic worksheet. - On the board, model how to state an opinion on the topic. You may want to choose a different topic to model so students do not copy your examples. - It is crucial that students understand how to give relevant reasons to support their opinion. Give some examples of reasons that do not support your opinion, and discuss with students why these reasons are not valid support for your opinion. - Demonstrate proficient supporting reasons for an opinion. - Discuss and check for student understanding. - Tell the students they will try this strategy on their My Topic worksheets. - Ask them to write down their topic, opinion, and three supporting reasons. - Have students volunteer to read their opinion and one of their reasons. - Discuss student answers and have students contribute to giving feedback. - Continue with this activity until most students seem to have a clear understanding of how to write a supporting reason. - Introduce the connector words "because," "since," and "for example." - Demonstrate how the connector words can be used in a sentence to connect supporting details to the opinion. - Tell students they will be practicing this strategy with the next practice sheet. Guided practice (20 minutes) - Distribute copies of the Connector worksheets. - The worksheet will provide a structure for students to create connections between their opinions and their supporting reasons. - Review the procedure they will follow on their worksheets. - Once students have finished working, have them return to their Conclusion worksheets. - Have them use the backs of the sheets to write several concluding sentences and summarize their opinions. - Have each student work with a partner and share feedback. - Walk around and monitor student progress and quality of work. Independent working time (25 minutes) - Have students use their worksheets to create a two-paragraph opinion essay. - The first paragraph will introduce their topic and opinion with supporting reasons. - Remind students to use the statements they created with connector words. - The second paragraph will consist of their conclusion sentences. - Enrichment: Challenge advanced students by asking them to write three or more paragraphs. - Support: Work with struggling students in a small group to monitor their progress and guide them through the writing process. - Collect and review students' final essays to assess their mastery of the process of opinion writing.
Review and closing (15 minutes) - Ask for student volunteers to read their writing to the class. - Discuss the writing and point out successful usage of the strategies you went over.
Center of gravity
The center of gravity of a body is that point through which the resultant of the system of parallel forces formed by the weights of all the particles constituting the body passes, for all positions of the body. It is denoted as "C.G." or "G". In a uniform gravitational field the center of gravity is identical to the center of mass. Every body is attracted by gravity towards the center of the earth. This force of attraction is proportional to the mass of the body and directed towards the center of the earth; it is known as the weight of the body. For bodies that are small relative to the earth, the constituent parts of the body can be assumed to be at equal distances from the center of the earth, and therefore the forces on those parts can be assumed to be parallel to each other (in fact, parts further from the center of the earth experience slightly less gravitational pull than those closer to it: the pull is greater at sea level than on Mount Everest; also, the directions of pull are always very slightly non-parallel). The resultant of all these parallel forces is the total weight of the body. This resultant passes through a single point for all positions of the body; that point is called the center of gravity. Geometrical shapes such as the circle, triangle, and rectangle are plane figures having only two dimensions: they have area but no mass. The center of gravity of these plane figures is called the centroid or geometrical center. The method of finding the centroid of a plane figure is the same as that of finding the center of gravity of a body. If the figure is assumed to have uniform mass per unit area, then the centroid is also the center of gravity in a uniform gravitational field.
Methods to calculate center of gravity - By geometrical consideration - By moments - By graphical method
The first two methods are generally used to find the center of gravity or centroid, as the third method can become tedious.
Center of gravity by geometrical consideration - The center of gravity of a circle is its center. - The center of gravity of a square, rectangle or parallelogram is the point where its diagonals meet, which is also the midpoint of both the length and the width. - The center of gravity of a triangle is the point where the medians of the triangle meet. - The center of gravity of a right circular cone of height h is at a distance of h/4 from its base. - The center of gravity of a hemisphere of radius r is at a distance of 3r/8 from its base. - The center of gravity of a segment of height h of a sphere of radius r is at a perpendicular distance of 3(2r − h)²/(4(3r − h)) from the center of the sphere. - The center of gravity of a semicircle of radius r is at a perpendicular distance of 4r/(3π) from its center. - The center of gravity of a trapezium with parallel sides a and b and height h is at a distance of (h/3)·((b + 2a)/(b + a)) measured from the base b. - The center of gravity of a cube of side L is at a distance of L/2 from every face. - The center of gravity of a sphere of diameter d is at a distance of d/2 from every point on its surface.
Center of gravity by moments
Consider a body of mass M whose center of gravity is required to be found. Let g be the acceleration due to gravity; then the weight of the body is Mg. Divide the body into small particles of masses m1, m2, m3, …, whose centers of gravity are known, as shown in the figure.
Let their weights be m1g, m2g, m3g, …, and let (X1, Y1), (X2, Y2), (X3, Y3), … be the coordinates of their centers of gravity from a fixed point O. Let G be the center of gravity of the whole body, and let (X̄, Ȳ) be the coordinates of G from O. From the principle of moments (taking moments about the Y axis), we know that
Mg·X̄ = m1gX1 + m2gX2 + m3gX3 + …
Mg·X̄ = g(m1X1 + m2X2 + m3X3 + …)
M·X̄ = m1X1 + m2X2 + m3X3 + …
but M = m1 + m2 + m3 + …, so
X̄ = (m1X1 + m2X2 + m3X3 + …)/M
and similarly Ȳ = (m1Y1 + m2Y2 + m3Y3 + …)/M.
The center of gravity of a body is always calculated with reference to some assumed axis known as the axis of reference. The axis of reference for plane figures (laminas) is usually taken as the lowest line touching the lamina which is parallel to the horizontal X axis, for calculating Ȳ, the vertical distance of the center of gravity from this axis. Similarly, the line touching the left outermost edge which is parallel to the vertical Y axis is usually used for calculating X̄, the horizontal distance of the center of gravity from that axis. Plane geometrical figures such as T-sections, I-sections, L-sections, etc., have only areas but no mass. The centroid (center of area) of these figures is found in the same way as the center of gravity of solid bodies; the centroid will also be the center of gravity if the lamina has uniform mass per unit area. Consider a lamina as in the figure above, and let its area be A. Divide the lamina into elemental areas a1, a2, a3, …, and let (X1, Y1), (X2, Y2), (X3, Y3), … be the coordinates of their centers of area from the reference axes. Let G be the centroid of the whole lamina, with coordinates (X̄, Ȳ) (the centroidal distances) from the reference axes Y-Y and X-X. From the principle of moments, we know that
A·X̄ = a1X1 + a2X2 + a3X3 + …
but A = a1 + a2 + a3 + …, so
X̄ = (a1X1 + a2X2 + a3X3 + …)/A
and similarly Ȳ = (a1Y1 + a2Y2 + a3Y3 + …)/A.
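As a quick check on the moment formulas above, here is a minimal sketch in Python that computes a composite centroid by summing area moments. The L-shaped lamina and its dimensions are illustrative assumptions, not taken from the article:

```python
def centroid(parts):
    """Composite centroid by the method of moments.

    parts: list of (area, x, y) tuples, where (x, y) is the centroid of each
    simple part measured from the reference axes Y-Y and X-X.
    """
    A = sum(a for a, _, _ in parts)                # total area: A = a1 + a2 + ...
    x_bar = sum(a * x for a, x, _ in parts) / A    # X = (a1X1 + a2X2 + ...)/A
    y_bar = sum(a * y for a, _, y in parts) / A    # Y = (a1Y1 + a2Y2 + ...)/A
    return x_bar, y_bar

# Hypothetical L-section: a 2 x 8 vertical leg plus a 6 x 2 horizontal flange,
# measured from the lower-left corner (the usual reference axes).
parts = [
    (2 * 8, 1.0, 4.0),  # leg: area 16, centroid at (1, 4)
    (6 * 2, 5.0, 1.0),  # flange: area 12, centroid at (5, 1)
]
print(centroid(parts))  # (2.714..., 2.714...)
```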
Globalization and Protectionism By the end of this section, you will be able to: - Explain protectionism and its three main forms - Analyze protectionism through concepts of demand and supply, noting its effects on equilibrium - Calculate the effects of trade barriers When a government legislates policies to reduce or block international trade, it is engaging in protectionism. Protectionist policies often seek to shield domestic producers and domestic workers from foreign competition. Protectionism takes three main forms: tariffs, import quotas, and nontariff barriers. Recall from International Trade that tariffs are taxes that governments impose on imported goods and services. This makes imports more expensive for consumers, discouraging imports. For example, in recent years large, flat-screen televisions imported to the U.S. from China have faced a 5% tariff rate. Another way to control trade is through import quotas, which are numerical limitations on the quantity of products that a country can import. For instance, during the early 1980s, the Reagan Administration imposed a quota on the import of Japanese automobiles. In the 1970s, many developed countries, including the United States, found themselves with declining textile industries. Textile production does not require highly skilled workers, so producers were able to set up lower-cost factories in developing countries. In order to “manage” this loss of jobs and income, the developed countries established an international Multifiber Agreement that essentially divided the market for textile exports between importers and the remaining domestic producers. The agreement, which ran from 1974 to 2004, specified the exact quota of textile imports that each developed country would accept from each low-income country. A similar story exists for sugar imports into the United States, which are still governed by quotas. Nontariff barriers are all the other ways that a nation can draw up rules, regulations, inspections, and paperwork to make it more costly or difficult to import products. A rule requiring certain safety standards can limit imports just as effectively as high tariffs or low import quotas, for instance. There are also nontariff barriers in the form of “rules-of-origin” regulations; these rules describe the “Made in Country X” label as the one in which the last substantial change in the product took place. A manufacturer wishing to evade import restrictions may try to change the production process so that the last big change in the product happens in his or her own country. For example, certain textiles are made in the United States, shipped to other countries, combined with textiles made in those other countries to make apparel—and then re-exported back to the United States for a final assembly, to escape paying tariffs or to obtain a “Made in the USA” label. Despite import quotas, tariffs, and nontariff barriers, the share of apparel sold in the United States that is imported rose from about half in 1999 to about three-quarters today. The U.S. Bureau of Labor Statistics (BLS) estimated that the number of U.S. jobs in textiles and apparel fell from 666,360 in 2007 to 385,240 in 2012, a 42% decline. Even more U.S. textile industry jobs would have been lost without tariffs. However, domestic jobs that are saved by import quotas come at a cost. Because textile and apparel protectionism adds to the costs of imports, consumers end up paying billions of dollars more for clothing each year.
When the United States eliminates trade barriers in one area, consumers spend the money they save on that product elsewhere in the economy. Thus, while eliminating trade barriers in one sector of the economy will likely result in some job loss in that sector, consumers will spend the resulting savings in other sectors of the economy and hence increase the number of jobs in those other sectors. Of course, workers in some of the poorest countries of the world who would otherwise have jobs producing textiles would gain considerably if the United States reduced its barriers to trade in textiles. That said, there are good reasons to be wary about reducing barriers to trade. The 2012 and 2013 Bangladeshi fires in textile factories, which resulted in a horrific loss of life, present complications that our simplified analysis in the chapter will not capture. Realizing the compromises between nations that come about due to trade policy, many countries came together in 1947 to form the General Agreement on Tariffs and Trade (GATT). (We’ll cover the GATT in more detail later in the chapter.) This agreement has since been superseded by the World Trade Organization (WTO), whose membership includes about 150 nations and most of the world’s economies. It is the primary international mechanism through which nations negotiate their trade rules—including rules about tariffs, quotas, and nontariff barriers. The next section examines the results of such protectionism and develops a simple model to show the impact of trade policy. Demand and Supply Analysis of Protectionism To the non-economist, restricting imports may appear to be nothing more than taking sales from foreign producers and giving them to domestic producers. Other factors are at work, however, because firms do not operate in a vacuum. Instead, firms sell their products either to consumers or to other firms (if they are business suppliers), who are also affected by the trade barriers. A demand and supply analysis of protectionism shows that it is not just a matter of domestic gains and foreign losses, but a policy that imposes substantial domestic costs as well. Consider two countries, Brazil and the United States, that produce sugar. Each country has a domestic supply and demand for sugar, as [link] details and [link] illustrates. In Brazil, without trade, the equilibrium price of sugar is 12 cents per pound and the equilibrium output is 30 tons. When there is no trade in the United States, the equilibrium price of sugar is 24 cents per pound and the equilibrium quantity is 80 tons. We label these equilibrium points as point E in each part of the figure. |Price|Brazil: Quantity Supplied (tons)|Brazil: Quantity Demanded (tons)|U.S.: Quantity Supplied (tons)|U.S.: Quantity Demanded (tons)| If international trade between Brazil and the United States now becomes possible, profit-seeking firms will spot an opportunity: buy sugar cheaply in Brazil, and sell it at a higher price in the United States. As sugar is shipped from Brazil to the United States, the quantity of sugar produced in Brazil will be greater than Brazilian consumption (with the extra production exported), and the amount produced in the United States will be less than the amount of U.S. consumption (with the extra consumption imported). Exports to the United States will reduce the sugar supply in Brazil, raising its price. Imports into the United States will increase the sugar supply, lowering its price.
When the sugar price is the same in both countries, there is no incentive to trade further. As [link] shows, the equilibrium with trade occurs at a price of 16 cents per pound. At that price, the sugar farmers of Brazil supply a quantity of 40 tons, while the consumers of Brazil buy only 25 tons. The extra 15 tons of sugar production, shown by the horizontal gap between the demand curve and the supply curve in Brazil, is exported to the United States. In the United States, at a price of 16 cents, the farmers produce a quantity of 72 tons and consumers demand a quantity of 87 tons. The excess demand of 15 tons by American consumers, shown by the horizontal gap between demand and domestic supply at the price of 16 cents, is supplied by imported sugar. Free trade typically results in income distribution effects, but the key is to recognize the overall gains from trade, as [link] shows. Building on the concepts that we outlined in Demand and Supply and Demand, Supply, and Efficiency in terms of consumer and producer surplus, [link] (a) shows that producers in Brazil gain by selling more sugar at a higher price, while [link] (b) shows consumers in the United States benefit from the lower price and greater availability of sugar. Consumers in Brazil are worse off (compare their no-trade consumer surplus with the free-trade consumer surplus) and U.S. producers of sugar are worse off. There are gains from trade—an increase in social surplus in each country. That is, both the United States and Brazil are better off than they would be without trade. The following Clear It Up feature explains how trade policy can influence low-income countries. clear it up Why are there low-income countries? Why are the poor countries of the world poor? There are a number of reasons, but one of them will surprise you: the trade policies of the high-income countries. Following is a stark review of social priorities that the international aid organization Oxfam International has widely publicized. High-income countries of the world—primarily the United States, Canada, countries of the European Union, and Japan—subsidize their domestic farmers collectively by about $360 billion per year. By contrast, the total amount of foreign aid from these same high-income countries to the poor countries of the world is about $70 billion per year, or less than 20% of the farm subsidies. Why does this matter? It matters because the support of farmers in high-income countries is devastating to the livelihoods of farmers in low-income countries. Even when their climate and land are well-suited to products like cotton, rice, sugar, or milk, farmers in low-income countries find it difficult to compete. Farm subsidies in the high-income countries cause farmers in those countries to increase the amount they produce. This increase in supply drives down world prices of farm products below the costs of production. As Michael Gerson of the Washington Post describes it: “[T]he effects in the cotton-growing regions of West Africa are dramatic . . . keep[ing] millions of Africans on the edge of malnutrition. In some of the poorest countries on Earth, cotton farmers are some of the poorest people, earning about a dollar a day. . . . Who benefits from the current system of subsidies? About 20,000 American cotton producers, with an average annual income of more than $125,000.” As if subsidies were not enough, often, the high-income countries block agricultural exports from low-income countries.
In some cases, the situation gets even worse when the governments of high-income countries, having bought and paid for an excess supply of farm products, give away those products in poor countries and drive local farmers out of business altogether. For example, shipments of excess milk from the European Union to Jamaica have caused great hardship for Jamaican dairy farmers. Shipments of excess rice from the United States to Haiti drove thousands of low-income rice farmers in Haiti out of business. The opportunity costs of protectionism are not paid just by domestic consumers, but also by foreign producers—and for many agricultural products, those foreign producers are the world’s poor. Now, let’s look at what happens with protectionism. U.S. sugar farmers are likely to argue that, if only they could be protected from sugar imported from Brazil, the United States would have higher domestic sugar production, more jobs in the sugar industry, and American sugar farmers would receive a higher price. If the United States government sets a high-enough tariff on imported sugar, or sets an import quota at zero, the result will be that the quantity of sugar traded between countries could be reduced to zero, and the prices in each country will return to the levels before trade was allowed. Blocking only some trade is also possible. Suppose that the United States passed a sugar import quota of seven tons. The United States will import no more than seven tons of sugar, which means that Brazil can export no more than seven tons of sugar to the United States. As a result, the price of sugar in the United States will be 20 cents, which is the price where the quantity demanded is seven tons greater than the domestic quantity supplied. Conversely, if Brazil can export only seven tons of sugar, then the price of sugar in Brazil will be 14 cents per pound, which is the price where the domestic quantity supplied in Brazil is seven tons greater than domestic demand. In general, when a country sets a low or medium tariff or import quota, the equilibrium price and quantity will be somewhere between those that prevail with no trade and those with completely free trade. The following Work It Out explores the impact of these trade barriers. work it out Effects of Trade Barriers Let’s look carefully at the effects of tariffs or quotas. If the U.S. government imposes a tariff or quota sufficient to eliminate trade with Brazil, two things occur: U.S. consumers pay a higher price and therefore buy a smaller quantity of sugar. U.S. producers obtain a higher price and they sell a larger quantity of sugar. We can measure the effects of a tariff on producers and consumers in the United States using two concepts that we developed in Demand, Supply, and Efficiency: consumer surplus and producer surplus. Step 1. Look at [link], which shows a hypothetical version of the demand and supply of sugar in the United States. Step 2. Note that when there is free trade the sugar market is in equilibrium at point A where Domestic Quantity Demanded (Qd) = Quantity Supplied (Domestic Qs + Imports from Brazil) at a price of PTrade. Step 3. Note, also, that imports are equal to the distance between points C and A. Step 4. Recall that consumer surplus is the value that consumers get beyond what they paid for when they buy a product. Graphically, it is the area under a demand curve but above the price. In this case, the consumer surplus in the United States is the area of the triangle formed by the points PTrade, A, and B. Step 5. 
Recall, also, that producer surplus is another name for profit—it is the income producers get above the cost of production, which is shown by the supply curve here. In this case, the producer surplus with trade is the area of the triangle formed by the points PTrade, C, and D. Step 6. Suppose that the barriers to trade are imposed, imports are excluded, and the price rises to PNoTrade. Look what happens to producer surplus and consumer surplus. At the higher price, the domestic quantity supplied increases from Qs to Q at point E. Because producers are selling more quantity at a higher price, the producer surplus increases to the area of the triangle PNoTrade, E, and D. Step 7. Compare the areas of the two triangles and you will see the increase in the producer surplus. Step 8. Examine the consumer surplus. Consumers are now paying a higher price to get a lower quantity (Q instead of Qd). Their consumer surplus shrinks to the area of the triangle PNoTrade, E, and B. Step 9. Determine the net effect. The producer surplus increases by the area PTrade, C, E, PNoTrade. The loss of consumer surplus, however, is larger. It is the area PTrade, A, E, PNoTrade. In other words, consumers lose more than producers gain as a result of the trade barriers and the United States has a lower social surplus. Who Benefits and Who Pays? Using the demand and supply model, consider the impact of protectionism on producers and consumers in each of the two countries. For protected producers like U.S. sugar farmers, restricting imports is clearly positive. Without a need to face imported products, these producers are able to sell more, at a higher price. For consumers in the country with the protected good, in this case U.S. sugar consumers, restricting imports is clearly negative. They end up buying a lower quantity of the good and paying a higher price for what they do buy, compared to the equilibrium price and quantity with trade. The following Clear It Up feature considers why a country might outsource jobs even for a domestic product. clear it up Why are Life Savers, an American product, not made in America? In 1912, Clarence Crane invented Life Savers, the hard candy with the hole in the middle, in Cleveland, Ohio. Starting in the late 1960s and for 35 years afterward, a plant in Holland, Michigan produced 46 billion Life Savers a year, in 200 million rolls. However, in 2002, the Kraft Company announced that it would close the Michigan plant and move Life Saver production across the border to Montreal, Canada. One reason is that Canadian workers are paid slightly less, especially in healthcare and insurance costs that are not linked to employment there. Another main reason is that the United States government keeps the sugar price high for the benefit of sugar farmers, with a combination of a government price floor program and strict quotas on imported sugar. According to the Coalition for Sugar Reform, from 2009 to 2012, the price of refined sugar in the United States ranged from 64% to 92% higher than the world price. Life Saver production uses over 100 tons of sugar each day, because the candies are 95% sugar. A number of other candy companies have also reduced U.S. production and expanded foreign production. From 1997 to 2011, sugar-using industries eliminated some 127,000 jobs, or more than seven times the total employment in sugar production. While the candy industry is especially affected by the cost of sugar, the costs are spread more broadly. U.S.
consumers pay roughly $1 billion per year in higher food prices because of elevated sugar costs. Meanwhile, sugar producers in low-income countries are driven out of business. Because of the sugar subsidies to domestic producers and the quotas on imports, they cannot sell their output profitably, or at all, in the United States market. The fact that protectionism pushes up prices for consumers in the country enacting such protectionism is not always acknowledged openly, but it is not disputed. After all, if protectionism did not benefit domestic producers, there would not be much point in enacting such policies in the first place. Protectionism is simply a method of requiring consumers to subsidize producers. The subsidy is indirect, since consumers pay for it through higher prices, rather than a direct government subsidy paid with money collected from taxpayers. However, protectionism works like a subsidy, nonetheless. The American satirist Ambrose Bierce defined “tariff” this way in his 1911 book, The Devil’s Dictionary: “Tariff, n. A scale of taxes on imports, designed to protect the domestic producer against the greed of his consumer.” The effect of protectionism on producers and consumers in the foreign country is complex. When a government uses an import quota to impose partial protectionism, Brazilian sugar producers receive a lower price for the sugar they sell in Brazil—but a higher price for the sugar they are allowed to export to the United States. Notice that some of the burden of protectionism, paid by domestic consumers, ends up in the hands of foreign producers in this case. Brazilian sugar consumers seem to benefit from U.S. protectionism, because it reduces the price of sugar that they pay (compared to the free-trade situation). On the other hand, at least some of these Brazilian sugar consumers also work as sugar farmers, so protectionism reduces their incomes and jobs. Moreover, if trade between the countries vanishes, Brazilian consumers would miss out on better prices for imported goods—which do not appear in our single-market example of sugar protectionism. The effects of protectionism on foreign countries notwithstanding, protectionism requires domestic consumers of a product (consumers may include either households or other firms) to pay higher prices to benefit domestic producers of that product. In addition, when a country enacts protectionism, it loses the economic gains it would have been able to achieve through a combination of comparative advantage, specialized learning, and economies of scale, concepts that we discuss in International Trade. Key Concepts and Summary There are three tools for restricting the flow of trade: tariffs, import quotas, and nontariff barriers. When a country places limitations on imports from abroad, regardless of whether it uses tariffs, quotas, or nontariff barriers, it is said to be practicing protectionism. Protectionism will raise the price of the protected good in the domestic market, which causes domestic consumers to pay more, but domestic producers to earn more. Explain how a tariff reduction causes an increase in the equilibrium quantity of imports and a decrease in the equilibrium price. Hint: Consider the Work It Out “Effects of Trade Barriers.” This is the opposite case of the Work It Out feature. A reduced tariff is like a decrease in the cost of production, which is shown by a downward (or rightward) shift in the supply curve. 
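To make the chapter's sugar numbers concrete, here is a minimal sketch in Python. It assumes linear demand and supply curves fitted to the no-trade equilibria (12 cents/30 tons in Brazil, 24 cents/80 tons in the U.S.) and the free-trade point at 16 cents; the chapter's full schedules are not shown here, so the linear fit reproduces the 16-cent world price and 15-ton trade flow exactly, but gives the 7-ton quota prices only approximately (about 20.3 and 13.9 cents versus the rounded 20 and 14 cents in the text):

```python
# Linear fits (price p in cents per pound, quantities in tons) -- an
# illustrative assumption, since the full schedules are not given here.
def brazil_excess_supply(p):
    qs = 30 + 2.5 * (p - 12)    # Brazil supply through (12, 30) and (16, 40)
    qd = 30 - 1.25 * (p - 12)   # Brazil demand through (12, 30) and (16, 25)
    return qs - qd              # exports offered at price p

def us_excess_demand(p):
    qs = 80 + 1.0 * (p - 24)    # U.S. supply through (24, 80) and (16, 72)
    qd = 80 - 0.875 * (p - 24)  # U.S. demand through (24, 80) and (16, 87)
    return qd - qs              # imports wanted at price p

def world_price(lo=12.0, hi=24.0, tol=1e-9):
    """Bisect for the price where exports offered equal imports wanted."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if brazil_excess_supply(mid) > us_excess_demand(mid):
            hi = mid   # price too high: Brazil offers more than the U.S. wants
        else:
            lo = mid
    return (lo + hi) / 2

p = world_price()
print(f"free-trade price: {p:.2f} cents, trade flow: {brazil_excess_supply(p):.1f} tons")
# -> free-trade price: 16.00 cents, trade flow: 15.0 tons

# A 7-ton import quota: each country's price moves until only 7 tons cross.
quota = 7.0
us_p = 24 - quota / (0.875 + 1.0)   # solve us_excess_demand(p) == quota
br_p = 12 + quota / (2.5 + 1.25)    # solve brazil_excess_supply(p) == quota
print(f"U.S. price: {us_p:.1f} cents, Brazil price: {br_p:.1f} cents")
# -> U.S. price: 20.3 cents, Brazil price: 13.9 cents
```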
Explain how a subsidy on agricultural goods like sugar adversely affects the income of foreign producers of imported sugar. A subsidy is like a reduction in cost. This shifts the supply curve down (or to the right), driving the price of sugar down. If the subsidy is large enough, the price of sugar can fall below the cost of production faced by foreign producers, which means they will lose money on any sugar they produce and sell. Who does protectionism protect? From what does it protect them? Name and define three policy tools for enacting protectionism. How does protectionism affect the price of the protected good in the domestic market? Critical Thinking Questions Show graphically that for any tariff, there is an equivalent quota that would give the same result. What would be the difference, then, between the two types of trade barriers? Hint: It is not something you can see from the graph. From the Work It Out “Effects of Trade Barriers,” you can see that a tariff raises the price of imports. What is interesting is that the price rises by less than the amount of the tariff. Who pays the rest of the tariff amount? Can you show this graphically? Assume two countries, Thailand (T) and Japan (J), have one good: cameras. The demand (d) and supply (s) for cameras in Thailand and Japan are described by the following functions: P is the price measured in a common currency used in both countries, such as the Thai Baht. - Compute the equilibrium price (P) and quantities (Q) in each country without trade. - Now assume that free trade occurs. The free-trade price goes to 56.36 Baht. Who exports and imports cameras and in what quantities? Bureau of Labor Statistics. “Industries at a Glance.” Accessed December 31, 2013. http://www.bls.gov/iag/. Oxfam International. Accessed January 6, 2014. http://www.oxfam.org/. - import quotas - numerical limits on the quantity of products that a country can import - nontariff barriers - ways a nation can draw up rules, regulations, inspections, and paperwork to make it more costly or difficult to import products - protectionism - government policies to reduce or block imports - World Trade Organization (WTO) - organization that seeks to negotiate reductions in barriers to trade and to adjudicate complaints about violations of international trade policy; successor to the General Agreement on Tariffs and Trade (GATT)
Subsistence and Commercial Activities. Into the late twentieth century, the Dogrib relied on the game and fish of the land, increasingly supplemented by flour and lard from the trading post. Caribou were a major resource from September through March when the caribou retreated to the farther reaches of the barren grounds. Moose were taken year round. A large game kill was shared among all families in the local group. Contingent on its ten-year population cycle, the snowshoe hare was the major small game. With the introduction in the nineteenth century of commercial twine for gill nets, fish became an important resource. The Dogrib were drawn into the fur trade after the end of the eighteenth century and by the middle of the nineteenth century were committed to a dual economy of subsistence hunting, fishing, and snaring combined with the taking of fur animals (such as beaver, marten, fox) whose skins they traded for metal implements, guns, cloth, clothing, and so on. As Rae expanded in population and services after 1950, a few Dogrib, especially those who were bilingual, found employment as trading store clerks and janitors in government installations. Bush clearing and fire fighting are seasonal summer employments for men. In the 1980s, an Indian-operated fishing lodge for tourists was opened at the Dogrib bush hamlet of Lac la Martre. The dog was the only domestic animal aboriginally. Dogs did not become significant in transport until the nineteenth century, once firearms and twine for fish nets allowed families to provision a multidog team. Industrial Arts. The making of snowshoes, toboggans, and birchbark canoes by men and the processing of caribou and moose hides for clothing and footgear by women were aboriginal crafts vital to survival. Decorative art rested in the hands of the women, as adornment on apparel. Aboriginal porcupine quill decoration largely gave way to silk floss embroidery and beadwork in historic times. Containers of birchbark, of furred and unfurred hides, and of rawhide netting, often handsomely executed, were women's work as well. Trade. There was no consequential precontact trade between the Dogrib and neighboring Indian peoples. The fur trade was regularized in the early nineteenth century and remains the single dominant trade relation in Dogrib history. Division of Labor. Into recent times men were the hunters of the large game without which the people could not survive. Husband and wife might share the task of gill-net fishing, which became increasingly important after net twine was introduced. Women made dry meat and dry fish, processed hides for clothing and, sometimes aided by their husbands, the fur pelts for the fur trade. Rabbit snaring, firewood gathering, cooking, and other activities that could take place close to the hearth were ordinarily the responsibility of women. Especially in bush communities, all these tasks remain important economic activities. Land Tenure. There was no ownership of land by either individuals or groups aboriginally, and so it has remained to the present day. The resources of the land were open to all. Government-registered trap lines were never established among the Dogrib.
The extent, volume and carbon content of the world's tropical wetlands are not accurately known. Present estimates are based on disparate sources of varying quality from different regions. As wetlands are key regulators not only of the global carbon cycle but also of other biogeochemical cycles, better maps of wetlands are urgently needed. This report presents a set of novel approaches for mapping global tropical wetlands from a variety of image data obtained from satellite observations of Earth. Wetlands only occur in certain topographic positions, and where the climate system provides sufficient water. Combining a global digital elevation model with global climate data, a tropical global map of topographic wetness was created. Using global optical satellite images from a moderate resolution imaging spectroradiometer (MODIS), a second wetness index was developed. In contrast to previous satellite-based wetness indexes, the index attempts to remove the vegetation influence and focus on the soil surface wetness. From an annual time-series of MODIS images, the inundation cycle of the global tropics was captured. As wetlands are characterised by annual variations in inundation, an approach for classifying wetlands from a chrono-sequence of annual MODIS images was developed. In the chrono-sequence, only locations with similar climatic seasonality and within spatial proximity are classified based on any reference site. The wetness indexes and the chrono-sequence classification scheme are strong candidates for mapping the distribution of global tropical wetlands. Topic: mangroves, wetlands, climatic change Series: CIFOR Working Paper no. 103 Publisher: Center for International Forestry Research (CIFOR), Bogor, Indonesia Publication Year: 2012. Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.
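The report does not spell out its index formulation here, but a standard starting point for DEM-based wetness mapping is the topographic wetness index, TWI = ln(a / tan β), where a is the specific catchment area draining through a cell and β is the local slope. A minimal sketch in Python, assuming the catchment-area grid has already been produced by a flow-routing step such as D8 (the expensive part, not shown):

```python
import numpy as np

def topographic_wetness_index(dem, sca, cell_size):
    """TWI = ln(a / tan(beta)).

    dem:       2-D elevation grid (metres)
    sca:       specific catchment area per cell, assumed precomputed
               by a flow-routing algorithm (e.g. D8)
    cell_size: grid resolution (metres)
    """
    dzdy, dzdx = np.gradient(dem, cell_size)       # elevation gradients
    slope = np.arctan(np.hypot(dzdx, dzdy))        # slope angle beta (radians)
    tan_b = np.maximum(np.tan(slope), 1e-6)        # guard flat cells (tan 0 = 0)
    return np.log(np.maximum(sca, 1e-6) / tan_b)   # high TWI = wetter position

if __name__ == "__main__":
    dem = np.array([[3.0, 2.0, 1.0],
                    [3.0, 2.0, 1.0],
                    [3.0, 2.0, 1.0]])              # toy east-sloping hillside
    sca = np.full_like(dem, 100.0)                 # pretend uniform catchment area
    print(topographic_wetness_index(dem, sca, cell_size=30.0))
```

Flat, high-accumulation cells score highest, which is why a TWI layer combined with climatic water balance can flag candidate wetland areas.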
Five Different Types of Printed Circuit Boards
A printed circuit board (PCB) is a common component in many different electronic devices, such as computers, radars, and beepers. They are made from a variety of materials, with laminate, composite and fiberglass the most common. The type of circuit board also varies with the intended use. Let's take a look at five of the different types:
Single-sided – this is the most common circuit board and is built with a single layer of base material. The single layer is coated with a conductive material such as copper. It may also have a silk-screen coat or a protective solder mask on top of the copper layer. A great advantage of this type of PCB is the low production cost, and they are often used in mass-produced products.
Double-sided – this is much like the single-sided board, but has the conductive material on both sides. There are many holes in the board to make it easy to connect metal parts from the top to the bottom side. This type of circuit board increases operational flexibility and is a practical option for building denser circuit designs. This board is also relatively low-cost. However, it still isn't a practical option for the most complex circuits, and it cannot work with technology that reduces electromagnetic interference. Double-sided boards are typically used in amplifiers, power monitoring systems, and test equipment.
Multi-layer – the multi-layer circuit board is built with additional layers of conductive material. The large number of layers, which can reach thirty or more, means it is possible to create circuit designs with very high flexibility. The individual layers are separated by special insulating materials and substrate board. A great advantage of this type of board is its compact size, which helps save space and weight in a relatively small product. Multi-layer boards are also often used where a high-speed circuit is essential.
Flexible – this is a very versatile circuit board. It is not only built with a flexible layer, but is also available in single, double, or multi-layer versions. Flexible boards are a great option when it is necessary to save space and weight in a particular device, and they are valued for their high ductility and low mass. However, the flexible nature of the board can make them more difficult to use.
Rigid – the rigid circuit board is built with a solid, non-flexible material for its layers. Rigid boards are typically compact in size and able to handle complex circuit designs. In addition, the signal paths are easy to organize, and maintenance and repair are relatively straightforward.
Soot warming 'maybe bigger than greenhouse gases' - NASA Forget Copenhagen CO2 cuts, tune your diesel properly Researchers from NASA's Goddard Space Flight Centre, also the home of famous carbopocalypse doom-prophet James Hansen, have repeated earlier assertions that atmospheric soot may be as important as greenhouse gases in driving global warming. This could be good news for humanity, as atmospheric soot levels would be much easier to reduce. Filtering soot out of exhausts from diesel engines and coal burners is simple compared to removing and sequestering CO2, and as an added benefit the effects would be rapid: soot doesn't persist in the atmosphere for long periods the way greenhouse gases do, as it is washed out by rain or snow. However, many environmental campaigners would resist the idea of soot taking centre stage, fearing that this could lead to a reduced emphasis on greenhouse-gas emissions reductions. Ice running off the gutters of the 'roof of the world'. Earlier investigations including the effect of soot had focused on the Arctic, where Goddard scientists have previously suggested that "the impact of aerosols is just as strong as that of the greenhouse gases". Aerosols include soot, which tends to heat the atmosphere, plus sulphates and others which cool it. Unfortunately sulphates also cause acid rain, and clean-air regs in the US and Europe have seen them massively reduced - and the Arctic warm up. Now, Goddard researchers have carried out new investigations into the effects of sooty aerosols on the glaciers of the Himalayas - sometimes known as the planet's "Third Pole". Glacial melting in the Himalayas has received a lot of play in the greenly-inclined media lately against the background of the COP15 international climate talks underway in Copenhagen; it is widely felt that the mountain ice is disappearing much faster even than CO2-alarmist climate models predict, and that this is a reason to suggest that the Copenhagen talks - focused entirely on greenhouse-gas emissions - may be the last chance for humanity to save itself from a disastrous climate apocalypse. Forget about burping cows, airliners and green IT - just tune up your diesel engine and chip in towards modern stoves for everyone But according to NASA this week: The new research, by NASA’s William Lau and collaborators, reinforces with detailed numerical analysis what earlier studies suggest: that soot and dust contribute as much (or more) to atmospheric warming in the Himalayas as greenhouse gases. "We need to add another topic to the climate dialogue," says Lau. Hal Maring of NASA headquarters goes further, though he cautions that more field results from the "roof of the world" are necessary to validate Lau's modelling. "Even at this stage we should be compelled to take notice," says Maring. “Airborne particles have a much shorter atmospheric lifespan than greenhouse gases, so reducing particle emissions can have much more rapid impact on warming.” One of the most troublesome types of aerosol is "black carbon", dark particulate soot emitted when fuel is incompletely burned. Diesel engines are a particular villain here, but coal burning and primitive cooking are also big contributors. If the new research is right, huge reductions in warming are on offer from comparatively easy initiatives such as better stoves, more efficient modernised diesel engines and cleaner coal powerplants, boilers etc. 
These measures would also take effect much more quickly than comparatively difficult, expensive and unpopular cuts in emissions of greenhouse gases like CO2 or methane. Even Lau's Goddard colleague Dr James Hansen, who has spent the last several decades relentlessly bigging-up the greenhouse gas threat and pushing for emissions cuts, now admits that soot is a major issue - though he can't bear to suggest it might actually be bigger than greenhouse gases. "Black soot is probably responsible for as much as half of the glacial melt," he says. It seems that the assembled, warring delegates at the Copenhagen greenhouse-gas talks - trying to prevent global temperature rises in the next few decades, ie in the fairly short term - may be arguing over the wrong things. According to the latest NASA research the human race might achieve more by sorting out its soot emissions first, a thing which would be comparatively easy to do, and get to the much more difficult, unpopular and less effective greenhouse gas cuts afterwards. ®
Many computer programmers know numerous languages. The range of programming languages is broad, with some languages used in particular contexts and others more general-purpose. For instance, Java can implement applications for both the desktop and the Web. Programming languages also take different approaches to processing, so writing applications can involve different tasks and techniques depending on the language in use. There are a few basic benefits to understanding many programming languages that can boost success in any development career. Technology is in a constant state of change. From Web applications to desktop and mobile environments, the range of languages in use is always evolving. Programmers who continue to make a welcome contribution to the projects they work on are those developers who are willing to keep learning new skills, platforms, and languages. The more languages a developer learns, the easier it becomes to pick up new ones, so making this a routine part of your working life puts you in a good position for the future. When learning programming languages, developers typically discover something about how these languages are implemented within computing systems. This means that each time you learn a new language, you learn something more about the efficiency, performance and design aspects of programming in general. Many languages implement their structures in similar ways, so learning about general implementation concepts gives you the knowledge to program with efficiency in mind, whatever language you are using. Some programming languages are similar, but others take vastly different approaches to application processing. For instance, object-oriented languages, such as Java, divide application tasks among a set of objects with specific responsibilities. Languages are often categorized as high or low level. The higher level a language is, the more it abstracts away from the computing hardware. Procedural languages give the computer a series of explicit instructions to execute, whereas functional languages describe application behavior in terms of mathematical functions. Understanding these different programming language approaches gives you a wider range of choices in how you approach particular projects yourself, as the short sketch below illustrates.
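As a minimal illustration of the paradigms mentioned above, here is a toy example in Python, chosen only because it can mimic all three styles in a few lines:

```python
from functools import reduce

# Procedural style: a sequence of explicit instructions that mutate state.
def total_procedural(prices):
    total = 0
    for price in prices:
        total += price
    return total

# Functional style: behavior expressed as a mathematical fold, no mutation.
def total_functional(prices):
    return reduce(lambda acc, price: acc + price, prices, 0)

# Object-oriented style: the task is the responsibility of an object.
class Basket:
    def __init__(self):
        self._prices = []

    def add(self, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)

basket = Basket()
for p in (1.50, 2.25, 3.00):
    basket.add(p)
assert total_procedural([1.50, 2.25, 3.00]) == total_functional([1.50, 2.25, 3.00]) == basket.total()
```

The same result, three ways; which style is most natural depends on the language and the problem at hand.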
The helmet-crested lambeosaur Corythosaurus sported a bony head crest that probably served as a resonating chamber for making sounds. Credit: Michael Skrepnick. The ornate headgear worn by duck-billed dinosaurs millions of years ago was used to make eerie, bellowing calls, suggests a new study. The study also showed that as the dinosaurs matured into adults, their voices probably changed from high-pitched to deep, just like ours do (at least for guys). The researchers specifically looked at a subfamily of duck-billed dinosaurs (plant-eaters with long, flattened snouts) called lambeosaurs that sported flashy caps that would have put to shame any "Star Wars" hairdo. The caps enclosed nasal passages that looped through the head crest to form large air chambers before passing into the airway (throat). Past explanations for the wonky headgear have proposed that it was used to boost the dinosaurs' sense of smell, to regulate temperature or to allow sound to resonate for communication. The new project represents the first time scientists have pieced together both the structures of the crests and nasal passages, along with reconstructions of the brain, the researchers say. The result confirms one of the theories, that the head crests were used for vocal communication — not as supersized sniffers. The upshot is a picture of lambeosaurs shouting out to one another, wooing mates and warning one another of nearby enemies. And if the study results hold true, when a lambeosaur made calls, air would travel through the nasal passages enclosed by the head crest. Since the sizes and shapes of head crests (and nasal passages) differed among lambeosaurs, each genus had its own voice, and calls would also have sounded distinctive individual by individual, the researchers found. "Dinosaurs vocalized through their mouths, but because the nose connects to the mouth, the nasal passages act as resonance chambers," said researcher Lawrence Witmer of Ohio University’s College of Osteopathic Medicine. The results will be presented today by the researchers at a meeting of the Society for Vertebrate Paleontology in Cleveland, Ohio. In addition, the research will be detailed in a forthcoming issue of the journal Anatomical Record. Witmer, Ryan Ridgely, also of Ohio University’s College of Osteopathic Medicine, and their colleagues used computed tomography scans to peek inside the head crests and reconstruct the brains and nasal cavities of individuals from four genera of the lambeosaur subfamily: Parasaurolophus, Corythosaurus, Lambeosaurus and Hypacrosaurus. "The shape of the brain can tell us a lot about what senses were important in a dinosaur's everyday life, and give insight into the function of the crests," said study lead author David Evans, a paleontologist at the Royal Ontario Museum and the University of Toronto. Evans worked with Witmer and Ridgely on the research. In addition, Evans' team examined such systems in dinosaurs' closest living relatives, birds and crocodilians. They found that the brain region linked with all things olfactory was relatively small and primitive in the lambeosaurs, suggesting, the researchers say, that the dinosaurs' head crests did not evolve to improve smell. Instead, the researchers think the dinosaurs used the nasal passages within the crests to make bellowing sounds that could have been used to call for mates or warn others of predators. (The ornamented external appearances of the crests served as visual displays.)
When a lambeosaur did call out, the size and shape of its head crest would have modified the sound coming out. The same phenomenon happens for us, Witmer explained. When we get stuffy noses, our voices change. That's because our nasal passages act as sound resonators. "We have a sense that these animals used low frequency sound, so, very deep sounds that actually travel long distances and they may have been able to use those to communicate," Witmer said. The CT scans showed a delicate inner ear, supporting the idea that the dinosaurs could hear the low-frequency calls produced by the crest. If the lambeosaurs were in fact communicating with one another through vocal calls, the researchers suspected a well-developed brain could be at work to support such sophisticated behaviors. And that's what they found. The reconstructed brains showed relatively large cerebral hemispheres, which are linked with higher thought and problem-solving. "What it suggests is that they indeed did have the brain power to pull off some of these sophisticated behaviors," Witmer said, "that they probably did communicate in perhaps fairly subtle ways and they could make sense of it." By examining the headgear of both juvenile and adult dinosaurs, the researchers also saw evidence for new details about dinosaur development and breeding. As the crests got bigger and the animal matured, the dinosaurs' nasal passages became longer and more convoluted. "The idea is that as these animals grow they would actually be starting to, in a sense, develop the tools and the ornaments to enter into the breeding pool," Witmer said. "The shape and size of the crest provides visual information. The nasal passages on the inside actually probably relate to voice and vocal communication." Their voices may have changed like teenage boys' do as they go through puberty. "We could easily imagine that a little pipsqueak literally would've had a higher pitched voice," Witmer told LiveScience, "and as they got older [their voices] would become deeper." The variation in crests among species and among individuals of the same species suggests the dinosaurs may have produced subtly different bellows, Witmer said. And so, like us and other modern animals, a dinosaur's unique voice may have served as a distinguishing feature for relatives and members of another species.
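The study itself relies on detailed anatomical reconstruction, not shown here, but the basic physics behind "longer passages, deeper voice" can be sketched with the simplest resonator model: a tube closed at one end, whose fundamental frequency is f = v/(4L). The passage lengths below are hypothetical stand-ins, not measurements from the paper:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def fundamental_hz(passage_length_m):
    """Fundamental of a tube closed at one end: f = v / (4 L)."""
    return SPEED_OF_SOUND / (4.0 * passage_length_m)

# Hypothetical nasal-passage lengths, juvenile -> adult.
for length in (0.5, 1.5, 3.0):
    print(f"{length:.1f} m passage -> ~{fundamental_hz(length):.0f} Hz")
# 0.5 m -> ~172 Hz, 1.5 m -> ~57 Hz, 3.0 m -> ~29 Hz
```

Under this toy model, a growing crest that lengthens the nasal passage steadily lowers the fundamental, which matches the pipsqueak-to-baritone trajectory the researchers describe.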
Global warming and crops
Rising temperatures are likely to reduce crop yields. The costs of climate change are unclear, but Kansas farmers know the cost of drier summers, since several of the last few years have seen drought conditions in Kansas, letting South Dakota steal the wheat-producing crown at least once.
John R. Porter, Department of Agricultural Sciences, The Royal Veterinary and Agricultural University, Hoejbakkegaard Avenue, 2630 Taastrup, Denmark
Your News story about the Royal Society meeting on climate change and food production ("Hikes in surface ozone could suffocate crops" Nature 435, 7; 2005) noted that rising CO2 levels will generally benefit crop growth, as this stimulates photosynthesis in most crop plants. However, the links between climate change and food production are even more complex than your story suggests. Rising temperatures could extend the geographical distribution and growing season of some agricultural crops, such as pasture grasses, by allowing the threshold temperature for the start of growth to be reached sooner. This assumes that water and nutrients are supplied at a level that permits pasture crops to benefit from a longer growing season. But in general, and contrary to common perceptions, most crop physiologists expect global warming to reduce crop yields. This is because higher temperatures shorten the life cycle of most cereals, hastening senescence and reducing the length of the growing season. Other effects such as an increase in tropospheric ozone level can exacerbate crop senescence, as noted in your News story. The staple cereal crops can only tolerate narrow temperature ranges, which, if exceeded during the flowering phase, can damage fertile seed production and thus reduce yield. Global warming would also be expected to increase the frequency of exposure to extreme temperatures and thus damage crop fertility. So far, efforts to predict climate change effects on food production and quality have been fragmented. For major crops, except wheat and soybean, we lack the agronomic-scale experiments needed to understand and robustly predict the direct effects of CO2 and ozone, and their interactions with temperature and water. With an extra three billion people to feed during the coming 40 to 50 years, closer cooperation among crop physiology, crop agronomy and climate science would be a positive outcome of the Royal Society meeting.
I don't know if that's because of climate change. That's a statistical assessment that will require years of data. How many years of drought can Kansas tolerate in order to gather that data? This is called the "precautionary principle." We can't fully estimate the effects of climate change because agriculture and climate are both incredibly complex systems, and their interaction is no less complex. But we can see serious dangers in one direction, and none in the other. The precautionary principle says it's worth investing in a solution now, because by the time we have enough data for a full economic analysis it'll be too late to change and too expensive to help Kansas farmers. A more moderate approach is to make an assessment of the probability of various outcomes (p_i) and the cost of each outcome (c_i) and spend the sum of p_i*c_i across all i. That'd be a lot more than anyone is spending on alternative energy as it is. I'm not an economist, so I don't have those numbers at hand, but I can eyeball it.
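The expected-cost rule in that last comment is easy to make concrete. A minimal sketch with entirely made-up probabilities and costs (the commenter explicitly has no real numbers at hand, and neither do we):

```python
# Hypothetical outcomes: (description, probability p_i, cost c_i in dollars).
outcomes = [
    ("mild yield loss",         0.5,  2e9),
    ("severe regional drought", 0.3, 10e9),
    ("multi-year crop failure", 0.2, 50e9),
]

assert abs(sum(p for _, p, _ in outcomes) - 1.0) < 1e-9  # probabilities sum to 1

expected_cost = sum(p * c for _, p, c in outcomes)  # the sum of p_i * c_i over all i
print(f"expected cost: ${expected_cost / 1e9:.0f} billion per year")
# -> expected cost: $14 billion per year
```

Under the rule the commenter describes, that expected cost is the amount worth spending on mitigation today.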
The thyroid gland is a butterfly-shaped gland found in your neck, just in front of your larynx (voice-box). It produces a hormone which is carried in the bloodstream, and which is essential for the normal functioning of every single cell in your body. This hormone – called T4 – helps regulate energy, maintain body temperature and generally assists with the normal functioning of other organs such as muscles, brain and heart. Because of its various biological functions, disturbances of thyroid hormone can have wide-ranging effects. Problems can arise if the thyroid gland becomes more or less active than normal. An overactive gland is called hyperthyroidism, and an underactive gland is called hypothyroidism. There are a number of different causes for each of these conditions. WHAT BLOOD TESTS CAN TELL US Hormone production is regulated by the brain. Thyroid Stimulating Hormone (TSH) from the brain instructs the thyroid gland to produce thyroid hormone, T4, using dietary iodine. In the tissues where it is needed, T4 is converted into the more biologically active T3. When enough T3 and T4 are in circulation, TSH levels are reduced to normal. If your thyroid gland is overactive, there will be too much T3 and T4 in your blood, and only traces of TSH. If your thyroid is underactive, there will be very low levels of T3 and T4 in circulation, but very high levels of TSH as your brain tries to stimulate your sluggish gland to produce more hormone. TYPES OF DISORDERS Various conditions can affect the thyroid, but there are essentially only two outcomes: the gland becomes either over-active or under-active. Some autoimmune conditions can result in alternating over- and under-activity. In any thyroid disorder there may be little or no change in the appearance of the gland, but swelling with or without nodules may occur. An underactive thyroid will result in a general slowing down of all body processes. Symptoms could thus include fatigue, weight gain, constipation, fuzzy thinking, low blood pressure, fluid retention, depression, general body pain and slow reflexes. Destruction of the thyroid by surgery, radiation or your own antibodies (Hashimoto's disease) can cause hypothyroidism, which is treated by replacing the missing hormone. Autoimmune hypothyroidism (Hashimoto's disease) is best managed by a specialist. The symptoms of hyperthyroidism are due to an overstimulated metabolism resulting from excess thyroid hormone. Common symptoms include anxiety, insomnia, rapid weight loss, diarrhoea, high heart rate, high blood pressure, eye sensitivity/bulging and vision disturbances. Autoimmune hyperthyroidism (Graves' disease) is also often found. Excess hormone levels are reduced in several different ways, e.g.: • antithyroid drugs • radioactive iodine treatment (RAI or radioiodine ablation) • surgical removal of all or part of the thyroid, called thyroidectomy • symptom control using beta blocker medication Goiter – an enlarged thyroid – may cause a tender or tight feeling in the neck or throat, hoarseness or coughing, and difficulty swallowing or breathing. Symptoms caused by nodules vary with how biologically active they are: some cause no symptoms, while others may cause difficulty swallowing, a feeling of fullness, pain or pressure in the neck, a hoarse voice, or neck tenderness. Some overactive nodules trigger hyperthyroid-like symptoms such as palpitations, insomnia, weight loss, anxiety, and tremors.
HistoryGrowth of a State The Roman provinces of Pannonia and Dacia, conquered under Tiberius and Trajan (1st cent. A.D.), embraced part of what was to become Hungary. The Huns and later the Ostrogoths and the Avars settled there for brief periods. In the late 9th cent. the Magyars, a Finno-Ugric people from beyond the Urals, conquered all or most of Hungary and Transylvania. The semilegendary leader, Arpad, founded their first dynasty. The Magyars apparently merged with the earlier settlers, but they also continued to press westward until defeated by King (later Holy Roman Emperor) Otto I, at the Lechfeld (955). Halted in its expansion, the Hungarian state began to solidify. Its first king, St. Stephen (reigned 1001–38), completed the Christianization of the Magyars and built the authority of his crown—which has remained the symbol of national existence—on the strength of the Roman Catholic Church. Under Bela III (reigned 1172–1196), Hungary came into close contact with Western European, particularly French, culture. Through the favor of succeeding kings, a few very powerful nobles—the magnates—won ever-widening privileges at the expense of the lesser nobles, the peasants, and the towns. In 1222 the lesser nobles forced the extravagant Andrew II to grant the Golden Bull (the "Magna Carta of Hungary"), which limited the king's power to alienate his authority to the magnates and established the beginnings of a parliament. Under Andrew's son, Bela IV, the kingdom barely escaped annihilation: Mongol invaders, defeating Bela at Muhi (1241), occupied the country for a year, and Ottocar II of Bohemia also defeated Bela, who was further threatened by his own rebellious son Stephen V. Under Stephen's son, Ladislaus IV, Hungary fell into anarchy, and when the royal line of Arpad died out (1301) with Andrew III, the magnates seized the opportunity to increase their authority. In 1308, Charles Robert of Anjou was elected king of Hungary as Charles I, the first of the Angevin line. His autocratic rule checked the magnates somewhat and furthered the growth of the towns. Under his son, Louis I (Louis the Great), Hungary reached its greatest territorial extension, with power extending into Dalmatia, the Balkans, and Poland. After the death of Louis I, a series of foreign rulers succeeded: Sigismund (later Holy Roman Emperor), son-in-law of Louis; Albert II of Austria, son-in-law of Sigismund; and Ladislaus III of Poland (Uladislaus I of Hungary). During their reigns the Turks began to advance through the Balkans, defeating the Hungarians and their allies at Kosovo Field (1389), Nikopol (1396), and Varna (1444). John Hunyadi, acting after 1444 as regent for Albert II's son, Ladislaus V, gave Hungary a brief respite through his victory at Belgrade (1456). The reign of Hunyadi's son, Matthias Corvinus, elected king in 1458, was a glorious period in Hungarian history. Matthias maintained a splendid court at Buda, kept the magnates subject to royal authority, and improved the central administration. But under his successors Uladislaus II and Louis II, the nobles regained their power. Transylvania became virtually independent under the Zapolya family. The peasants, rising in revolt, were crushed (1514) by John Zapolya. Louis II was defeated and killed by the Turks under Sulayman the Magnificent in the battle of Mohács in 1526. The date is commonly taken to mark the beginning of Ottoman domination over Hungary. 
Ferdinand of Austria (later Emperor Ferdinand I), as brother-in-law of Louis II, claimed the Hungarian throne and was elected king by a faction of nobles, while another faction chose Zapolya as John I. In the long wars that followed, Hungary was split into three parts: the western section, where Ferdinand and his successor, Rudolf II, maintained a precarious rule, challenged by such Hungarian leaders as Stephen Bocskay and Gabriel Bethlen; the central plains, which were completely under Turkish domination; and Transylvania, ruled by noble families (see Báthory and Rákóczy).

The Protestant Reformation, supported by the nobles and well established in Transylvania, nearly succeeded throughout Hungary. Cardinal Pázmány was a leader of the Counter Reformation in Hungary. In 1557 religious freedom was proclaimed by the diet of Transylvania, and the principle of toleration was generally maintained throughout the following centuries.

Hungarian opposition to Austrian domination included such extreme efforts as the assistance Thököly gave to the Turks during the siege of Vienna (1683). Emperor Leopold I, however, through his able generals Prince Eugene of Savoy and Duke Charles V of Lorraine, soon regained his lost ground. Budapest was liberated from the Turks in 1686. In 1687, Hungarian nobles recognized the Hapsburg claim to the Hungarian throne. By the Peace of Karlowitz (1699), Turkey ceded to Austria most of Hungary proper and Transylvania. Transylvania continued to fight the Hapsburgs, but in 1711, with the defeat of Francis II Rákóczy (see under Rákóczy, family), Austrian control was definitively established. In 1718 the Austrians took the Banat from Turkey. The Austrians brought in Germans and Slavs to settle the newly freed territory, destroying Hungary's ethnic homogeneity.

Hapsburg rule was uneasy. The Hungarians were loyal to Maria Theresa in her wars, but many of the unpopular centralizing reforms of Joseph II, who had wanted to make German the sole language of administration and to abolish the Hungarian counties, had to be withdrawn.

In the second quarter of the 19th cent. a movement that combined Hungarian nationalism with constitutional liberalism gained strength. Among its leaders were Count Szechenyi, Louis Kossuth, Baron Eötvös, Sándor Petőfi, and Francis Deak. Inspired by the French Revolution of 1848, the Hungarian diet passed the March Laws (1848), which established a liberal constitutional monarchy for Hungary under the Hapsburgs. But the reforms did not deal with the national minorities problem. Several minority groups revolted, and, after Francis Joseph replaced Ferdinand I as emperor, the Austrians waged war against Hungary (Dec., 1848). In Apr., 1849, Kossuth declared Hungary an independent republic. Russian troops came to the aid of the emperor, and the republic collapsed. The Hungarian surrender at Vilagos (Aug., 1849) was followed by ruthless reprisals.

But after its defeat in the Austro-Prussian War (1866), Austria was obliged to compromise with Magyar national aspirations. The Ausgleich of 1867 (largely the work of Francis Deak) set up the Austro-Hungarian Monarchy, in which Austria and Hungary were nearly equal partners. Emperor Francis Joseph was crowned (1867) king of Hungary, which at that time also included Transylvania, Slovakia, Ruthenia, Croatia and Slavonia, and the Banat. The minorities problem persisted, the Serbs, Croats, and Romanians being particularly restive under Hungarian rule.
During this period industrialization began in Hungary, while the condition of the peasantry deteriorated to the profit of landowners. By a law of 1874 only about 6% of the population could vote. Until World War I, when republican and socialist agitation began to threaten the established order, Hungary was one of the most aristocratic countries in Europe.

As the military position of Austria-Hungary in World War I deteriorated, the situation in Hungary grew more unstable. Hungarian nationalists wanted independence and withdrawal from the war; the political left was inspired by the 1917 revolutions in Russia; and the minorities were receptive to the Allies' promises of self-determination. In Oct., 1918, Emperor Charles I (King of Hungary as Charles IV) appointed Count Michael Károlyi premier. Károlyi advocated independence and peace and was prepared to negotiate with the minorities. His cabinet included socialists and radicals. In November the emperor abdicated, and the Dual Monarchy collapsed. Károlyi proclaimed Hungary an independent republic. However, the minorities would not deal with him, and the Allies forced upon him very unfavorable armistice terms.

The government resigned, and the Communists under Béla Kun seized power (Mar., 1919). The subsequent Red terror was followed by a Romanian invasion and the defeat (July, 1919) of Kun's forces. After the Romanians withdrew, Admiral Horthy de Nagybanya established a government and in 1920 was made regent, since there was no king. Reactionaries, known as White terrorists, conducted a brutal campaign of terror against the Communists and anyone associated with Károlyi or Kun.

The Treaty of Trianon (see Trianon, Treaty of), signed in 1920, reduced the size and population of Hungary by about two thirds, depriving Hungary of valuable natural resources and removing virtually all non-Magyar areas, although Budapest retained a large German-speaking population. The next twenty-five years saw continual attempts by the Magyar government to recover the lost territories. Early endeavors were frustrated by the Little Entente and France, and Hungary turned to a friendship with Fascist Italy and, ultimately, to an alliance (1941) with Nazi Germany. The authoritarian domestic policies of the premiers Stephen Bethlen and Julius Gombos and their successors safeguarded the power of the upper classes, ignored the demand for meaningful land reform, and encouraged anti-Semitism.

Between 1938 and 1944, Hungary regained, with the aid of Germany and Italy, territories from Czechoslovakia, Yugoslavia, and Romania. It declared war on the USSR (June, 1941) and on the United States (Dec., 1941). When the Hungarian government took steps to withdraw from the war and protect its Jewish population, German troops occupied the country (Mar., 1944). The Germans were driven out by Soviet forces (Oct., 1944–Apr., 1945). The Soviet campaign caused much devastation.

National elections were held in 1945 (in which the Communist party received less than one fifth of the vote), and a republican constitution was adopted in 1946. The peace treaty signed at Paris in 1947 restored the bulk of the Trianon boundaries and required Hungary to pay $300 million in reparations to the USSR, Czechoslovakia, and Yugoslavia. A new coalition regime instituted long-needed land reforms. Early in 1948 the Communist party, through its control of the ministry of the interior, arrested leading politicians, forced the resignation of Premier Ferenc Nagy, and gained full control of the state.
Hungary was proclaimed a People's Republic in 1949, after parliamentary elections in which there was only a single slate of candidates. Radical purges in the national Communist party made it thoroughly subservient to that of the USSR. Industry was nationalized and land was collectivized. The trial of Cardinal Mindszenty aroused protest throughout the Western world.

By 1953 continuous purges of Communist leaders, constant economic difficulties, and peasant resentment of collectivization had led to profound crisis in Hungary. Premier Mátyás Rákosi, the Stalinist in control since 1948, was removed in July, 1953, and Imre Nagy became premier. He slowed down collectivization and emphasized production of consumer goods, but he was removed in 1955, and the emphasis on farm collectivization was restored. In 1955, Hungary joined the Warsaw Treaty Organization and was admitted to the United Nations.

On Oct. 23, 1956, a popular anti-Communist revolution, centered in Budapest, broke out in Hungary. A new coalition government under Imre Nagy declared Hungary neutral, withdrew it from the Warsaw Treaty, and appealed to the United Nations for aid. However, János Kádár, one of Nagy's ministers, formed a counter-government and asked the USSR for military support. Some 500,000 Soviet troops were sent to Hungary, and in severe and brutal fighting they suppressed the revolution. Nagy and some of his ministers were abducted and were later executed, and thousands of other Hungarians, many of them teenagers, were imprisoned or executed. In addition, about 190,000 refugees fled the country.

Kádár became premier and sought to win popular support for Communist rule and to improve Hungary's relations with Yugoslavia and other countries. He carried out a drastic purge (1962) of former Stalinists (including Mátyás Rákosi), accusing them of the harsh policies responsible for the 1956 revolt. Collectivization, which had been stopped after 1956, was again resumed in 1958–59. Kádár's regime gained a degree of popularity as it brought increasing liberalization to Hungarian political, cultural, and economic life. Economic reforms introduced in 1968 brought a measure of decentralization to the economy and allowed for supply and demand factors; Hungary achieved substantial improvements in its standard of living. Hungary aided the USSR in the invasion of Czechoslovakia in 1968. The departure (1971) of Cardinal Mindszenty from Budapest after 15 years of asylum in the U.S. legation and his removal (1974) from the position of primate of Hungary improved relations with the Catholic church. Due to Soviet criticism, many of the economic reforms were subverted during the mid-1970s only to be reinstituted at the end of the decade.

During the 1980s, Hungary increasingly turned to the West for trade and assistance in the modernization of its economic system. The economy continued to decline and the high foreign debt became unpayable. Premier Károly Grósz gave up the premiership in 1988, and in 1989 the Communist party congress voted to dissolve itself. That same year Hungary opened its borders with Austria, allowing thousands of East Germans to cross to the West. By 1990, a multiparty political system with free elections had been established; legislation was passed granting new political and economic reforms such as a free press, freedom of assembly, and the right to own a private business.
The new prime minister, József Antall, a member of the conservative Hungarian Democratic Forum who was elected in 1990, vowed to continue the drive toward a free-market economy. The Soviet military presence in Hungary ended in the summer of 1991 with the departure of the final Soviet troops. Meanwhile, the government embarked on the privatization of Hungary's state enterprises. Antall died in 1993 and was succeeded as prime minister by Péter Boross.

Parliamentary elections in 1994 returned the Socialists (former Communists) to power. They formed a coalition government with the liberal Free Democrats, and Socialist leader Gyula Horn became prime minister. Árpád Göncz was elected president of Hungary in 1990 and reelected in 1995. In 1998, Viktor Orbán of the conservative Fidesz–Hungarian Civic Union became prime minister as head of a coalition government. Hungary became a member of the North Atlantic Treaty Organization in 1999. Ferenc Mádl succeeded Göncz as president in Aug., 2000.

A 2001 law giving ethnic Hungarians in neighboring countries (but not worldwide) social and economic rights in Hungary was criticized by Romania and Slovakia as an unacceptable extraterritorial exercise of power. The following year, negotiations with Romania extended the rights to all Romanian citizens, and in 2003 the benefits under the law were reduced.

The 2002 elections brought the Socialists and their allies, the Free Democrats, back into power; former finance minister Péter Medgyessy became prime minister. In August, 2004, Medgyessy fired several cabinet members, angering the Free Democrats and leading the Socialists to replace him. The following month Ferenc Gyurcsány, the sports minister, became prime minister. Hungary became a member of the European Union earlier that year. A Dec., 2004, referendum on granting citizenship to ethnic Hungarians in other countries passed, but it was not legally binding because less than 25% of the Hungarian electorate voted for it. László Sólyom was elected president of Hungary in June, 2005.

In Apr., 2006, Gyurcsány's Socialist-led coalition won a majority of seats in the parliamentary elections, marking the first time a government had won a second consecutive term in office since the establishment of free elections in 1990. In September, however, the prime minister suffered a setback when a recording of a May, 2006, Socialist party meeting was leaked and he was heard criticizing the government's past performance and saying that the party had lied to win the 2006 election. The tape sparked opposition demonstrations and riots, which were encouraged by the opposition Fidesz, and led to calls for the government to resign. Gyurcsány apologized for not having campaigned honestly, and the coalition was trounced in local elections in early October, but he retained the support of his parliamentary coalition and the government remained in power.

In Apr., 2008, the Alliance of Free Democrats left the governing coalition, and the Socialists formed a minority government. The 2008 global financial crisis led to a sharp drop in the value of the Hungarian currency in October, forcing Hungary to seek a rescue package from the International Monetary Fund, the European Union, and the World Bank. In parliamentary elections in Apr., 2010, Orbán and Fidesz defeated the Socialists in a landslide, winning more than two thirds of the seats, but the voting also produced a surge for the far right Movement for a Better Hungary (Jobbik), which appealed to anti-Semitic and anti-Romani sentiments and won nearly 17% of the vote in the first round.
Fidesz subsequently passed a law enabling ethnic Hungarians in Central Europe to more easily acquire Hungarian citizenship; the legislation provoked Slovakia, which passed a bill that would generally strip Slovakian citizenship from Hungarians who did so. The government also reduced the powers of the constitutional court, ending its right to rule on budget matters; forced the nationalization of pension plans to cut the budget deficit; and enacted a media law that was denounced as stifling free expression and drew criticism from the European Union. Other measures adopted to avoid the austerities used elsewhere in the EU to combat recession-induced government deficits included higher taxes on economic sectors dominated by foreign firms.

In June, 2010, Pál Schmitt, the speaker of the National Assembly and a member of Fidesz, was elected to succeed Sólyom as Hungarian president. The failure of an alumina plant sludge pond in Oct., 2010, resulted in an ecological disaster in W Hungary that covered 6 villages and 16 sq mi (40 sq km) with toxic mud and also poisoned local rivers.

A new constitution, enacted by Fidesz in Apr., 2011, and effective in 2012, was criticized in a number of quarters for attempting to bind future Hungarian governments to Fidesz's conservative political program. By late 2011, legal changes that reduced the independence of the central bank had led to conflict with the European Union and International Monetary Fund.

Schmitt resigned as president in Apr., 2012, after it was discovered that he had plagiarized parts of his doctoral thesis. János Áder, a member of Fidesz and former National Assembly speaker, was elected to succeed Schmitt in May.

In Jan., 2013, the constitutional court struck down a new election law that had been passed in late 2012; the court ruled that the law unjustifiably restricted voter rights. The opposition had criticized the law as intentionally designed to favor Fidesz. The appointment in Mar., 2013, of a new governor for the central bank gave Orbán greater influence over the bank, and the bank subsequently adopted economic stimulus measures. In September the parliament approved a number of constitutional amendments that partially reversed provisions that had been criticized by the European Union.
CLIMATE change resulting from greenhouse gases in countries like China and India is being offset by acid rain, according to research from MMU's Earth Systems Science group.

Curiously, the smog currently plaguing Asian cities appears to be mitigating one of the worst greenhouse emissions: methane, which is 21 times more powerful as a greenhouse gas than CO2 and is a by-product of the Asian rice paddy field.

The British team of MMU's Professor Nancy Dise and Dr Vincent Gauci of the Open University added sulphate to laboratory rice paddies in an effort to mimic the effect of acid rain on Asia's most common food crop. The acid rain surrogate reduced methane emissions by up to a quarter in the research, which was funded by the Natural Environment Research Council.

"The reduction in pollution happens during a stage of the life cycle when the rice plant is producing grain. This period is normally associated with around half of all methane emissions from rice and we found that simulated acid rain pollution reduced this emission by 24 per cent," say Dise and Gauci.

To test the effects of acid rain, the researchers added frequent small doses of sulphate, which simulate acid rain experienced in polluted areas of China. "We had similar results when exposing natural wetlands to simulated acid rain but this could be more important since natural wetlands are mostly located far from major pollution sources, whereas for rice agriculture, the methane source and the largest source of acid rain coincide in more urban environments," said the researchers.

And they said more research was vital: "One line of investigation we'd like to confirm is that the sulphate component of acid rain may actually boost rice yields. This might, paradoxically, have the effect of reducing a source of food for the methane producing microorganisms that live in the soil. There is also likely to be competition between these microorganisms and sulphate-reducing bacteria. Normally in these conditions sulphate-reducers win, which results in less methane."

But they added a note of caution to the results. "Acid rain is one of several pollution problems in Asia that need solving but we need to appreciate the potential consequences of that clean-up, one of which could be an increase in methane emissions as the effect of the acid rain wears off."

The paper Suppression of rice methane emission by sulphate deposition in simulated acid rain is published in the Journal of Geophysical Research, volume 113. The authors are Vincent Gauci (lead author), Nancy Dise - Manchester Metropolitan University (grant holder), Graham Howell - Open University, and Meaghan Jenkins - Open University and University of New South Wales.
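The arithmetic behind the headline can be made concrete. In the worked example below, only the 21-to-1 potency factor, the roughly one-half share of emissions from the grain-producing stage, and the 24 per cent reduction come from the article; the annual paddy emission total is an invented placeholder, used purely to show how the numbers combine:

    # Worked example combining the article's figures. The emission total is a
    # made-up placeholder; the 21x factor, ~50% share and 24% cut are quoted above.
    GWP_METHANE = 21          # methane is 21 times more powerful than CO2
    GRAIN_STAGE_SHARE = 0.5   # grain stage ~ half of all paddy methane emissions
    REDUCTION = 0.24          # simulated acid rain cut this stage's output by 24%

    annual_paddy_ch4_tonnes = 1_000_000  # placeholder, not a measured value

    avoided_ch4 = annual_paddy_ch4_tonnes * GRAIN_STAGE_SHARE * REDUCTION
    avoided_co2_equivalent = avoided_ch4 * GWP_METHANE

    print(f"Avoided methane: {avoided_ch4:,.0f} tonnes")            # 120,000
    print(f"CO2 equivalent:  {avoided_co2_equivalent:,.0f} tonnes") # 2,520,000

So for every million tonnes of paddy methane, a 24 per cent cut in the grain-stage share alone would be worth roughly 2.5 million tonnes of CO2 equivalent.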
This worksheet is useful for practicing vocabulary. The goal of this exercise is to match the pictures with the words on the right. To make it more effective I recommend following these simple steps:
1. Give the students this worksheet for homework.
2. Tell each student to follow this link: http://www.theenglishalley.es/House/utilityroom.html
3. Using this webpage, students can look up the different words to complete the exercise. The best thing about this page is that simply by putting the cursor over the pictures they will hear the pronunciation and see the spelling of each object.
4. The next day you can review this vocabulary together. Because of the homework, students should already have a good understanding of both the vocabulary and the pronunciation.
I hope that you and your students find this worksheet useful.
The English Alley
Common Reed (Phragmites)

What is it?
Common reed, also known as phragmites, is a large perennial grass or reed with creeping rhizomes. It typically is found in or near wetlands but can also be found in sites that hold water, such as roadside ditches and depressions. Phragmites forms dense stands, which include both live stems and standing dead stems from previous years. The plant spreads horizontally by sending out rhizome runners, which can grow 10 or more feet in a single growing season, rapidly crowding out native grasses.

Is it here yet?
Yes. Extensive stands exist in both eastern and western Washington in marshes and along river edges and the shores of lakes and ponds.

Why should I care?
The Washington State Noxious Weed Control Board has listed common reed as a Class B noxious weed. The goals are to contain the plants where they already are widespread and prevent their spread into new areas. Cutting has been used successfully for control. Because it is a grass, cutting several times during a season, at the wrong times, may increase stand density. However, if cut just before the end of July, most of the food reserves produced that season are removed with the aerial portion of the plant, reducing the plant's vigor.

What should I do if I find one? How can we stop it?
Do not purchase, plant, or trade this species.

What are its characteristics?
- Common reed is a perennial wetland grass that is able to grow to heights of 15 feet or more.
- Leaves are 8-16 inches long and 0.2 to 1.5 inches wide.
- Leaf blades are smooth and lanceolate (lance-shaped), tapering from a rounded base toward the tip.
- The hollow stems can reach 12 feet tall and have a rough texture.
- The flowers are dense, silky floral spikelets that grow from 1-16 inches long. These feathery plumes are purplish in color and flower in late July and August.

How do I distinguish it from native species?
Non-native common reed may be confused with native populations of phragmites. Native genotypes are less dense, and the stems are thin and shiny. Native phragmites flowers are also less dense.
Find out how bubbles work with this experiment. You won't actually blow any bubbles, but you will learn the science that makes a bubble!

Water is made up of lots of tiny molecules. The molecules are attracted to each other and stick together. The molecules on the very top of the water stick together very closely to make a force called surface tension. Surface tension is what caused the water to rise up above the rim of the glass in the experiment: the water molecules stuck together to make a dome instead of spilling over the side.

Why didn't the dome break when you stuck your finger through it? Why didn't the water spill over the glass? Well, the surface tension was strong enough that it just went around your finger. The water molecules still stuck to each other and nothing spilled!

What happened when you put your soapy finger into the water? The soap on your finger broke the water's surface tension, and some of the water molecules didn't stick to each other any more and were pushed out of the glass!

The force of surface tension also creates bubbles. In plain water, the surface tension is strong and the water might make some bubbles, but they will not last very long and they will be very small, because the other molecules in the water will pull on the bubbles and flatten them. Soap needs to be mixed with the water to make bubbles that can float through the air. When you add soap, the water becomes flexible, sort of like elastic, and it can hold the shape of a bubble when air is blown into it.

Make your own bubble-blowing solution out of soap and water, then see what happens when you add a special ingredient to the bubble solution!

The first bubble solution was just soap and water. As you learned from the Surface Tension experiment, soap helps bubbles form. You probably got some small bubbles that didn't last very long from the soap and water. Then you added glycerin or corn syrup to the soap and water and probably noticed that the bubbles you blew were stronger and better than before. Did they last longer? Were they bigger?

The glycerin or corn syrup mixes with the soap to make it thicker. When the water that is trapped between the layers of soap in a bubble evaporates (or dries up), the bubble will pop! The thicker skin of the glycerin bubble keeps the water from evaporating as quickly. You can probably also blow a much bigger bubble with the second bubble solution than with the plain soap-and-water one. Adding glycerin or corn syrup makes bubbles stronger and helps them last longer. It makes super bubbles!

After you make the super bubble solution and let it set for at least one day, try doing some of these cool bubble tricks! Can you think of any of your own tricks to do with bubbles?

Trick 1 - A Square Bubble?
You will need two pipe cleaners and your super bubble solution for this trick. The bubble was round even though it came from a square! Bubbles are always round when they detach and float through the air because the skin of soap always tries to take up the least amount of space it can and still keep the same amount of air inside the bubble. The soap molecules always stretch into a round shape automatically! A round shape takes up less space than a square shape. Try the trick again, but make a wand in any shape you want - what about a star or a triangle? Do bubbles from those shapes become round too?

Trick 2 - Don't Pop the Bubble!
You will need the super bubble solution, the lid from the container, a straw, and some objects with pointed ends.
You should have been able to push the scissors through the wall of the bubble without popping it! When something wet touches a bubble, it doesn't poke a hole in the wall of the bubble; it just slides through, and the bubble forms right around it. The bubble solution on the scissors filled in the hole that would have been made. If you try poking dry scissors through your bubble, you will see it pop instantly! (If it popped when you put the wet scissors in, something was probably too dry. Try it again and make sure anything that touches the bubble is completely wet with bubble solution.)

For another trick, get one hand completely wet in the bubble solution, then use the other hand to hold your bubble blower and blow a big bubble in the palm of your wet hand.

Molecule - a very tiny part of a substance that is too small to see with your eyes. A water molecule is smaller than one drop of water!
Surface tension - molecules in a liquid are attracted to each other and make the top of the liquid very tight. The surface tension is what causes water to form drops. It also makes a dome shape across the top of a container that is filled to the top.
Evaporate - when a liquid dries up and goes into the air. The liquid is then in the air, but it is a vapor or a gas now and you can't see it. When we say the air is humid, it means that a lot of water has evaporated into the air and now water vapor (gas) is floating around in the air. It makes the air moist and heavy, and it might make you feel sticky when you go outside.
About Chemical Hazards

What Is a Chemical Hazard?
A chemical hazard is any substance that can cause harm, primarily to people. Chemicals of all kinds are stored in our homes and can result in serious injuries if not properly handled. Household items such as bleach can produce harmful chlorine gas or hydrochloric acid if carelessly used. Gasoline fumes from containers for lawnmowers or boats can pose major health hazards if inhaled.

DOE Oak Ridge uses thousands of chemicals in its varied research and other operations. New chemicals are or can be created as a result of the research or other activities. DOE follows national safety requirements in storing and handling these chemicals to minimize the risk of injuries from its chemical usage. However, accidents can occur despite careful attention to proper handling and storage procedures.

Types of Chemicals Used at the Oak Ridge Facilities
• Cyanide compounds
• Hydrogen Chloride
• Hydrogen Fluoride
• Lithium compounds
• Methylene Chloride

A federal law called the Emergency Planning and Community Right to Know Act gives you the right to know about toxic chemicals being released into the environment. The Toxics Release Inventory maintained by the U.S. Environmental Protection Agency provides information about the types and amounts of toxic chemicals that are released each year to the air, water, and land, as well as information on the quantities of toxic chemicals sent to other facilities for further waste management. Data for Department of Energy facilities in Oak Ridge are included in this Inventory. You can view this information at the Web site www.epa.gov/tri. By entering the Oak Ridge zip code "37831" at the prompt, you can view information on the types of chemicals used at the DOE facilities.

Chemical Emergency in Oak Ridge
DOE Oak Ridge has dozens of facilities engaged in chemical operations. Most operations involve such small quantities of chemicals that an accident poses little threat to people. However, DOE also has some larger chemical operations and, in some locations, larger amounts of stored chemicals, where workers and the public could be affected by accidents. While accidents are possible, DOE believes the risk of exposure to its workers is low due to the safety precautions followed throughout the DOE Oak Ridge Reservation. The risk to the public from harmful chemicals being released outside of the DOE property areas is even lower. Nevertheless, as a matter of simple prudence and for compliance with Federal government safety requirements, DOE has prepared emergency response plans for accidents that could occur. In the event of a chemical release with the potential for off-site impacts, the sirens will sound and a message will be broadcast on the Emergency Alert System. DOE and its contractors maintain an experienced group of emergency response personnel trained to respond to chemical accidents.
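For readers who prefer a script to the web form, the Toxics Release Inventory can in principle be queried programmatically. The sketch below assumes EPA's Envirofacts REST service and a tri_facility table with a zip_code column; the URL pattern and field names are unverified assumptions to check against EPA's current documentation, not features promised by the page described above:

    # Hedged sketch: the Envirofacts URL pattern and the table/column names are
    # assumptions to verify against EPA's current API documentation.
    import requests

    def tri_facilities_by_zip(zip_code="37831"):
        # Assumed Envirofacts pattern: /efservice/<table>/<column>/<value>/JSON
        url = f"https://data.epa.gov/efservice/tri_facility/zip_code/{zip_code}/JSON"
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        for facility in tri_facilities_by_zip():
            # Field names are assumptions; inspect the raw JSON to confirm them.
            print(facility.get("facility_name"), "-", facility.get("street_address"))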
A Japanese university and zoo are creating a sperm bank for endangered animals that could one day be used to bring extinct species back to life and even help to colonise other planets with Earth's rarest creatures.

To date, scientists at Kyoto University's Graduate School of Medicine and the city's zoo have managed to freeze-dry the sperm of chimpanzees and a Sunda slow loris, both of which are listed as primates at risk, as well as giraffes. Takehito Kaneko, an associate professor at the university, spent a decade perfecting a method of incorporating a buffer solution in the freeze-drying process to preserve the sperm while protecting the genetic information within the sample. The scientists were able to bring the sperm back to life by thawing it gently in water.

"This method preserves the sperm samples very well and technically we believe it is possible to store them for decades or even longer into the future," he told The Daily Telegraph. "After they have been preserved, we want to continually examine the condition of the genetic information."

Prof. Kaneko began his research in mice and rats and was successful in breeding young of both species through artificial insemination from sperm that had been freeze-dried for five years. He has now set his sights on collecting sperm samples from a range of larger animals that are also at risk of extinction, starting with elephants, tigers and rhinoceroses. The zoo is home to 132 species of animals, and the university team aims to collect samples from each species in the future.

Freeze-drying of sperm has advantages over using liquid nitrogen, as it does not require large amounts of bulky equipment and samples can be stored in a regular refrigerator.

Prof. Kaneko emphasised that he is not a medical doctor and is not sure whether there are human applications for the technology, but added that it might be something that doctors consider. "It's a long way in the future, of course, but if we can store this genetic information in this way it could be something that we can take into space," he said, adding that the samples could be used to create colonies of creatures on other planets.
Tetrarch (Greek: "ruler of a quarter"), in Greco-Roman antiquity, the ruler of a principality; originally the ruler of one-quarter of a region or province. The term was first used to denote the governor of any of the four tetrarchies into which Philip II of Macedon divided Thessaly in 342 B.C., namely Thessaliotis, Hestiaeotis, Pelasgiotis, and Phthiotis. (These may, however, have constituted a revival of a division of earlier origin.) Later, the term tetrarchy was applied to the four divisions of Galatia (in Anatolia) before its conquest by the Romans (169 B.C.).

Even later, "tetrarch" became familiar as the title of certain Hellenized rulers of petty dynasties in Syria and Palestine, whom the Romans allowed a measure of independent sovereignty. In this usage it lost its original precise sense and meant only the ruler of a divided kingdom or of a district too minor to justify a higher title.

After the death of Herod the Great (4 B.C.), his realm was shared among his three sons: the chief part, including Judaea, Samaria, and Idumaea, fell to Archelaus, with the title of ethnarch; Philip received the northeast of the realm and was called tetrarch; and Galilee was given to Herod Antipas, who also was called tetrarch. These three sovereignties were reunited under Herod Agrippa from A.D. 41 to 44.
A Siberian unicorn roamed the Earth alongside humans until at least 39,000 years ago. Scientists believe that the first modern humans, or Homo sapiens, were present around 200,000 to 300,000 years ago. While researchers previously believed that the Siberian unicorn died off around 200,000 years ago, new DNA evidence suggests that the mysterious creature actually lived until 36,000 years ago. It is possible that, with this DNA, genetic engineers could recreate the species today. Just as Chinese scientists were able to genetically engineer super-strong dogs, scientists might also be able to engineer a Siberian unicorn down to a smaller size... to the size of your dog.

Siberian Unicorn Had Unicorn Features
No wonder we have all those stories of unicorns. The Siberian unicorn might have existed alongside humans; or, something might have existed that looked like a unicorn to ancient humans. The species had a long horn and ate grass. While the ancient creature vaguely resembles a modern-day rhino, it is not closely related: it split off from the rhino ancestral line over 40,000,000 years ago. Weighing four tonnes, the animal had a huge head with a horn that was up to 1 meter in length.

Just as rhinos are picky about their habitat, scientists believe that Siberian unicorns might also have been very picky about where they lived and what they ate. Because this ancient grass-eater was selective with its diet, it might have died off from shrinking grasslands and a lack of adequate food. The extinction might have occurred when the Earth warmed at the end of the last Ice Age, scientists say.

Genetic Engineers Could Create a Siberian Unicorn Pet the Size of Your Dog
Genetic engineers were able to analyze the DNA. With that DNA in hand, they might one day be able to recreate the Siberian unicorn. Even now, some genetic engineers want to recreate the dinosaur by retro-engineering the DNA of birds. Given the way Chinese genetic engineers seem to be experimenting these days, they might fancy creating a Siberian unicorn pet that is the size of your dog.

Whether or not the DNA question is ethical, humankind is constantly pushing for progress. Scientists have been trying to push all kinds of boundaries since the beginning of scientific discovery. Were it not for lab experimentation with animals, how could we have the cosmetic products we have today? In any case, we digress. A dog-sized Siberian unicorn might be a cute pet to have, if you have enough grass to feed it.
Tens of thousands of years ago, prehistoric men wrapped in skins against the blistering cold of the Ice Age were creating exquisite cave art on the walls of Chauvet Cave in southern France. By the light of flickering torches and fires in hearths, they painted lions, rhinoceroses, bears and other animals that existed in Europe back then, and that still reverberate with long-gone life.

The images done in charcoal were mainly in the deep chambers of the cave, Jean-Michel Geneste of the Université de Bordeaux tells Haaretz. (The color more commonly used in the forward chambers was red ochre.) The large number of charcoal images has to mean the artists were producing charcoal in copious amounts just for the purpose of art.

Now a group of researchers has analyzed the charcoal used at Chauvet, some of it found in hearths. Of 171 samples, all but one originated in pine, a tree that survives in cold conditions, conclude Isabelle Théry-Parisot of the Côte d'Azur University and colleagues in their paper published Wednesday in Antiquity. The one exception was buckthorn, which also grows in conditions we would not appreciate, they report. The team does note that their results may have been constrained by their sampling method, i.e., tweezers. But assuming their finding, that all but one fragment was pine, accurately reflects the prehistoric reality, that raises the question of why they made charcoal out of pine: choice or necessity?

Reindeer and snow

The Cave of Chauvet-Pont-d'Arc was discovered in 1994 by amateur spelunkers in a cliffside above the prehistoric path of the Ardeche River. Previous research demonstrated human use of the cave in two phases: during the Aurignacian, 37,000 to 33,500 years ago, and the Gravettian, from 31,000 to 28,000 years ago.

The Aurignacian and Gravettian phases both fell within the last Ice Age, known as the Quaternary glaciation period, which began 2.6 million years ago and is still petering out (based on the definition that the Antarctic ice sheet has existed continuously since then). The ice sheet did extend that far south, Geneste explains: it would have been as cold at Chauvet as it is in (say) Norway today. Reindeer abounded. Indeed, the nigh-exclusive use of pine to make charcoal during both phases, each spanning millennia, attests that in both the Aurignacian and Gravettian periods, the people at Chauvet lived in a very harsh climate.

That predilection for pine

Having established that the artists of Chauvet drew using pine charcoal in both the Aurignacian and Gravettian phases, the researchers set out to learn why. Were they picking up what lay around, which was dead pine branches? Or were they cherry-picking pine and scorning other potential fuels for fire, which they sorely needed for heat, light, cooking, and making charcoal for art?

Chauvet Cave itself apparently existed in what Théry-Parisot and the group call a refuge area, which, for whatever reason, was warmer and more humid than neighboring areas during the long glacial episodes of the Quaternary. In the case of Chauvet, it would have been protected from the elements by the Ardeche gorges, Geneste says. That is why it had trees, most of which were pines, which shed branches that the prehistoric hunter-gatherers could conveniently collect and transport to the cave. Though pine predominated in the landscape, there were other trees, Geneste says. Also, the researchers feel that default exploitation of the only wood at hand is too simplistic an explanation.
In short, they suspect pine was partly "just there" and partly a choice, which boils down to culture. In other words, the choice passed down the generations. Use pine to make charcoal to paint lions on the wall, kids – the texture is right.

Not all Paleolithic imagery done in black employed charcoal, by the way. Geneste notes that in Lascaux, for instance, manganese oxide was used.

Which leads us to a last point. Did anybody ever actually live in Chauvet? Collecting deadwood rather than going to the sweaty extreme of hewing down trees and drying them out in storage nicely suits the model of occasional occupation of Chauvet, if the occasional occupants came in summer, not when it was snowing, which would cover and soak the deadwood. You can't burn wet wood.

And they drew pictures on the walls of animals to the light of the flickering pine fires. Possibly they might have drawn more, but the cave was sealed off by rockfall over 21,000 years ago, conserving the art for our awestruck observation today.
Bullying Questions and Answers

Mike answers your questions about bullying.

What is bullying?
Bullying is when a person, or group of people, uses power (physical, verbal, or social) to harass or intimidate one or more people who have less power.

Are there methods for resolving conflicts respectfully?
Yes, and the key is understanding that confrontation does not cause conflict; it resolves conflict. Avoiding conflict may result in pent-up emotions, unhealthy choices, direct bullying, or social aggression. When used effectively, confrontation can make a friendship stronger by showing the people involved that they care enough about the relationship to work through differences.

How do you control emotions and anger to prevent them from turning into bullying?
The key is learning to identify and recognize physical cues to determine how to act and react toward others. Using guided visualizations and activities, students learn the connections between their emotions and their actions so that both can be addressed in a productive manner.

Learn more in Mike's book on bullying.
Although the two countries had fought intermittently since 1931, full-scale war started in earnest in 1937 and ended only with the surrender of Japan in 1945. The war was the result of a decades-long Japanese imperialist policy aiming to dominate China politically and militarily to secure its vast raw material reserves and other resources. At the same time, the rising tide of Chinese nationalism and notions of self-determination stoked the coals of war.

Before 1937, China and Japan fought in small, localized engagements, the so-called "incidents". Yet the two sides, for a variety of reasons, refrained from fighting a total war. The 1931 invasion of Manchuria by Japan is known as the "Mukden Incident". The last of these incidents was the Marco Polo Bridge Incident of 1937, marking the official beginning of full-scale war between the two countries.

In Japan, the name "Japan-China War" (日中戦争, Nitchū Sensō) is most commonly used today because of its neutrality. When the war began in July 1937 near Beijing, the government of Japan used North China Incident (北支事変, Hokushi Jihen), and with the outbreak of war in Central China the next month, it was changed to China Incident (支那事変, Shina Jihen). The word incident (事変, jihen) was used by Japan as neither country had declared war on the other. Japan wanted to avoid intervention by other countries such as the United Kingdom and particularly the United States, which had been the biggest steel exporter to Japan. American President Franklin D. Roosevelt would have had to impose an embargo due to the Neutrality Acts had the fighting been named a war.

In Japanese propaganda, however, the invasion of China became a "holy war" (seisen), the first step of the Hakko ichiu (eight corners of the world under one roof). In 1940, prime minister Konoe thus launched the League of Diet Members Believing the Objectives of the Holy War. When both sides formally declared war in December 1941, the name was replaced by Greater East Asia War (大東亜戦争, Daitōa Sensō).

Although the Japanese government still uses "China Incident" in formal documents, because the word Shina is considered a derogatory word by China, media in Japan often paraphrase with other expressions like The Japan-China Incident (日華事変 [Nikka Jihen], 日支事変 [Nisshi Jihen]), which were used by media even in the 1930s. Also, the name Second Sino-Japanese War is not usually used in Japan, as the First Sino-Japanese War (日清戦争, Nisshin-Sensō), fought between Japan and the Qing Dynasty in 1894, is not regarded as having an obvious direct link with the second, fought between Japan and the Republic of China.

The origin of the Second Sino-Japanese War can be traced to the First Sino-Japanese War of 1894-95, in which China, then under the Qing Dynasty, was defeated by Japan and was forced to cede Taiwan and recognize the independence of Korea in the Treaty of Shimonoseki. The Qing Dynasty was on the brink of collapse from internal revolts and foreign imperialism, while Japan had emerged as a great power through its effective measures of modernization.

The Republic of China was founded in 1912, following the Xinhai Revolution which overthrew the Qing Dynasty. However, the nascent Republic was even weaker than its predecessor because of the dominance of warlords. Unifying the nation and repelling imperialism seemed a very remote possibility. Some warlords even aligned themselves with various foreign powers in an effort to wipe each other out. For example, warlord Zhang Zuolin of Manchuria openly cooperated with the Japanese for military and economic assistance.
It was during the early period of the Republic that Japan became the greatest foreign threat to China. In 1915, Japan issued the Twenty-One Demands to further its political and commercial interests in China. Following World War I, Japan acquired the German sphere of influence in Shandong.

China under the Beiyang government remained fragmented and unable to resist foreign incursions until the Northern Expedition of 1926-28, launched by the Kuomintang (KMT, or Chinese Nationalist Party) in Guangzhou against various warlords. The Northern Expedition swept through China until it was checked in Shandong, where Beiyang warlord Zhang Zongchang, backed by the Japanese, attempted to stop the Kuomintang Army from unifying China. This situation culminated in the Jinan Incident of 1928, in which the Kuomintang army and the Japanese were engaged in a short conflict. In the same year, Manchurian warlord Zhang Zuolin was assassinated when he became less willing to cooperate with Japan.

Following these incidents, the Kuomintang government under Chiang Kai-shek finally succeeded in unifying China in 1928. Still, numerous conflicts between China and Japan persisted, as Chinese nationalism had been on the rise and one of the ultimate goals of the Three Principles of the People was to rid China of foreign imperialism. However, the Northern Expedition had only nominally unified China, and civil wars broke out between former warlords and rival Kuomintang factions. In addition, the Chinese Communists revolted against the central government following a purge of its members from the KMT. Because of these circumstances, the Chinese central government diverted much attention to fighting these civil wars and followed a policy of "first internal pacification before external resistance". This situation provided an easy opportunity for Japan to further its goals.

In 1931, the Japanese invaded Manchuria right after the Mukden Incident. After five months of fighting, in 1932, the puppet state Manchukuo was established, with the last emperor of China, Puyi, installed as its head of state. Unable to challenge Japan directly, China appealed to the League of Nations for help. The League's investigation was published as the Lytton Report, which condemned Japan for its incursion into Manchuria and led Japan to withdraw from the League of Nations. From the late 1920s and throughout the 1930s, appeasement was the policy of the international community, and no country was willing to take an active stance beyond a weak censure. Japan saw Manchuria as a limitless supply of raw materials and as a buffer state against the Soviet Union.

Incessant conflicts followed the Mukden Incident. In 1932, Chinese and Japanese soldiers fought a short war in the January 28 Incident. The war resulted in the demilitarization of Shanghai, which forbade the Chinese from deploying troops in their own city. In Manchukuo there was an ongoing campaign to defeat the volunteer armies that arose from the popular frustration at the policy of nonresistance to the Japanese. In 1933, the Japanese attacked the Great Wall region, and in its wake the Tanggu Truce was signed, which gave Japan control of Rehe province and a demilitarized zone between the Great Wall and the Beiping-Tianjin region. The Japanese aim was to create another buffer region, this time between Manchukuo and the Chinese Nationalist government, whose capital was Nanjing.
In addition, Japan increasingly utilized internal conflicts among the Chinese factions to reduce their strength one by one. This was precipitated by the fact that, even some years after the Northern Expedition, the political power of the Nationalist government extended only around the Yangtze River Delta region, while other regions of China were essentially in the hands of regional powers. Japan sought out various Chinese collaborators and helped these men lead governments that were friendly to Japan. This policy was called the Specialization of North China, more commonly known as the North China Autonomous Movement. The northern provinces affected by this policy were Chahar, Suiyuan, Hebei, Shanxi, and Shandong.

This Japanese policy was most effective in the area of what is now Inner Mongolia and Hebei. In 1935, under Japanese pressure, China signed the He-Umezu Agreement, which forbade the KMT from conducting party operations in Hebei. In the same year, the Ching-Doihara Agreement was signed, expelling the KMT from Chahar. Thus, by the end of 1935, the Chinese central government had virtually abandoned North China. In its place, the Japanese-backed East Hebei Autonomous Council and the Hebei-Chahar Political Council were established. There, in the vacated area of Chahar, the Mongol Military Government (蒙古軍政府) was formed on May 12, 1936, with Japan providing military and economic aid.

Most historians place the beginning of the Second Sino-Japanese War on July 7, 1937, at the Marco Polo Bridge Incident, when a crucial access point to Beijing was assaulted by the Imperial Japanese Army (IJA). Because the Chinese defenders were the poorly equipped infantry divisions of the former Northwest Army, the Japanese easily captured Beiping and Tianjin.

The Imperial General Headquarters in Tokyo was initially reluctant to escalate the conflict into full-scale war, being content with the victories achieved in northern China following the Marco Polo Bridge Incident. However, the KMT central government determined that the "breaking point" of Japanese aggression had been reached, and Chiang Kai-shek quickly mobilized the central government army and air force under his direct command to attack the Japanese Marines in Shanghai, which led to the Battle of Shanghai. The IJA had to mobilize over 200,000 troops, coupled with numerous naval vessels and aircraft, to capture Shanghai after more than three months of intense fighting, with casualties far exceeding initial expectations.

Building on the hard-won victory in Shanghai, the IJA captured the KMT capital city of Nanjing and southern Shanxi in campaigns involving approximately 350,000 Japanese soldiers, and considerably more Chinese soldiers. Historians estimate up to 300,000 people perished in the Nanking Massacre after the fall of Nanjing on December 13, 1937, while some Japanese historians wrongly deny the existence of a massacre.

At this point the Headquarters in Tokyo still hoped to limit the scope of the conflict to the occupied areas around Shanghai, Nanjing and most of northern China, in order to preserve strength for an anticipated showdown with the Soviet Union. But by now the Headquarters had effectively lost command over the Japanese generals fighting in China. With many victories achieved, these generals escalated the war and finally met with defeat at Taierzhuang.
Afterwards the IJA had no choice but to fight on, hoping to destroy the fighting strength of the National Revolutionary Army (NRA) and force the KMT government to negotiate the terms for surrender. The height of the Japanese army's full-scale attacks culminated in the capture of the city of Wuhan in October, 1938, but the IJA failed to destroy the NRA as the KMT retreated to Chongqing to set up a provisional capital, and Chiang refused to negotiate unless the Japanese agreed to a complete withdrawal to 1937 levels.

While the KMT central government struggled on from the provisional capital of Chongqing, Chiang Kai-shek's stern refusal to surrender after suffering tremendous losses deeply frustrated the IJA. In retaliation, throughout the next few years the air arms of the Imperial Navy and Army launched the world's first massive air bombing raids against civilian targets in nearly every major city in unoccupied China, leaving millions dead, injured and homeless.

The Japanese tried to solve their occupation problems by implementing a strategy of creating friendly puppet governments favorable to Japanese interests in the territories conquered, the most prominent being the Nanjing Nationalist Government headed by former KMT premier Wang Jingwei. However, the atrocities committed by the Japanese army, as well as the Japanese refusal to yield any real power, made these governments very unpopular and ineffective. The only success the Japanese had was the ability to recruit a large Collaborationist Chinese Army to maintain public security in the occupied areas.

With very low military-industrial capacity and limited experience in modern warfare, the NRA was defeated in a large-scale counter-offensive against the IJA in early 1940. Afterwards Chiang could not risk any more all-out offensive campaigns, given the poorly trained, under-equipped, and disorganized state of his armies and the opposition to his leadership both within the Kuomintang and in China at large. He had lost a substantial portion of his best trained and equipped troops defending Shanghai and was at times at the mercy of his generals, who maintained a high degree of independence from the central KMT government.

On the other hand, while Japan held most of the eastern coastal areas of China and Vietnam, guerrilla fighting continued in the conquered areas. Japan had suffered tremendous casualties from unexpectedly stubborn resistance in China and had already developed problems in administering and garrisoning the seized territories. Neither side could make any swift progress in a manner resembling the fall of France and Western Europe to Nazi Germany.

The basis of Chinese strategy before the entrance of the Western Allies can be divided into two periods:

First Period: July 1937 (Marco Polo Bridge Incident) - 25 October 1938 (Fall of Wuhan).

Unlike Japan, China was unprepared for total war and had little military-industrial strength, no mechanized divisions, and few armored forces. Up until the mid-1930s China had hoped that the League of Nations would provide countermeasures to Japan's aggression. In addition, the Kuomintang government was mired in a civil war against the Communists, as Chiang Kai-shek was famously quoted: "the Japanese are a disease of the skin, the Communists are a disease of the heart". The United Front between the KMT and CCP was never truly unified, as each side was preparing for a showdown with the other once the Japanese were driven out.

Even under these extremely unfavorable circumstances, Chiang realized that in order to win the support of the United States and other foreign nations, China must prove that it was indeed capable of fighting.
A fast retreat would discourage foreign aid, so Chiang decided to make a stand in the Battle of Shanghai. Chiang sent the best of his German-trained divisions to defend China's largest and most industrialized city from the Japanese. The battle lasted over three months, saw heavy casualties on both sides and ended with a Chinese retreat towards Nanjing. While this was a military defeat for the Chinese, it proved that China would not be defeated easily and showed China's determination to the world, which became an enormous morale booster for the Chinese people as it ended the Japanese taunt that Japan could conquer Shanghai in three days and China in three months.

Afterwards the Chinese began to adopt the strategy of "trading space for time" (Chinese: 以空間換取時間). The Chinese army would put up fights to delay the Japanese advance on northern and eastern cities, allowing the home front, along with its professionals and key industries, to retreat west into Chongqing. As a result of the Chinese troops' scorched-earth strategies, in which dams and levees were intentionally sabotaged to create massive flooding, successive Japanese advances and conquests began to stall in late 1938.

Second Period: 25 October 1938 (Fall of Wuhan) - December 1941 (before the Allies' declaration of war on Japan).

During this period, the main Chinese objective was to prolong the war and wait for the Japanese to make the mistake of attacking the United States. American general Joseph Stilwell called this strategy "winning by outlasting". The Chinese army therefore adopted the concept of "magnetic warfare": attracting advancing Japanese troops to definite points where they were subjected to ambush, flanking attacks, and encirclement in major engagements. The most prominent example of this tactic is the successful defense of Changsha numerous times. Also, the CCP and other local guerrilla forces continued their resistance in occupied areas to pester the enemy and make its administration over the vast lands of China difficult. As a result the Japanese really only controlled the cities and railroads, while the countryside was almost always a hotbed of partisan activity.

By 1940, the war had reached a stalemate, with both sides making minimal gains. The Chinese had successfully defended their land from the oncoming Japanese on several occasions, while strong resistance in areas occupied by the Japanese made a victory seem impossible to the Japanese. This frustrated the Japanese and led them to employ the "Three Alls Policy" (kill all, loot all, burn all) (三光政策, Hanyu Pinyin: Sānguāng Zhèngcè, Japanese On: Sankō Seisaku). It was during this time period that the bulk of Japanese atrocities were committed.

After the Mukden Incident, Chinese public opinion strongly criticized the leader of Manchuria, the "young marshal" Zhang Xueliang, for his nonresistance to the Japanese invasion, even though the Kuomintang central government was indirectly responsible for this policy. Afterwards Chiang Kai-shek assigned Zhang and his Northeast Army the duty of suppressing the Red Army of the Chinese Communist Party (CCP) in Shaanxi after their Long March. This resulted in great casualties for his Northeast Army, and Chiang Kai-shek gave him no support in manpower or weaponry. On 12 December 1936, a deeply disgruntled Zhang decided to conspire with the CCP and kidnapped Chiang Kai-shek in Xi'an.
In order to secure the release of Chiang, the KMT was forced to agree to a temporary end to the Chinese Civil War and the formation, on 24 December 1936, of a United Front between the CCP and the KMT against Japan. The cooperation took place with salutary effects for the beleaguered CCP, which agreed to form the New Fourth Army and the 8th Route Army, both nominally under the command of the National Revolutionary Army. The Red Army of the CCP fought in alliance with KMT forces during the battle of Taiyuan, and the high point of their cooperation came in 1938 during the Battle of Wuhan. However, despite Japan's steady territorial gains in northern China, the coastal regions, and the rich Yangtze River Valley in central China, the distrust between the two antagonists was scarcely veiled. As a result of the Communists' efforts to aggressively expand their military strength by absorbing Chinese guerrilla forces behind enemy lines (frequently by the use of force), the uneasy alliance began to break down by late 1938. Starting in 1940, open conflicts between the Nationalists and Communists became more frequent in the occupied areas outside of Japanese control, culminating in the New Fourth Army Incident. Afterwards, the Second United Front completely broke down, and the CCP began to build up its sphere of influence wherever opportunities presented themselves, mainly through rural mass organizations and administrative, land, and tax reform measures favoring poor peasants, while the Nationalists attempted to neutralize the spread of Communist influence through a military blockade of CCP-controlled areas while fighting the Japanese at the same time. See also: Motives of the Second Sino-Japanese War. At the outbreak of full-scale war, many global powers were reluctant to provide support to China: in their opinion the Chinese would eventually lose the war, and they did not wish to antagonize the Japanese, who might, in turn, eye their colonial possessions in the region. They expected that any support given to the Kuomintang might worsen their own relationships with the Japanese, who taunted the Kuomintang with the prospect of conquest within three months. However, Germany and the Soviet Union did provide support to the Chinese before the war escalated into the Asian theatre of World War II. Prior to the outbreak of the war, Germany and China had close economic and military cooperation, with Germany helping China modernize its industry and military in exchange for raw materials. More than half of German arms exports during its rearmament period went to China. Nevertheless, the proposed 30 new divisions equipped and trained with German assistance did not materialize, as Germany withdrew its support in 1938. The Soviet Union wished to keep China in the war to hinder the Japanese from invading Siberia, thus saving itself from a two-front war. In September 1937 the Soviet leadership signed the Sino-Soviet Non-Aggression Pact, began aiding China, and approved Operation Zet, a Soviet volunteer air force. As part of this secret operation, Soviet technicians upgraded and handled some of the Chinese war-supply transport. Bombers, fighters, military supplies, and advisors arrived, including future Soviet war hero Georgy Zhukov, who won the Battle of Halhin Gol. Prior to the entry of the Western Allies, the Soviet Union provided the largest amount of foreign aid to China, totalling some $250 million in credits for munitions and supplies.
In 1941 Soviet aid ended as a result of the Soviet-Japanese Neutrality Pact and the beginning of the Great Patriotic War. This pact enabled the Soviet Union to avoid fighting Germany and Japan at the same time. From December 1937, events such as the Japanese attack on the USS Panay and the Nanking Massacre swung public opinion in the West sharply against Japan and increased fears of Japanese expansion, prompting the United States, the United Kingdom, and France to provide loan assistance for war-supply contracts to the Kuomintang. Furthermore, Australia prevented a Japanese government-owned company from taking over an iron mine in Australia and banned iron ore exports in 1938. Japan retaliated by invading Vietnam in 1940, successfully blockading China and preventing the import of arms, fuel, and 10,000 tons/month of materials supplied by the Western Powers through the Haiphong-Yunnan Fou railway line. By mid-1941, the United States had organized the American Volunteer Group, or Flying Tigers. Led by Claire Chennault, their early combat success of 300 kills against the loss of 12 of their shark-painted P-40 fighters earned them wide recognition at a time when the Allies were suffering heavy losses. Entering combat soon after the U.S. and Japan went to war, their dogfighting tactics would be adopted by US forces, and they carried the appreciative Chinese thumbs-up gesture for "number one" into military culture. In addition, the United States, Britain, and the Netherlands East Indies began oil and/or steel embargoes. The loss of oil imports made it impossible for Japan to continue operations in China. This set the stage for Japan to launch a series of military attacks against the Western Allies when the Imperial Navy raided Pearl Harbor on December 8, 1941 (December 7 in U.S. time zones). Within a few days of the attack on Pearl Harbor, both the United States and China officially declared war against Japan. Chiang Kai-shek continued to receive supplies from the United States as the Chinese conflict was merged into the Asian theatre of World War II. However, in contrast to the Arctic supply route to the Soviet Union, which stayed open through most of the war, sea routes to China had long been closed, so between the closing of the Burma Road in 1942 and its re-opening as the Ledo Road in 1945, foreign aid was largely limited to what could be flown in over The Hump. Most of China's own industry had already been captured or destroyed by Japan, and the Soviet Union could spare little from the Eastern Front. For these reasons, the Chinese government never had the supplies and equipment needed to mount a major offensive. Chiang was appointed Allied Commander-in-Chief in the China theater in 1942. General Joseph Stilwell served for a time as Chiang's Chief of Staff, while commanding US forces in the China Burma India Theater. However, relations between Stilwell and Chiang soon broke down, for many reasons. Some historians (such as Barbara Tuchman) have suggested this was largely due to the corruption and inefficiency of the Chinese government, while others (such as Ray Huang) have argued that the situation was more complicated. Stilwell had a strong desire to assume total control of Chinese troops, which Chiang vehemently opposed. Stilwell also did not appreciate the complexity of the situation, including the buildup of the Chinese Communists during the war (essentially, Chiang had to fight a multi-front war: the Japanese on one side, the Communists on the other).
Stilwell openly criticized the Chinese government's conduct of the war in the American media and to President Franklin D. Roosevelt. Chiang was hesitant to deploy more Chinese troops away from the main front because China had already suffered tens of millions of war casualties, and he believed that Japan would eventually capitulate to America's overwhelming industrial output and manpower. The Allies therefore began to lose confidence in the Chinese ability to conduct offensive operations from the Asian mainland, and instead concentrated their efforts against the Japanese in the Pacific Ocean Areas and the South West Pacific Area, employing an island-hopping strategy. Conflicts among China, the United States, and the United Kingdom also emerged in the Pacific war. Winston Churchill was reluctant to devote British troops, the majority of whom had been defeated by the Japanese in earlier campaigns, to reopening the Burma Road. Stilwell, on the other hand, believed that reopening the Burma Road was vital to China, as all the ports on mainland China were under Japanese control. Churchill's "Europe First" policy obviously did not sit well with Chiang. Furthermore, the later British insistence that China send more and more troops into Indochina in the Burma Campaign was regarded as an attempt by Great Britain to use Chinese manpower to secure Britain's colonial holdings in Southeast Asia and prevent the gate to India from falling to Japan. Chiang also believed that China should divert its troops to eastern China to defend the airbases of the American bombers, a strategy that U.S. General Claire Chennault supported. In addition, Chiang voiced his support of Indian independence in a meeting with Mahatma Gandhi in 1942, which further soured the relationship between China and the United Kingdom. The United States saw the Chinese theater as a means to tie up a large number of Japanese troops, as well as a location for American airbases from which to strike the Japanese home islands. In 1944, with the Japanese position in the Pacific deteriorating fast, the Imperial Japanese Army launched Operation Ichigo to attack the airbases from which the American bombers had begun to operate. This brought the Hunan, Henan, and Guangxi provinces under Japanese administration. The failure of the Chinese forces to defend these areas led to the replacement of Stilwell by Major General Albert Wedemeyer. However, Chinese troops under the command of Sun Li-jen drove the Japanese out of North Burma to secure the Ledo Road, a supply route to China. In spring 1945 the Chinese launched offensives and retook Hunan and Guangxi. With the training and equipping of the Chinese army well in progress, Albert Wedemeyer planned to launch Operation Carbonado in the summer of 1945 to retake Guangdong, obtain a coastal port, and from there drive northwards toward Shanghai. However, the dropping of the atomic bombs hastened the Japanese surrender, and these plans were not put into action. As of mid-1945, all sides expected the war to continue for at least another year. On August 6, an American B-29 bomber dropped the first atomic bomb used in combat on Hiroshima. On August 9, the Soviet Union renounced its neutrality pact with Japan and attacked the Japanese in Manchuria, fulfilling its Yalta Conference pledge to attack the Japanese within three months of the end of the war in Europe. The attack was made by three Soviet army groups.
In less than two weeks the Kwantung Army in Manchuria, consisting of over a million men but lacking adequate armor, artillery, and air support, and depleted of many of its best soldiers by the demands of the Allies' Pacific drive, had been destroyed by the Soviets. Later in the day on August 9, a second atomic bomb was dropped by the United States on Nagasaki. Emperor Hirohito officially capitulated to the Allies on August 15, 1945, and the official surrender was signed aboard the battleship USS Missouri on September 2. The Japanese troops in China formally surrendered on September 9, 1945, and under the provisions of the Cairo Conference of 1943, Manchuria, Taiwan, and the Pescadores Islands reverted to China. In 1945 China emerged from the war nominally a great military power but actually a nation economically prostrate and on the verge of all-out civil war. The economy deteriorated, sapped by the military demands of a long, costly war and internal strife, by spiraling inflation, and by Nationalist profiteering, speculation, and hoarding. Starvation came in the wake of the war, as large swathes of the prime farming areas had been ravaged by the fighting. Millions were rendered homeless by floods and the destruction of towns and cities in many parts of the country. The problems of rehabilitating the formerly Japanese-occupied areas and of reconstructing the nation from the ravages of a protracted war were staggering. The situation was further complicated by an Allied agreement at the Yalta Conference in February 1945 that brought Soviet troops into Manchuria to hasten the termination of the war against Japan. Although the Chinese had not been present at Yalta, they had been consulted; they had agreed to have the Soviets enter the war in the belief that the Soviet Union would deal only with the Nationalist government. After the war, the Soviet Union, as part of the Yalta agreement allowing a Soviet sphere of influence in Manchuria, dismantled and removed more than half the industrial equipment left there by the Japanese. The Soviet presence in northeast China enabled the Communists to move in long enough to arm themselves with the equipment surrendered by the withdrawing Japanese army. The war left the Nationalists severely weakened, and their policies left them unpopular. Meanwhile, the war strengthened the Communists, both in popularity and as a viable fighting force. At Yan'an and elsewhere in the liberated areas, Mao Zedong was able to adapt Marxism-Leninism to Chinese conditions. He taught party cadres to lead the masses by living and working with them, eating their food, and thinking their thoughts. When this failed, however, more repressive forms of coercion, indoctrination, and ostracism were employed. The Red Army fostered an image of conducting guerrilla warfare in defense of the people. In addition, the Chinese Communist Party (CCP) was effectively split into "Red" (cadres working in the liberated areas) and "White" (cadres working underground in enemy-occupied territory) spheres, a split that would later sow factionalism within the CCP. Communist troops adapted to changing wartime conditions and became a seasoned fighting force. Mao also began preparing for the establishment of a new China, well away from the front at his base in Yan'an. In 1940 he outlined the program of the Chinese Communists for an eventual seizure of power and began his final push for the consolidation of CCP power under his authority.
His teachings became the central tenets of CCP doctrine, later formalized as "Mao Zedong Thought". With skillful organizational and propaganda work, the Communists increased party membership from 100,000 in 1937 to 1.2 million by 1945. Soon afterwards, all-out war broke out between the KMT and the CCP, a war that would leave the Nationalists banished to Taiwan and the Communists victorious on the mainland. The question of which political group directed the Chinese war effort and exerted most of the effort to resist the Japanese remains controversial. In the Chinese People's War of Resistance Against Japan Memorial near the Marco Polo Bridge and in mainland Chinese textbooks, the People's Republic of China (PRC) claims that the Nationalists mostly avoided fighting the Japanese in order to preserve their strength for a final showdown with the Communists, while the CCP directed Chinese resistance efforts against the Japanese invasion. Recently, however, with a change in the political climate, the CCP has admitted that certain Nationalist generals made important contributions in resisting the Japanese. The official history in mainland China now states that the KMT fought a bloody, yet indecisive, frontal war against Japan, while the CCP engaged the Japanese forces in far greater numbers behind enemy lines. For the sake of Chinese reunification and of appeasing the ROC on Taiwan, the PRC has begun to "acknowledge" the Nationalists and the Communists as "equal" contributors, because the victory over Japan belonged to the Chinese people rather than to any political party. Leaving aside Nationalist sources, scholars researching third-party Japanese and Soviet sources have documented quite a different view. Such studies claim that the Communists actually played a minuscule role in the war against the Japanese compared to the Nationalists, and used guerrilla warfare as well as opium sales to preserve their strength for a final showdown with the Kuomintang. This is congruent with the Nationalist viewpoint, as demonstrated by history textbooks published in Taiwan, which give the KMT credit for the brunt of the fighting. According to these third-party scholars, the Communists were not the main participants in any of the 22 major battles between China and Japan, most of which involved more than 100,000 troops on both sides. The Soviet liaison to the Chinese Communists, Peter Vladimirov, documented that he never once found the Chinese Communists and the Japanese engaged in battle during the period from 1942 to 1945. He also expressed frustration at not being allowed by the Chinese Communists to visit the frontline, although as a foreign diplomat Vladimirov may have been overly optimistic in expecting to be allowed to join Chinese guerrilla sorties. The Communists usually avoided open warfare (the Hundred Regiments Campaign and the Battle of Pingxingguan are notable exceptions), preferring to fight in small squads that harassed the Japanese supply lines. In comparison, right from the beginning of the war the Nationalists committed their best troops (including the 36th, 87th, and 88th divisions, the crack divisions of Chiang's Central Army) to defend Shanghai from the Japanese. The Japanese considered the Kuomintang, rather than the Communists, their main enemy, and bombed the Nationalist wartime capital of Chongqing to the point that it was the most heavily bombed city in the world to date.
The KMT army suffered some 3.2 million casualties, while the CCP increased its military strength from minimally significant numbers to 1.7 million men. This change in strength was a direct result of Japanese forces fighting mainly in central and southern China, away from major Communist strongholds such as those in Shaanxi. While the PRC government has been accused of greatly exaggerating the CCP's role in fighting the Japanese, the legacy of the war is more complicated in the Republic of China on Taiwan. Traditionally, the government has held celebrations marking Victory Day on September 9 (now known as Armed Forces Day) and Taiwan's Retrocession Day on October 25. However, with the transfer of power from the KMT to the pro-independence Democratic Progressive Party in 2000 and the rise of desinicization, events commemorating the war have become less commonplace. Many supporters of Taiwan independence see no relevance in preserving the memory of a war of resistance fought primarily on mainland China (indeed, some native Taiwanese were drafted into the IJA and fought for Japan). Still, many KMT supporters, particularly veterans who retreated with the government in 1949, retain an emotional interest in the war. For example, in celebrating the sixtieth anniversary of the end of the war in 2005, the cultural bureau of the KMT stronghold Taipei held a series of talks in the Sun Yat-sen Memorial Hall on the war and post-war developments, while the KMT held its own exhibit in the KMT headquarters. In 2008 the KMT won the presidential election, which may shift the government's position once more. To this day the war is a major point of contention between China and Japan. The war remains a major roadblock for Sino-Japanese relations, and many people, particularly in China, harbour grudges over the war and related issues. A small but vocal group of Japanese nationalists and right-wingers deny a variety of crimes attributed to Japan. The Japanese invasion of its neighbours is often glorified or whitewashed, and wartime atrocities, most notably the Nanjing Massacre, comfort women, and Unit 731, are frequently denied by such individuals. The Japanese government has also been accused of historical revisionism for approving school textbooks that omit or gloss over Japan's militant past. In response to criticism of Japanese textbook revisionism, the PRC government has in turn been accused of using the war to stir up already growing anti-Japanese feelings, whipping up nationalist sentiment and diverting its citizens' attention from internal matters. The conflict lasted 8 years, 1 month, and 3 days (measured from 1937 to 1945). Chinese sources put the total number of military and non-military casualties, both dead and wounded, at 35 million. Most Western historians believe that the total number of casualties was at least 20 million. The property loss suffered by the Chinese was valued at 383 billion US dollars at the July 1937 exchange rate, roughly 50 times the GDP of Japan at that time (US$7.7 billion). The National Revolutionary Army (NRA) throughout its lifespan employed approximately 4,300,000 regulars, in 370 Standard Divisions (正式師), 46 New Divisions (新編師), 12 Cavalry Divisions (騎兵師), 8 New Cavalry Divisions (新編騎兵師), 66 Temporary Divisions (暫編師), and 13 Reserve Divisions (預備師), for a grand total of 515 divisions. However, many divisions were formed from two or more other divisions, and many were not active at the same time.
The number of active divisions at the start of the war in 1937 was about 170 NRA divisions. The average NRA division had 4,000–5,000 troops. A Chinese army was roughly equivalent to a Japanese division in terms of manpower, but the Chinese forces largely lacked artillery, heavy weapons, and motorized transport. The shortage of military hardware meant that three to four Chinese armies had the firepower of only one Japanese division. Because of these material constraints, available artillery and heavy weapons were usually assigned to specialist brigades rather than to the general division, which caused further problems, as the Chinese command structure lacked precise coordination. The relative fighting strength of a Chinese division was even weaker when relative capacity in aspects of warfare such as intelligence, logistics, communications, and medical services is taken into account. The National Revolutionary Army can be divided roughly into two groups. The first is the so-called dixi (嫡系, "direct descent") group, which comprised divisions trained by the Whampoa Military Academy and loyal to Chiang Kai-shek, and can be considered the Central Army (中央軍) of the NRA. The second group, known as the zapai (雜牌, "miscellaneous units"), comprised all divisions led by non-Whampoa commanders and is more often called the Regional Army or the Provincial Army (省軍). Even though both military groups were part of the National Revolutionary Army, the distinction between them lay largely in their allegiance to the central government of Chiang Kai-shek. Many former warlords and regional militarists were incorporated into the NRA under the flag of the Kuomintang, but in reality they retained much independence from the central government. They also controlled much of the military strength of China, the most notable among them being the Guangxi, Shanxi, Yunnan, and Ma cliques. Although the Chinese Communist forces fought during the war as a nominal part of the NRA, the number of those on the CCP side, due to their guerrilla status, is difficult to determine, though estimates place the total number of the Eighth Route Army, the New Fourth Army, and irregulars in the Communist armies at 1,300,000. For more information on the combat effectiveness of the Communist armies and other units of the Chinese forces, see Chinese armies in the Second Sino-Japanese War. The Central Army possessed 80 infantry divisions with approximately 8,000 men each, nine independent brigades, nine cavalry divisions, two artillery brigades, 16 artillery regiments, and three armored battalions. The Chinese Navy displaced only 59,000 tonnes, and the Chinese Air Force comprised only about 700 obsolete aircraft. Chinese weapons were mainly produced in the Hanyang and Guangdong arsenals. However, for most of the German-trained divisions, the standard firearms were the German-made 7.92 mm Gewehr 98 and Karabiner 98k. A local variant of the 98k-style rifle was often called the "Chiang Kai-shek rifle", a Chinese copy of the Mauser Standard Modell. Another rifle they used was the Hanyang 88. The standard light machine gun was a local copy of the Czech 7.92 mm Brno ZB26; there were also Belgian and French light machine guns. Surprisingly, the NRA did not purchase any of the famous Maschinengewehr 34s from Germany, but did produce its own copies of them. On average, in these divisions there was one machine gun set for each platoon. Heavy machine guns were mainly locally made 1924 water-cooled Maxim guns, built from German blueprints. On average, every battalion would get one heavy machine gun.
The standard sidearm was the 7.63 mm Mauser M1932 semi-automatic pistol. Some divisions were equipped with 37 mm PaK 35/36 anti-tank guns and/or mortars from Oerlikon, Madsen, and Solothurn. Each infantry division had six French Brandt 81 mm mortars and six Solothurn 20 mm autocannons. Some independent brigades and artillery regiments were equipped with Bofors 72 mm L/14 or Krupp 72 mm L/29 mountain guns. There were 24 Rheinmetall 150 mm L/32 sFH 18 howitzers (bought in 1934) and 24 Krupp 150 mm L/30 sFH 18 howitzers (bought in 1936). Infantry uniforms were basically redesigned Zhongshan suits. Leg wrappings were standard for soldiers and officers alike, since the primary mode of movement for NRA troops was on foot. The helmets were the most distinguishing characteristic of these divisions. From the moment German M35 helmets (standard issue for the Wehrmacht until late in the European theatre) rolled off the production lines in 1935, and until 1936, the NRA imported 315,000 of them, each bearing the 12-ray sun emblem of the ROC on the sides. Other equipment included cloth shoes for soldiers, leather shoes for officers, and leather boots for high-ranking officers. Every soldier was issued ammunition, an ammunition pouch or harness, a water flask, combat knives, a food bag, and a gas mask. Warlord forces, on the other hand, varied greatly in terms of equipment and training. Some warlord troops were notoriously under-equipped, such as Shanxi's Dadao Teams and the Yunnanese army. Some, however, were highly professional forces with their own air forces and navies. The quality of the Guangxi army was almost on par with the Central Army's, as the Guangzhou region was wealthy and the local army could afford foreign instructors and arms. The Muslim Ma clique to the northwest was famed for its well-trained cavalry divisions.
EXAMINE HUMAN IMPACT ON BIODIVERSITY
Biodiversity is the variety of different types of life found on Earth and the variation within species. It is a measure of the variety of organisms present in different ecosystems, and can refer to genetic variation, ecosystem variation, or species variation (the number of species) within an area, biome, or planet. Biodiversity is not distributed evenly on Earth; it is richest in the tropics. Terrestrial biodiversity tends to be greater near the equator, which seems to be the result of the warm climate and high primary productivity.
HUMAN IMPACT ON BIODIVERSITY
The impact of humans on biodiversity can be either positive or negative; some of these impacts are outlined and examined below.
IMPACT ON FLORA AND FAUNA
Globally, over 1,000 (87%) of a total of 1,226 threatened bird species are impacted by agriculture. More than 70 species are affected by agricultural pollution, 27 of them seriously. Europe's farmland birds have declined by 48% in the past 26 years (European Bird Census Council, 2008). Pesticides and herbicides pose a threat to 37 threatened bird species globally (BirdLife, 2008), in addition to the deleterious effects of agricultural chemicals on ground water (Bexfield, 2008). Domesticated species diversity is also under threat: worldwide, 6,500 breeds of domesticated mammals and birds are under immediate threat of extinction, reducing the genetic diversity available as options in a changing environment (Diaz et al., 2007; MA, 2005).
IMPACT ON THE ECOSYSTEM
With the loss of biodiversity in both natural and agricultural systems comes the loss of other ecosystem services. In addition to food, fibre, and water provisioning, regulating services such as air, water, and climate regulation, water purification, pollination, and pest control, as well as resilience against natural hazards, disasters, and environmental change, are among the numerous ecosystem services being lost under the increasing intensification and expansion of agriculture. After habitat loss, overharvesting has had the greatest effect on biodiversity. In fact, overharvesting and habitat loss often occur simultaneously, as removal of an organism from its environment can have irreversible impacts on the environment itself. Humans have historically exploited plant and animal species to maximize short-term profit, at the expense of the sustainability of the species or population. This exploitation follows a predictable pattern: initially, a species harvested from the wild can turn a substantial profit, encouraging more people to get involved in its extraction. This increased competition encourages the development of larger-scale and more efficient methods of extraction, which inevitably deplete the resource. Eventually, quota systems are applied, leading to more competition, decreased earnings, and the need for government subsidies to support the extraction industry. Most of Quebec's population (98%) and human activity is concentrated in the St. Lawrence watershed. Throughout the 20th century, degradation of the St. Lawrence has followed technological progress and urbanization, and this has taken its toll. Pollution involves the addition of materials that are usually not present, or present in very different amounts, and can be due to factors such as toxic discharges: metals, organic chemicals, and suspended sediments, usually found in industrial and municipal effluents discharged directly into waterbodies.
Toxic discharges can adversely impact the biota (living organisms) of an ecosystem by killing them, weakening them, or affecting their ability to carry out essential biological functions (feeding, reproducing, etc.). With the urbanization of the population, proportionally fewer people were involved in food production. This led to changes in agricultural practices, such as the development of modernized agricultural techniques using the moldboard plow, motorized tractors, hybrid cultivars, and inorganic fertilizers and pesticides. This created new pressures on the land, dramatically increasing the influence of agricultural practices on biodiversity. Human impacts on biodiversity are enormous and are gradually hurting the Earth. But it's not all bad news. Many animal and plant species have adapted to the new stresses, food sources, predators, and threats in urban and suburban environments, where they thrive in close proximity to humans. Their success provides researchers with valuable, and sometimes unexpected, insights into evolutionary and selective processes. Because these adaptations have had to be rapid, cities are, in some respects, ideal laboratories for studying natural selection. Lastly, the impact of mankind on biodiversity has clearly been detrimental to many animals and plants, but the story is more complex and subtle than has been appreciated. Urbanization provides ready-made laboratories for studying evolution and adaptive processes, and examining the influence of humans on flora and fauna creates the potential to mitigate any negative effects. According to Marzluff, we should be more positive about our relationship with the natural world: "We should celebrate the creative aspects of our impact on animals in addition to concerning ourselves with the negative effects."
Choosing a Book in the Comfort Zone
Estimates indicate that students can acquire and retain two or three words per day through direct, explicit, contextualized instruction. Given a 180-day school year, that comes to 360-540 words per year, a far cry from the vast number of words necessary for adequate vocabulary growth. In addition to direct, explicit instruction in word meanings, you can encourage students to read more. To accomplish this, students must be taught how to select books at their appropriate reading level, avoiding boredom and frustration. Reading in the "comfort zone" means that students read well enough to understand the text. There are three components to reading in the comfort zone:
- Accurate decoding of 95% of the words or better
- Knowledge of at least 90% of the words
- Comprehension of at least 75% of the text
Frequently, student novels and textbooks are written above the comfort zone of at-risk readers. Therefore, students must be explicitly taught how to select appropriate books while enjoying incentives that encourage them to read. A book in the comfort zone means you can read 98% of the words accurately. To test a book, follow these steps (the pass/fail rule is restated in a short sketch after the list):
- Select a book that seems interesting.
- Read the title and the front and back covers.
- Look at the size of the font, the illustrations, the white space, and the number of pages.
- If the book still seems interesting to you, continue with the following steps. If not, choose another book.
- Choose three sections in the book to test: one near the beginning, one near the middle, and one near the end.
- Count out about 20 words in the first section, or about three lines of text.
- "Whisper-read" the passage.
- Mark any words you have trouble with or do not understand. (Do not count names of people.)
- Look away from the passage and tell yourself what you just read.
- If you missed more than one word, the passage was too hard.
- If you could not explain what the passage was about, the passage was too hard.
- Repeat Steps 3-8 for the middle and ending sections of the book.
- If you missed zero or one word in each passage and you could explain each passage, the book is in your comfort zone. Read and enjoy! (If two or more passages are too hard, save the book for later in the year.)
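For readers who want the selection rule at a glance, here is a minimal sketch in Python (the function names and input format are mine, not part of the guide): a book passes if every one of the three sampled passages passes.

    def passage_ok(words_missed, could_retell):
        # A ~20-word passage passes if at most one word was missed
        # and the student could retell what the passage said.
        return words_missed <= 1 and could_retell

    def in_comfort_zone(passages):
        # passages: three (words_missed, could_retell) pairs taken
        # near the beginning, middle, and end of the book.
        return all(passage_ok(m, r) for m, r in passages)

    print(in_comfort_zone([(0, True), (1, True), (0, True)]))   # True: read and enjoy
    print(in_comfort_zone([(0, True), (2, True), (0, False)]))  # False: too hard for now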
A Guide to Crafts For All Around the Calendar
Crafts are Crucial for a Child's Development
The majority of children love to draw, paint, and create pieces of artwork and crafts. Making crafts and art raises a child's self-esteem and gives them a chance to freely express themselves and bring their imagination to life, while contributing to vital developmental skills at every age. Not only do children develop hand-eye coordination and learn to express themselves non-verbally, but they also gain fuller use of the right brain hemisphere, which controls creativity, imagination, and emotions. It doesn't matter how the child's artwork or craft looks. What is important is that each child is allowed to put his imagination and creativity to work and make his project with as little assistance as possible. Crafts also help children gain a better understanding of whatever lesson, animal, or subject they are learning about. That's why our experts have put together a guide to craft projects for all four seasons. These crafts are sure to amaze you, and the kids will love them, too.
Fall Into Autumn Crafts
These sensational autumn crafts are more than nice-looking fall displays for your classroom. A leaf collage bookmark and impasto 3D apples are just a glimpse of what's inside. All our crafts will teach children important facts or a variety of art lessons. Mexican crafts help children learn about different cultures and traditions. So this fall, hop on the bus and let your students have fun imagining, creating, and expressing themselves through spectacular artwork and crafts.
- Elementary Art Lesson Plans: Apple Pie Plates
- A Handprint Craft for Halloween
- Paper Plate Pumpkin Patch: A Kindergarten Art Project
- Elementary Art Lesson Plan: Very Busy Spider by Eric Carle
- Art Ideas Using Paper Collage for Your Little Picassos
- Indian Teepee Craft: School Project for 5-Year-Olds
- Fall Leaf Creations and Leaf Man Art Projects for Art Class
- Mexican Crafts: Projects for the Kindergarten Classroom
This winter, celebrate the holidays with cool crafts that will help students learn while creating beautiful Christmas ornaments, dragons, elves, and more. Students will learn about the Chinese New Year while making colorful, dancing dragons. Teaching children about hibernation couldn't be easier than having them create hibernation crafts and snow scenes. Children enjoy making beautiful ornaments to hang on the Christmas tree, and parents will be amazed when they open these magnificent ornaments on Christmas morning.
- Have Your Students Celebrate Winter by Creating a Scene Out of Food
- How to Make A Snowman Using Styrofoam Balls
- Hibernation Crafts and Recipe for Kindergarten Kids in the Classroom
- Create Dancing Dragons in Art to Celebrate the Chinese New Year
- Kindergarten Christmas Ornaments to Practice Phonics Sounds
- Elementary Art Lesson Plan: Papa Please Get Me the Moon
- Valentine's Day Crafts for Fifth Grade
- Valentine's Day Crafts for Kindergarten Students
Students will love making these beautiful spring crafts. Here you will find a Mother's Day gift that not only makes a great present for mom but also helps save the Earth, or make dad a special pencil holder for Father's Day. St. Patrick's Day Irish flags are a great way to teach students about Ireland.
There's an Easter craft to keep excited kids occupied while they wait for the Easter Bunny to visit. Once the school year comes to an end, teachers will find gorgeous picture frames with ocean shells, fish, and other crafts right here in this guide to craft projects for all four seasons.
- Make Dad a Special Pencil Holder for Father's Day
- Easter Bunny Jars: A Quick and Easy Easter Craft for Kids
- Mother's Day Crafts and Presents That Protect Mother Earth
- Celebrating St. Patrick's Day With Some Great St. Patrick's Day Crafts for Kids
- Kindergarten Women's History Lesson: Make a Brown Paper Bag Kite
Most teachers would agree that summer is their favorite time of year. However, those teaching summer school, or parents looking for fun crafts to involve their children in this season, may want to take a look at the fresh, crisp crafts we have in store for you. Shark crafts, bird's nest crafts, and Fourth of July crafts are just a few that will help combat children's boredom this summer.
- Little Bird Nests
- A Mouse Craft Puppet for Young Students
- A Shark Craft for Kindergarten Students
- Super Summer Crafts for the Kindergarten Class
- Summer Art Crafts for Kids
In addition to these awesome crafts, there are several more right here on our site. Check out our early childhood crafts, which include mysterious crows, mosaic owls, beach and ocean crafts, spiders, bats, and much more.
"We had a really nice theoretical model in the nineties and some of the data seemed to support it, but the more data we got, the muddier the waters became," says Luisa Rebull of NASA's Spitzer Science Center in Pasadena, Calif., lead author of the paper that appeared in The Astrophysical Journal July 20. The model was based on the fact that as a large cloud of gas and debris begins contracting under its own gravity, conservation of angular momentum dictates that it will rotate faster and faster as it shrinks, much like a spinning skater who pulls in her arms in order to twirl at top speed. These infant stars eventually spin so fast that any excess gas and dust is flattened into a pancake-like disk around the star, which may eventually yield planets. Although all stars are thought to form these protoplanetary disks, for unknown reasons some disks remain prominent and others become smaller or nearly disappear. Because a disk is composed of ionized gas and dust, a spinning star's magnetic field gets caught up in it, and the result is "like a spoon being dragged through molasses," Rebull says. Angular momentum is transferred to the disk and the star rotates more slowly, with a much greater effect seen in stars with larger disks. "It was too good a solution to let go," says Rebull, who has been working on this problem for over a decade. Until now, observations have yielded mixed results (only some of the data supported the theory), and all the observations suffered from small sample sizes and the inaccuracies inherent in using optical telescopes to determine which young stars had prominent disks. Finally, the launch of Spitzer in August 2003 meant astronomers could readily identify these large disks; they are heated by the stars and give off infrared light in the frequencies Spitzer is designed to detect. Rebull examined nearly 500 stars in the Orion Nebula and found that slowly spinning stars were five times more likely to have prominent disks than fast spinners. "This is the clearest evidence yet that indeed the disks are where the angular momentum is going," says Rebull. "It's very encouraging that we're on the right track."
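A sketch of the scaling behind the skater analogy, assuming the cloud's mass M stays fixed and its moment of inertia scales like that of a uniform body (I proportional to M R squared):

    L = I ω ∝ M R² ω = constant, so ω ∝ 1/R².

Under these assumptions, a cloud that contracts to one tenth of its radius spins about a hundred times faster, which is why a forming star must shed angular momentum somewhere, such as into a surrounding disk.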
There's still a great degree of skepticism about the value of virtual worlds in education, but there's plenty of activity going on and valuable experience being gained. At Stanford University's Virtual Human Interaction Lab, researchers have carried out an interesting piece of research on how virtual worlds can be used to bring about changes in attitude in ways that cannot be duplicated by more traditional communication. This particular experiment was about people's attitudes to the environmental impact of paper use. One group was given written descriptions of the effects our paper consumption has on our forests and how non-recycled paper leads to deforestation. Another group went into a virtual world and cut down trees with a virtual chainsaw. Despite the graphic details and convincing rhetoric of the written accounts, the group who simply read about the problem did not change their paper consumption after the experiment, whereas those who had been in the virtual world really did change their behaviour afterwards. The research was not really about paper consumption or deforestation but about how virtual experiences can have a real effect on our behaviour. This is not the first example of such research, but it shows that virtual and augmented reality will increasingly be used for simulations and training as an effective behaviour reinforcer, and in many cases will enable us to simulate experiences that would be impossible or extremely costly to carry out for real. Watch the video below for a report on the Stanford experiment and read the article from Stanford Report, New virtual reality research – and a new lab – at Stanford.
Database Management System Tutorial
Database Management System, or DBMS for short, refers to the technology of storing and retrieving users' data with utmost efficiency along with appropriate security measures. This tutorial explains the basics of DBMS, such as its architecture, data models, data schemas, data independence, the E-R model, the relational model, relational database design, and storage and file structure, and much more. It will especially help computer science graduates understand the basic-to-advanced concepts related to Database Management Systems. Before proceeding with this tutorial, it is recommended that you have a good understanding of basic computer concepts such as primary memory, secondary memory, and data structures and algorithms.
Palm-leaf manuscripts (Tamil: ஓலைச் சுவடி, Telugu: తాళపత్ర గ్రంథం, Oriya: ତାଳପତ୍ର ପୋଥି, Kannada: ತಾಳೆಗರಿ, Marathi: ताडपत्री ग्रंथ, Hindi: तालपत्र, Malayalam: താളിയോല, Sinhala: පුස්කොළ ලෙඛන, Javanese: rontal, Indonesian: lontar) are manuscripts made out of dried palm leaves. Palm leaves were used as writing materials in South Asia and Southeast Asia dating back to the 5th century BCE, and possibly much earlier. They were used to record actual and mythical narratives. Initially knowledge was passed down orally, but after the invention of alphabets and their diffusion throughout South Asia, people eventually began to write it down on the dried and smoke-treated palm leaves of the Palmyra palm or talipot palm. Once written down, each document had a limited lifespan before it had to be copied onto a new set of dried palm leaves. With the spread of Indian culture to Southeast Asian countries such as Indonesia, Cambodia, Thailand, and the Philippines, these nations became home to collections of documents on palm leaf. In Indonesia the palm-leaf manuscript is called lontar. The Indonesian word 'lontar' is a corruption of the Old Javanese rontal, which is composed of two Old Javanese words: 'ron' (leaf) and 'tal' (tal tree). The word 'rontal' therefore means 'leaf of the tal tree'. The rontal tree belongs to the family of palm trees (Borassus flabellifer). Due to the shape of its leaves, which spread like a fan, these trees are also known as 'fan trees'. The leaves of the rontal tree have always been used for many purposes, such as the making of plaited mats, palm sugar wrappers, water scoops, ornaments, ritual tools, and writing material. Today, the art of writing on rontal still survives in Bali, performed by Balinese Brahmins as a sacred duty of rewriting Hindu sacred texts. With the introduction of printing presses in the early 19th century, this cycle of copying from palm leaves came to an end. Many governments are making efforts to preserve what is left of their palm-leaf documents. The rounded or diagonal shapes of the letters of many of the scripts of southern India and Southeast Asia, such as Lontara, Javanese, Balinese, Oriya, and Tamil, are believed to have developed as an adaptation to writing on palm leaves, as angular letters tend to split the leaf. The palm-leaf manuscripts of Odisha include texts of scriptures, pictures of Devadasi, and various mudras of the Kamasutra. Some of the early discoveries of Oriya palm-leaf manuscripts include writings like the Smaradipika, Ratimanjari, Pancasayaka, and Anangaranga, in both Oriya and Sanskrit. The State Museum of Odisha at Bhubaneswar houses 40,000 palm-leaf manuscripts. Most of them are written in the Oriya script, though the language is Sanskrit. The oldest manuscript here belongs to the 14th century, but the text can be dated to the 2nd century. In 1997 the United Nations Educational, Scientific and Cultural Organisation (UNESCO) recognised the Tamil Medical Manuscript Collection as part of the Memory of the World Register. A very good example of the use of palm-leaf manuscripts to store history is the Tamil grammar Tolkāppiyam, written around the 4th century. A global digitalization project led by the Tamil Heritage Foundation collects, preserves, digitizes, and makes ancient palm-leaf manuscript documents available to users via the internet.
Javanese and Balinese
Many old manuscripts dating from ancient Java, Indonesia, were written on rontal palm-leaf manuscripts.
Manuscripts dating from the 14th to 15th century Majapahit period, or even earlier, such as the Arjunawiwaha, Smaradhana, Nagarakretagama, and Sutasoma, have been discovered on the neighboring islands of Bali and Lombok. This suggests that the tradition of preserving, copying, and rewriting palm-leaf manuscripts continued for centuries. Other palm-leaf manuscripts include the Sundanese Carita Parahyangan, Sanghyang Siksakanda ng Karesian, and Bujangga Manik.
Basics of Information Theory
Computer Science Department, Carnegie Mellon University
Version of 24 November 2004
Although information is sometimes measured in characters, as when describing the length of an email message, or in digits (as in the length of a phone number), the convention in information theory is to measure information in bits. A "bit" (the term is a contraction of binary digit) is either a zero or a one. Because there are 8 possible configurations of three bits (000, 001, 010, 011, 100, 101, 110, and 111), we can use three bits to encode any integer from 1 to 8. So when we refer to a "3-bit number", what we mean is an integer in the range 1 through 8. All logarithms used in this paper will be to the base two, so log 8 is 3. Similarly, log 1000 is slightly less than 10, and log 1,000,000 is slightly less than 20. Suppose you flip a coin one million times and write down the sequence of results. If you want to communicate this sequence to another person, how many bits will it take? If it's a fair coin, the two possible outcomes, heads and tails, occur with equal probability. Therefore each flip requires 1 bit of information to transmit. To send the entire sequence will require one million bits. But suppose the coin is biased so that heads occur only 1/4 of the time, and tails occur 3/4. Then the entire sequence can be sent in 811,300 bits, on average. (The formula for computing this will be explained below.) This would seem to imply that each flip of the coin requires just 0.8113 bits to transmit. How can you transmit a coin flip in less than one bit, when the only language available is that of zeros and ones? Obviously, you can't. But if the goal is to transmit an entire sequence of flips, and the distribution is biased in some way, then you can use your knowledge of the distribution to select a more efficient code. Another way to look at it is: a sequence of biased coin flips contains less "information" than a sequence of unbiased flips, so it should take fewer bits to transmit. Let's look at an example. Suppose the coin is very heavily biased, so that the probability of getting heads is only 1/1000, and tails is 999/1000. In a million tosses of this coin we would expect to see only about 1,000 heads. Rather than transmitting the results of each toss, we could just transmit the numbers of the tosses that came up heads; the rest of the tosses can be assumed to be tails. Each toss has a position in the sequence: a number between 1 and 1,000,000. A number in that range can be encoded using just 20 bits. So, if we transmit 1,000 20-bit numbers, we will have transmitted all the information content of the original one-million-toss sequence, using only around 20,000 bits. (Some sequences will contain more than 1,000 heads, and some will contain fewer, so to be perfectly correct we should say that we expect to need 20,000 bits on average to transmit a sequence this way.)
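As an aside, this head-position scheme is easy to try in code. Here is a minimal simulation sketch in Python (the function name and data format are mine, not from the original text); it reproduces the roughly 20,000-bit figure for the 20-bit absolute-position encoding:

    import random

    def absolute_position_bits(flips, field_bits=20):
        # Transmit one fixed-width position number per head;
        # all other tosses are assumed to be tails.
        heads = [i for i, f in enumerate(flips, start=1) if f == 'H']
        return field_bits * len(heads)

    random.seed(1)
    flips = ['H' if random.random() < 1/1000 else 'T' for _ in range(1_000_000)]
    total = absolute_position_bits(flips)
    print(total, total / 1_000_000)   # ~20,000 bits, i.e. about 0.02 bits per flip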
We can do even better. Encoding the absolute positions of the heads in the sequence takes 20 bits per head, but this allows us to transmit the heads in any order. If we agree to transmit the heads systematically, by going through the sequence from beginning to end, then instead of encoding their absolute positions we can just encode the distance to the next head, which takes fewer bits. For example, if the first four heads occurred at positions 502, 1609, 2454, and 2607, then their encoding as "distance to the next head" would be 502, 1107, 845, 153. On average, the distance between two heads will be around 1,000 flips; only rarely will the distance exceed 4,000 flips. Numbers in the range 1 to 4,000 can be encoded in 12 bits. (We can use a special trick to handle the rare cases where heads are more than 4,000 flips apart, but we won't go into the details here.) So, using this more sophisticated encoding convention, a sequence of one million coin tosses containing about 1,000 heads can be transmitted in just 12,000 bits, on average. Thus a single coin toss takes just 0.012 bits to transmit. Again, this claim only makes sense because we're actually transmitting a whole sequence of tosses. What if we invented an even cleverer encoding? What is the limit on how efficient any encoding can be? The limit works out to about 0.0114 bits per flip, so we're already very close to the optimal encoding. The information content of a sequence is defined as the number of bits required to transmit that sequence using an optimal encoding. We are always free to use a less efficient coding, which will require more bits, but that does not increase the amount of information transmitted.
Variable Length Codes
The preceding examples were based on fixed-length codes, such as 12-bit numbers encoding values between 1 and 4,000. We can often do better by adopting a variable length code. Here is an example. Suppose that instead of flipping a coin we are throwing an eight-sided die. Label the sides A-H. To encode a number between 1 and 8 (or between 0 and 7, if you're a computer scientist) takes 3 bits, so a thousand throws of a fair die will take 3,000 bits to transmit. Now suppose the die is not fair, but biased in a specific way: the chances of throwing an A are 1/2, the chances of throwing a B are 1/4, C is 1/8, D is 1/16, E is 1/32, F is 1/64, and G and H are each 1/128. Let us verify that the sum of these probabilities is 1, as it must be for any proper probability distribution:
1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64 + 1/128 + 1/128 = 1.
Now let's consider an encoding ideally suited to this probability distribution. If we throw the die and get an A, we will transmit a single 0. If we throw a B we will transmit a 1 followed by a 0, which we'll write 10. If we throw a C the code will be 11 followed by 0, or 110. Similarly we'll use 1110 for D, 11110 for E, 111110 for F, 1111110 for G, and 1111111 for H. Notice that the code for A is very concise, requiring a single bit to transmit. The codes for G and H require 7 bits each, which is far more than the 3 bits needed to transmit one throw if the die were fair. But Gs and Hs occur with low probability, so we will rarely need to use that many bits to transmit a single throw. On average we will need fewer than 3 bits. We can easily calculate the average number of bits required to transmit a throw: it's the sum of the number of bits required to transmit each of the eight possible outcomes, weighted by the probability of that outcome:
(1/2)(1) + (1/4)(2) + (1/8)(3) + (1/16)(4) + (1/32)(5) + (1/64)(6) + (1/128)(7) + (1/128)(7) = 1.984 bits.
So 1,000 throws of the die can be transmitted in just 1,984 bits rather than 3,000. This simple variable length code is the optimal encoding for the probability distribution above. In general, though, probability distributions are not so cleanly structured, and optimal encodings are a lot more complicated. Exercise: suppose you are given a five-sided biased die that has a probability of 1/8 of coming up A, 1/8 for B, and 1/4 for each of C, D, and E. Design an optimal code for transmitting throws of this die. (Answer at end.)
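The eight-symbol code above is simple enough to implement directly. Here is a minimal sketch in Python (names are mine) that round-trips a few throws and confirms the 1.984-bit average:

    CODE = {'A': '0', 'B': '10', 'C': '110', 'D': '1110',
            'E': '11110', 'F': '111110', 'G': '1111110', 'H': '1111111'}
    PROB = {'A': 1/2, 'B': 1/4, 'C': 1/8, 'D': 1/16,
            'E': 1/32, 'F': 1/64, 'G': 1/128, 'H': 1/128}

    def encode(throws):
        return ''.join(CODE[t] for t in throws)

    def decode(bits):
        # Greedy decoding works because no codeword is a prefix of another.
        inverse = {v: k for k, v in CODE.items()}
        out, current = [], ''
        for b in bits:
            current += b
            if current in inverse:
                out.append(inverse[current])
                current = ''
        return out

    assert decode(encode(['A', 'H', 'C', 'B'])) == ['A', 'H', 'C', 'B']
    print(sum(p * len(CODE[s]) for s, p in PROB.items()))   # 1.984375 bits per throw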
Measuring Information Content
In the preceding example we used a die with eight faces. Since eight is a power of two, the optimal code for a uniform probability distribution is easy to calculate: log 8 = 3 bits. For the variable length code, we wrote out the specific bit pattern to be transmitted for each face A-H, and were thus able to directly count the number of bits required. Information theory provides us with a formula for determining the number of bits required in an optimal code even when we don't know the code. Let's first consider uniform probability distributions where the number of possible outcomes is not a power of two. Suppose we had a conventional die with six faces. The number of bits required to transmit one throw of a fair six-sided die is: log 6 = 2.58. Once again, we can't really transmit a single throw in less than 3 bits, but a sequence of such throws can be transmitted using 2.58 bits on average. The optimal code in this case is complicated, but here's an approach that's fairly simple and yet does better than 3 bits/throw. Instead of treating throws individually, consider them three at a time. The number of possible three-throw sequences is 6 × 6 × 6 = 216. Using 8 bits we can encode a number between 0 and 255, so a three-throw sequence can be encoded in 8 bits with a little to spare; this is better than the 9 bits we'd need if we encoded each of the three throws separately. In probability terms, each possible value of the six-sided die occurs with equal probability P = 1/6. Information theory tells us that the minimum number of bits required to encode a throw is -log P = 2.58. If you look back at the eight-sided die example, you'll see that in the optimal code that was described, every message had a length exactly equal to -log P bits. Now let's look at how to apply the formula to biased (non-uniform) probability distributions. Let the variable x range over the values to be encoded, and let P(x) denote the probability of that value occurring. The expected number of bits required to encode one value is the weighted average of the number of bits required to encode each possible value, where the weight is the probability of that value:
expected bits = sum over all x of P(x) × (-log P(x)).
Now we can revisit the case of the biased coin. Here the variable ranges over two outcomes: heads and tails. If heads occur only 1/4 of the time and tails 3/4 of the time, then the number of bits required to transmit the outcome of one coin toss is:
(1/4)(-log 1/4) + (3/4)(-log 3/4) = (1/4)(2) + (3/4)(0.415) = 0.8113 bits.
A fair coin is said to produce more "information" because it takes an entire bit to transmit the result of the toss:
(1/2)(-log 1/2) + (1/2)(-log 1/2) = 1 bit.
The Intuition Behind the -P log P Formula
The key to gaining an intuitive understanding of the -P log P formula for calculating information content is to see the duality between the number of messages to be encoded and their probabilities. If we want to encode any of eight possible messages, we need 3 bits, because log 8 = 3. We are implicitly assuming that the messages are drawn from a uniform distribution. The alternate way to express this is: the probability of a particular message occurring is 1/8, and -log(1/8) = 3, so we need 3 bits to transmit any of these messages. Algebraically, log n = -log(1/n), so the two approaches are equivalent when the probability distribution is uniform. The advantage of using the probability approach is that when the distribution is non-uniform and we can't simply count the number of messages, the information content can still be expressed in terms of probabilities.
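All of the numbers used so far can be checked by computing the formula directly. A minimal sketch in Python (the function name is mine):

    from math import log2

    def expected_bits(probs):
        # -sum(P * log2 P): expected bits per symbol under an optimal code
        return -sum(p * log2(p) for p in probs if p > 0)

    print(expected_bits([1/2, 1/2]))          # fair coin: 1.0
    print(expected_bits([1/4, 3/4]))          # biased coin: 0.8113
    print(expected_bits([1/6] * 6))           # fair six-sided die: 2.585
    print(expected_bits([1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128, 1/128]))  # die A-H: 1.984
    print(expected_bits([1/1000, 999/1000]))  # heavily biased coin: 0.0114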
Sometimes we write about rare events as carrying a high number of bits of information. For example, in the case where a coin comes up heads only once in every 1,000 tosses, the signal that a heads has occurred is said to carry 10 bits of information. How is that possible, since the result of any particular coin toss takes 1 bit to describe? Transmitting notice of an event that happens only about once in a thousand trials takes about 10 bits. Using our message counting approach, if a value occurs only 1/1000 of the time in a uniform distribution, there will be 999 other possible values, all equally likely, so transmitting any one value would indeed take 10 bits, since -log(1/1000) = 9.97. But with a coin there are only two possible values. What information theory says we can do is consider each value separately. If a particular value occurs with probability P, we assume that it is drawn from a uniformly distributed set of values when calculating its information content; the size of this hypothetical set would be 1/P elements. Thus, the number of bits required to encode one value from this set is -log P. Since the actual distribution we're trying to encode is not uniform, we take the weighted average of the estimated information content of each value (heads or tails, in the case of a coin), weighted by the probability P of that value occurring. Information theory tells us that an optimal encoding can do no better than this. Thus, with the heavily biased coin we have the following:

P(heads) = 1/1000, so heads takes -log(1/1000) = 9.96578 bits to encode
P(tails) = 999/1000, so tails takes -log(999/1000) = 0.00144 bits to encode

Avg. bits required = (1/1000)(9.96578) + (999/1000)(0.00144) = 0.01141 bits per coin toss

Answer to Exercise

In an optimal code each codeword's length is -log P bits: two bits each for C, D, and E (probability 1/4) and three bits each for A and B (probability 1/8). One such code is:

A = 110, B = 111, C = 00, D = 01, E = 10

The average cost is (1/8)(3) + (1/8)(3) + (1/4)(2) + (1/4)(2) + (1/4)(2) = 2.25 bits per throw.
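The article does not say how to construct optimal codes in general, but Huffman's algorithm is the standard method for building an optimal prefix code from a probability distribution. Here is a sketch (my own illustration; the huffman function and its tie-breaking details are mine) that recovers the exercise answer; other assignments with the same codeword lengths are equally optimal:

    import heapq
    from itertools import count

    def huffman(probs):
        # Repeatedly merge the two least probable subtrees (Huffman's algorithm).
        # Each heap entry carries a partial symbol -> codeword map.
        tick = count()  # tie-breaker so the heap never compares the dicts
        heap = [(p, next(tick), {sym: ''}) for sym, p in probs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            p1, _, left = heapq.heappop(heap)
            p2, _, right = heapq.heappop(heap)
            # Prepend a bit that distinguishes the two merged subtrees.
            merged = {s: '0' + c for s, c in left.items()}
            merged.update({s: '1' + c for s, c in right.items()})
            heapq.heappush(heap, (p1 + p2, next(tick), merged))
        return heap[0][2]

    probs = {'A': 1/8, 'B': 1/8, 'C': 1/4, 'D': 1/4, 'E': 1/4}
    code = huffman(probs)
    print(code)  # e.g. {'C': '00', 'D': '01', 'E': '10', 'A': '110', 'B': '111'}
    avg = sum(probs[s] * len(code[s]) for s in probs)
    print(avg)   # 2.25 bits per throw

For this particular die the Huffman code meets the -P log P bound exactly, because every probability is a power of 1/2; for other distributions the bound can only be approached.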
At the beginning of each school year, I would administer a couple of quick individual tests to my kindergarten students to gain an idea of where the students were in their math and literacy skills. Which students entered the classroom on day one already able to count to 10? What about 20? Which students could already name all the uppercase letters of the alphabet? Could they also name the letter sounds?

While this information was helpful for learning the strengths and growth areas of my individual students, I wasn't too worried about my students who entered kindergarten a bit behind in their math or literacy skills. After all, I had a whole school year to work with them and ensure that they left my classroom fully prepared for first grade. The students who did worry me were the ones who started kindergarten lacking important social-emotional skills. Students lacking social-emotional skills experience challenges in following directions, managing their emotions, and getting along with other children and the adults who share their classroom. As a teacher, I knew that these skills were more difficult to instill in my students than the basic math and literacy skills that would be covered throughout the school year. But these and other skills for success are equally important. In fact, research suggests that social-emotional skills are significantly associated with children's academic success and productivity in the classroom. It's also now known that children from low-income families are less likely to enter school with these skills than their more affluent peers, and this gap only widens with time.

Now, a new study from researchers at Johns Hopkins University, in collaboration with the Baltimore Education Research Consortium, provides further evidence of the high costs of entering kindergarten without these important social-emotional skills. The study examined the relationship between kindergarteners' social-emotional readiness and key educational outcomes in more than 9,000 elementary school students enrolled in Baltimore City Public Schools.

In the study, the social-emotional skills of incoming kindergarten students were measured against the Personal and Social Development domain of the Maryland Model for School Readiness (MMSR). The MMSR is an assessment tool developed by the University of Michigan to determine children's readiness for school, and the Personal and Social Development domain includes indicators such as "Follows classroom rules and routines" and "Participates cooperatively in group activities." After observing student behavior, teachers rated students on each indicator, and the students' raw scores were translated into one of three categories: Developing, Approaching, or Fully Ready. For purposes of this study, students whose personal and social development scores placed them in the Developing or Approaching categories were labeled as "Not Ready" socially and behaviorally for kindergarten, while students falling in the Fully Ready category were labeled as "Ready."

The researchers found large differences between the students who scored in the Developing or Approaching ranges (Not Ready) and their Fully Ready peers. Students rated as Not Ready were more likely to be male and low-income, and less likely to have attended a formal pre-K program. But what's most interesting about the study is what the researchers found when they tracked these students through the fourth grade.
It turns out that, by the fourth grade, students who entered kindergarten behind in social-emotional skills (the "Not Ready" group) were:

- up to 80 percent more likely to have been retained;
- up to 80 percent more likely to require special education services; and
- up to seven times more likely to be suspended or expelled at least once.

Like all studies, this one comes with some limitations that must be taken into account. For example, all of the more than 9,000 students who were followed from kindergarten to fourth grade were enrolled in Baltimore City Public Schools. Since the students studied do not constitute a nationally representative sample, we should be careful in making national policy recommendations based on these data alone. Also, the Personal and Social Development domain of the MMSR that was used to rank the students as "Ready" or "Not Ready" is heavily dependent on teachers' subjective assessments of student behavior. Some research suggests that what teachers judge to be acceptable behavior can be influenced by their own cultural perspective, which could disadvantage minority and male students.

Taking at face value the finding that students who enter kindergarten behind in social-emotional development are more likely to experience negative school outcomes, what actions can be taken to ensure that more students enter kindergarten equipped with the skills needed for success? In a press release, Deborah Gross, one of the report's authors, stated that the results "show how critical social and behavioral skills are for learning, how early the struggle begins for young children, and how important it is to address the problem of social-behavioral readiness well before children enter kindergarten."

Certainly one component of addressing the problem is ensuring that more children have access to high-quality pre-K programs. Currently, only about 39 percent of four-year-olds and 12 percent of three-year-olds are enrolled in either state-funded pre-K or Head Start. A study of Boston's pre-K program found small but positive impacts on students' executive functioning and emotion recognition. Pre-K programs that use research-based curricula focused on building students' social-emotional skills are helping to set students up for success. A national evaluation of three distinct approaches to improving the social-emotional skills of Head Start students found consistent positive impacts on a range of children's social-emotional outcomes using two of the three approaches.

It's not only what happens inside the classroom that matters. One way pre-K programs can better support families in their children's social-emotional development is to supplement the curriculum with parenting programs that strengthen parents' knowledge of how to manage their children's behavior and help their children prepare for the expectations of a formal school environment.

Once students leave pre-K programs and enter the formal K-12 education system, their social-emotional skills should be developed just like their math and literacy skills. One way states can support this development is to set social-emotional learning standards with indicators at each grade level that note what students should know and be able to do. As our recent From Crawling to Walking report found, currently only six states meet this benchmark for students up through third grade.

As I realized early on in my teaching career, developing the social-emotional skills of students is difficult work.
And investing in increased access to high-quality pre-K and parenting programs doesn't come cheap. But what this study illustrates is that ignoring the social-emotional deficits of young children could ultimately cost more in the long run, in the form of grade retention and special education services.
As a follow-up to my post on wobbegong shark behavior: researchers in Ireland are studying their resident populations of basking sharks and, as with the wobbegong studies, are able to draw conclusions about changing environmental conditions based on changes in animal behavior.

At the 14th European Elasmobranch Association conference held in Galway, Ireland, marine biologists and shark experts from across Europe gathered to discuss the state of shark populations and consider new research techniques to better understand the fate and future of sharks and rays worldwide. Irish researchers who have been working with Ireland's National Parks and Wildlife Service presented their study, which indicated that a high percentage of the world's remaining basking sharks move through Ireland's local waters.

Basking sharks are typically a cold-water species and the second-largest fish on the planet, topped only by the whale shark. Like the whale shark, the basking shark is a filter feeder, opening its cavernous mouth to strain hundreds of gallons of water in search of zooplankton - a collection of tiny creatures including larval or minute juvenile forms of fish, mollusks, and crustaceans. Zooplankton are sensitive to changes in the aquatic environment, such as temperature changes due to global warming or changes in oxygen levels or pH, which can occur in response to acidification. Where the zooplankton go, so go the basking sharks.

“Tracking basking sharks may be far more effective than tracking zooplankton, and [may] provide one of the best indicators of the health of our seas and thus the planet,” said Dr. Simon Berrow, the study's group leader.

Extensive tracking of basking sharks has taken place off Ireland, replacing the intense hunting that once occurred when the basking shark was prized for its sizable supply of shark oil. Worldwide estimates of basking shark populations have been placed as low as 20,000. Extrapolating population estimates from the 250 sharks that have been tagged, the study claimed that several thousand probably frequent the cold waters off Ireland's coast, making the island nation a prime location and home for an increasingly rare shark species.

Read about the research at Irishtimes.com.
Among individuals sensitive to mold, exposure typically results in symptoms such as nasal stuffiness, eye irritation, wheezing or skin irritation, according to the Centers for Disease Control and Prevention. Some people with serious mold allergies or chronic lung diseases experience more severe symptoms, including fever, shortness of breath and lung infections.

A study by the Institute of Medicine links indoor mold exposure to upper respiratory tract symptoms, wheezing and cough among otherwise healthy individuals, reports the CDC. Additionally, indoor mold exposure is linked to asthma symptoms among those with asthma. Limited or suggestive evidence links upper respiratory tract symptoms to indoor mold exposure among children. Additional studies on the effects of mold show a correlation between indoor mold exposure and early asthma development among children who are genetically predisposed to the condition.

To reduce the risk of mold exposure, clean bathrooms with mold-killing cleaners, never carpet bathrooms or basements, keep humidity levels at 50 percent or lower, and use an air conditioner during humid weather, recommends the CDC. Areas typically high in mold exposure risk include saunas, antique shops, farms, flower shops and greenhouses.

A person who discovers mold growing in the home most often does not need to identify the type of mold. As individual reactions and susceptibility to mold vary, sampling is not indicative of the degree of health risk.
Kids are explorers. The entire world is unknown territory to them, from the strange insects crawling around outdoors to the task of tying their own shoelaces. As the first and most important adults in their lives, you have a remarkable opportunity to guide - and to teach - your children about their world. Research by the U.S. Department of Education has shown that this guidance is especially important in the first three years of your child's life. During these years, children are especially sensitive to their environment; positive and negative experiences influence them much more than they do later in life. What happens in these years has more effect on how successful children are later on than anything else.

This doesn't mean that you should break out the flashcards for your two-year-old. Instead, you should consider how to create a nurturing environment for your young children, and how to continue to support your children as they grow. Here are some thoughts on creating a nurturing environment for your children:

- Young children are fascinated by the basic rules of the world. These seem obvious to us now, but to a young child, nothing can be taken for granted: for instance, that things stay where they are even when a child's eyes are closed. Simple games like peekaboo help teach these rules.

- Try to give your children new experiences regularly. Keep their world lively with new things they can investigate, but also be careful to avoid overwhelming them. This guideline holds for children of any age: for a young child, being able to play with your keys might be a new and interesting experience. For an older child, it might be a trip to a museum.

- Your support is just as important when your children enter school. The only difference is that you now have some partners to work with to help educate your children.

Acting as your child's first teacher is probably something you've done as a parent without realizing it. But it's important to be conscious of how important your work is, and that your efforts should continue as your child grows. For more information on becoming your child's first teacher, try one of these links:

- The U.S. Department of Education pamphlet "Including Your Child" is a great comprehensive resource on supporting your child through her early years.

- FamilyTLC has a good article that explains early child psychology from a down-to-earth, straightforward point of view.

- Rigby offers a collection of activities for primary and intermediate grade children.