The Comanche Empire
In the eighteenth and early nineteenth centuries, at the high tide of imperial struggles in North America, an indigenous empire rose to dominate the fiercely contested lands of the American Southwest, the southern Great Plains, and northern Mexico. This powerful empire, built by the Comanche Indians, eclipsed its various European rivals in military prowess, political prestige, economic power, commercial reach, and cultural influence. Yet, until now, the Comanche empire has gone unrecognized in historical accounts. This compelling and original book uncovers the lost story of the Comanches. It is a story that challenges the idea of indigenous peoples as victims of European expansion and offers a new model for the history of colonial expansion, colonial frontiers, and Indian-Euramerican relations in North America and elsewhere. Pekka Hämäläinen shows in vivid detail how the Comanches built their unique empire and resisted European colonization, and why they fell to defeat in 1875. With extensive knowledge and deep insight, the author brings into clear relief the Comanches' remarkable impact on the trajectory of history.
Salt Water Painting
An Educator's Reference Desk Lesson Plan
Submitted by: Jean VanLoy
Endorsed by: Don Descy, Mankato State University
Date: February 28, 1997
Grade Level(s): 3

Overview: Salt Water Painting is a lesson to go along with a Science unit on Weather. The students will paint with colored salt water. In doing this, they will see that the water evaporates and the salt does not.

Objectives: The student will observe and understand the process of evaporation, recognize liquids that evaporate, and describe the process of evaporation.

Background: The lesson accompanies the water-cycle portion of a Science unit on the elements of weather: the sun warms the water in the oceans and lakes, and the water evaporates into the air; the water vapor in the air condenses into clouds; the clouds become heavy with water, and the rain falls back into the oceans and lakes.

Concepts: The students will be able to:
- Relate the water cycle to the evaporation of the water on their painting.
- Give examples of which substances evaporate and which do not.

Procedure:
1. Measure 1/4 cup of salt into a container.
2. Add 1/4 cup of warm water to the salt.
3. Add several drops of food coloring to the mixture, giving each group a different color.
4. Have the students paint a picture with the mixture using paint brushes; they may paint whatever they like.
5. Lay the paintings flat to dry overnight. The water will evaporate from the painting and the colored salt will stay on the paper.
6. Have the students examine their paintings the next day to see what happened.
7. Have the students write a summary of what happened with their painting.
Life on Earth may have started with the help of tiny hollow spheres that formed in the cold depths of space, a new study suggests. The analysis of carbon bubbles found in a meteorite shows they are not Earth contaminants and must have formed in temperatures near absolute zero. The bubbles, called globules, were discovered in 2002 in pieces of a meteorite that had landed on the frozen surface of Tagish Lake in British Columbia, Canada, in 2000 (see Hydrocarbon bubbles discovered in meteorite). Although the meteorite is a fragile type called a carbonaceous chondrite, many pieces of it have been remarkably well preserved because they were collected as early as a week after landing on Earth, so did not have much time to weather. Researchers were excited to find the globules because they could have provided the raw organic chemicals needed for life as well as protective pockets to foster early organisms. But despite the relatively pristine nature of the meteorite fragments, there was no proof that the globules were originally present in the meteorite, rather than the result of Earthly contamination. Now, analysis of atomic isotopes shows that the globules could not have come from Earth and must have formed in very cold conditions, possibly before the Sun was born. The research was led by Keiko Nakamura-Messenger of NASA's Johnson Space Center in Houston, Texas, US.
Cold gas cloud
The globules are enriched in heavy forms of hydrogen and nitrogen, called deuterium and nitrogen-15, respectively, ruling out their formation on Earth. The relative amounts of these isotopes are characteristic of formation in a very cold environment: between 10 and 20 kelvin above absolute zero. This means that the globules may predate our Sun, since temperatures like these would have prevailed in the cold cloud of gas from which our Sun formed and ignited. Alternatively, the globules might have formed after the Sun but while the planets were still developing. The right temperatures would also have existed in the outer reaches of the developing solar system, where comets are thought to have formed. Intriguingly, comets are known to contain particles of organic material of roughly the same size, although the shape of these particles is not known. Either way, the globules are extremely old, says team member Scott Messenger, also of the Johnson Space Center. "We're looking at the original structures of organic objects that formed long before the Earth formed," he told New Scientist. Nakamura-Messenger's team says the globules could have been important for the origin of life by providing the raw materials and membrane-like structures needed. Some scientists think that the presence of some sort of container that could separate an organism's internal chemistry from its environment was a crucial stage in the evolution of life. "It's sort of reminiscent of membrane type structures," agrees Larry Nittler, at the Carnegie Institution of Washington in Washington DC, US. But as for whether the structures could have kick-started life on Earth, "I think that's highly speculative at this point," he says. Journal reference: Science (vol 314, p 1439)
Map Making/Floor Plans/Map Reading
Students apply their knowledge of scale when mapping the classroom. They determine the use of a map legend and orient a map using a compass. They create the classroom maps using transfer graph paper. (4th - 6th, Social Studies & History)

Related resources:
- Designing a Hiking Trail: Put your students' map skills to the test with this engaging cross-curricular project. Given the task of developing new hiking trails for their local community, young cartographers must map out beginner and intermediate paths that meet a... (6th - 12th, Science, CCSS: Adaptable)
- The 3-D Map Project: After choosing a continent or state, young geographers will draw outline maps and create a three-dimensional map of their chosen areas using flour and salt dough. This resource includes project guidelines, construction instructions, and... (4th - 6th, Social Studies & History, CCSS: Adaptable)
- Mapping the Americas: Celebrate the geography of the Americas and develop map skills through a series of activities focused on the Western Hemisphere. Learners study everything from earthquakes and volcanoes of the Americas and the relationship between... (3rd - 6th, Social Studies & History, CCSS: Adaptable)
- How Women Earned a Living: Develop reading fluency while learning about what women did for work in the early 1900s. Class members learn a bit about the time period, study vocabulary, and read the text, "Piecework" by Marie Ganz, several times, eventually... (4th - 6th, English Language Arts, CCSS: Adaptable)
- Jim Murphy, The Great Fire - Grade 6: The Great Fire by Jim Murphy provides the text for a study of the Chicago fire of 1871. The plan is designed as a close reading activity so that all learners have the same background information required for writing. Richly detailed, the... (6th, English Language Arts, CCSS: Designed)
- Reading Strategies for the Social Studies Class: Word splashes, read-draw-talk-write activities, exhibits. Middle schoolers use the Storypath approach to a unit study of America's concerns during the Cold War and the Cuban Missile Crisis. Teams organize a 21st century world's fair,... (6th - 8th, English Language Arts, CCSS: Adaptable)
- Reading the World: Latitude and Longitude: Find five activities all about longitude and latitude! Use oranges to show the equator and prime meridian, plot birthdays on a map using coordinates, and plan a dream vacation. Topics: latitude and longitude, maps, coordinates on a map, prime meridian, reading maps, equator, globes. (4th - 8th, Social Studies & History, CCSS: Designed)
If you were camping this past weekend (December 12, 2008) and the sky was clear, it might have seemed that the night was especially well lighted. It wasn't your imagination. Scientists say that the full moon appeared 14% larger and 30% brighter than normal this past weekend. Why was that? The moon's orbit around Earth is not circular; it is elliptical. This means that the moon's distance from Earth varies depending on where it is in the orbit. The closest point of the orbit is the perigee. The point where the moon is farthest from Earth is called the apogee. The moon's average distance from Earth is approximately 238,000 miles. On December 12 (the day of the full moon) the moon was also at its perigee, about 222,000 miles from Earth. That's why the moon seemed larger and brighter. "It's only every few years that a full moon happens to coincide with the part of the Moon's orbit when it's closest to the Earth," said Marek Kukula, an astronomer at the UK's Royal Observatory. In fact, the last time this occurred was about 15 years ago. The gravitational attraction of the moon also affects our oceans. Tides are affected by the pull of the sun and the moon, and tides are also higher when the moon is at its perigee; they are called "perigean tides." Some other conditions cause the moon to seem larger or brighter. During moonrise or moonset, when the moon is close to the horizon, it appears larger than when it is higher in the sky. "The moon appears largest as it rises and sets, but this is a psychological illusion," Dr Kukula said. "When it's close to the horizon, our brain interprets it as being bigger than it actually is; this is called the moon illusion." In the northern hemisphere the moon sometimes seems brighter in the winter, and that is because it is higher in the sky. Some other interesting moon facts:
- The moon is moving away from the Earth at about 1.6 inches per year.
- When we see a full moon it is not completely full. "For that to happen, all three objects (sun, Earth and moon) have to be in a perfect line, and when that rare circumstance occurs, there is a total eclipse of the moon." (Year's Biggest Full Moon Friday Night, Robert Roy Britt, editorial director, space.com, Thu Dec 11, 1:45 pm ET)
- A full moon has not been proven to cause werewolves to go on the prowl.
Hopefully you have learned something about our moon and got an opportunity to enjoy a well-lighted campout.
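The 14% and 30% figures are easy to sanity-check: they compare a full moon at perigee with one at apogee. Apparent diameter scales inversely with distance, and brightness scales roughly with the square of the apparent diameter. A minimal sketch of the arithmetic in Python (the ~252,000-mile apogee distance is an approximate figure I am assuming here; the article gives only the perigee and average distances):

```python
perigee_miles = 222_000   # distance at perigee, from the article
apogee_miles = 252_000    # approximate distance at apogee (assumed, not in the article)

# Apparent diameter is inversely proportional to distance.
size_ratio = apogee_miles / perigee_miles      # ~1.14 -> about 14% larger
# Brightness scales with apparent area, the square of the diameter ratio.
brightness_ratio = size_ratio ** 2             # ~1.29 -> roughly 30% brighter

print(f"{size_ratio - 1:.0%} larger, {brightness_ratio - 1:.0%} brighter")
```

So the "30% brighter" figure follows directly from the "14% larger" one: squaring a 14% size increase gives roughly a 29-30% increase in light.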
Guest Author - Heidi Shelton Jenck
The eight parts of speech in grammar are nouns, pronouns, verbs, adjectives, adverbs, conjunctions, prepositions, and interjections. Words in sentences represent these different parts of speech. Patterns of English word usage have developed over time, creating grammar and syntax rules that are expected in written and spoken language. The eight parts of speech are put together in certain ways to form sentences. At a minimum, a complete sentence must have a noun and a verb (or a subject and a predicate). Parts of speech are added to sentences to create complexity and description. The eight parts of speech are listed below with examples:
Common Nouns - This type of noun refers to a person, place, thing, or idea. (table, state, apartment, anger, sky, woman) The bird flew in the sky.
Proper Nouns - Proper nouns refer to specific people, things, or places. They are always capitalized. (August, Heidi, White House, Monday, Paris) Heidi visited the White House.
Pronouns - Pronouns substitute for nouns in a sentence. (her, he, him, his, she, our, their, who, which, himself) She was driving her car.
Adjectives - An adjective is a word that describes, or modifies, nouns and pronouns. It can describe such things as quality, amount, color, or size. (large, bright, young, Greek, this, seventh, her, many) A large, brick building sat on the abandoned lot.
Verbs - Verbs are action words, or they indicate a state of being. Verbs describe what the noun or pronoun is doing, feeling, or thinking. (walked, choose, makes, spend, love, go, wishes, thinking) The girl thought about her two choices.
Adverbs - Adverbs describe, or modify, a verb, adjective, phrase, or other adverb in a sentence. They indicate how, where, when, or how much. (quickly, thus, faster, forever, seldom, too, moderately) The dog barked loudly when the delivery truck arrived.
Conjunctions - Conjunctions join words or groups of words such as phrases or clauses. (and, either, because, or, both, yet, since, still) Anna couldn't walk very well because her toe was broken.
Prepositions - Prepositions show how a noun or a pronoun is related to another word in the sentence. (behind, into, until, across, during, from, after, within) We parked behind the mall.
Interjections - An interjection is a word not related grammatically to a sentence. It is used with an exclamation point, and shows strong emotion or surprise. (Wow! Oh! Aha! Cheers!) Wow! That was a great movie!
Jake Learns All 8 Parts of Speech, a teacher resource book by Drema McNeal, is an interactive grammar book. It introduces students to the eight parts of speech in a fun, story format. Look for it in your library, or visit Amazon.com for more information. Parts of Speech Bingo Game is a fun way for students to review; it is also available at Amazon.com.
Studying a zebrafish might be the key to increasing students' science knowledge and attitudes toward science education; at least, that's what a five-year evaluation of 20,000 K-12 students indicates. Students taking part in the Project BioEYES program were tested before and after the one-week program and demonstrated significant positive gains in learning in the post-test. Of eight knowledge questions, elementary students demonstrated significant positive gains on seven. Middle school students demonstrated significant positive gains on eight of nine knowledge questions. The program uses live zebrafish to teach students about basic scientific principles, animal development, and genetics. The zebrafish embryo is clear, making it ideal for observations. As of spring 2016, 100,000 students and 1,400 teachers in six states and two countries have participated in the week-long program. During the week-long BioEYES experiment, students take on the role of scientists in a student-centered approach, a key strategy that has been shown to increase learning, researchers noted.
INTRODUCTION TO GRAM STAINING
Gram staining is the most important and widely used differential staining technique in bacteriology. It was developed in 1884 by the Danish bacteriologist Hans Christian Gram. The technique divides the cells of a mixed culture into two groups: those that retain the color of the primary stain are known as Gram-positive bacteria, and those that become decolorized and take up the counterstain are known as Gram-negative bacteria.
PRINCIPLE / MECHANISM OF GRAM STAINING
⇒ The exact mechanism of Gram staining is still not fully understood; however, several theories have been proposed to explain the Gram reaction, as follows -
- The Acid Protoplasmic Theory - This theory states that because the protoplasm of Gram-positive bacteria is more acidic than that of Gram-negative bacteria, it has a greater affinity for basic stains and resists decolorization when the decolorizer is applied, whereas Gram-negative bacteria have less affinity for basic dyes and are easily decolorized.
- The Lipid Theory - The lipid content of the cell wall is higher in Gram-negative bacteria than in Gram-positive bacteria. During the Gram reaction, a dye-iodine complex forms in both Gram-positive and Gram-negative cells; when the decolorizer is added, the lipid-rich cell wall dissolves, pore size increases, and the dye-iodine complex diffuses out during decolorization.
- The Cell Wall Theory - This theory is the most accurate of all and is widely accepted. It states that:
- The peptidoglycan layer of Gram-positive bacteria is thick, while that of Gram-negative bacteria is thin. In Gram-negative bacteria, an outer lipopolysaccharide-containing layer lies over the thin peptidoglycan layer; Gram-positive bacteria lack this outer layer.
- When the primary stain is applied to the bacteria and fixed (via the mordant), it becomes trapped in the thick peptidoglycan layer of Gram-positive bacteria; in Gram-negative bacteria it lodges in the lipopolysaccharide layer and only minutely in the peptidoglycan.
- The lipopolysaccharide layer is soluble in organic solvents, so when the decolorizer is applied it dissolves and is washed out of the cell. The Gram-negative bacteria therefore lose the primary stain and become colorless again, whereas the Gram-positive bacteria remain violet/blue, because the stain bound in their thick peptidoglycan layer is not washed away.
- When the counterstain is then applied, the colorless Gram-negative cells take it up and appear as pink bodies under the microscope.
REQUIREMENTS FOR GRAM STAINING
- Glass slides
- Specimen/bacterial culture
- Tissue paper
- Inoculating loop
- Spirit lamp/Bunsen burner
- Staining tray
- Wash bottle
- Crystal violet (primary stain)
- Gram's iodine (mordant)
- Decolorizer (95% ethanol)
- Safranin (counterstain)
PROCEDURE OF GRAM STAINING
- Take a clean, grease-free glass slide.
- Prepare a smear on it from the clinical specimen or culture.
- Fix the smear to the slide by passing it over the flame of a spirit lamp or Bunsen burner.
- Gently flood the smear with crystal violet and let it stand for 1 minute.
- Wash the smear with distilled water using the wash bottle.
- Cover the smear with Gram's iodine for 1 minute.
- Wash the smear with distilled water using the wash bottle.
- The smear will appear as a blue-black circle on the slide. Allow it to air dry.
- Apply the decolorizer (95% ethanol) for 5-10 seconds, or until the alcohol runs almost clear. Be careful not to over-decolorize.
- Immediately wash with distilled water using the wash bottle and allow the slide to air dry.
- Cover the smear with safranin for 45 seconds to 1 minute.
- Wash with distilled water.
- Air dry the smear and observe under the microscope with the 100X objective lens.
INTERPRETATION OF GRAM STAINING
The bacteria that retain the primary stain (crystal violet) and appear violet/blue are Gram-positive bacteria. The bacteria that take up the counterstain (safranin) and appear pink are Gram-negative bacteria.
EXAMPLES OF GRAM POSITIVE & GRAM NEGATIVE BACTERIA
Gram-positive bacteria: Actinomyces, Bacillus spp., Clostridium spp., Corynebacterium diphtheriae, Gardnerella, Lactobacillus, Mycoplasma, Nocardia, Staphylococcus aureus, Staphylococcus epidermidis and other Staphylococcus spp., Streptococcus pneumoniae, Streptococcus pyogenes, Streptomyces, etc.
Gram-negative bacteria: Escherichia coli (E. coli), Salmonella typhi, Salmonella paratyphi and other Salmonella species, Shigella dysenteriae, Klebsiella pneumoniae, Pseudomonas, Moraxella, Helicobacter pylori, Stenotrophomonas, Legionella, etc.
This book provides a clear and informative guide to the twists and turns of German history from the early Middle Ages to the present day. The multi-faceted, problematic history of the German lands has provided a wide range of debates and differences of interpretation. Mary Fulbrook provides a crisp synthesis of a vast array of historical material, and explores the interrelationships between social, political and cultural factors in the light of scholarly controversies. First published in 1990, A Concise History of Germany appeared in an updated edition in 1992, and in a second edition in 2004. It is the only single-volume history of Germany in English which offers broad, general coverage. It has become standard reading for all students of German, European studies and history, and is a useful guide to general readers, members of the business community and travellers to Germany.
An octopus may not seem too menacing to people. But to a hermit crab, the writhing tentacles signal a lethal threat from a hungry predator and prompt a hasty retreat into its borrowed shell. The bold crab that instead lingers outside its home is just asking for a deadly hug. Now, marine scientists are wondering whether a dramatic, global shift in seawater chemistry could make some deep-sea hermit crabs bolder—or rather, more foolhardy. Enter the toy octopus: A team of researchers in California is exploring how the changing ocean chemistry affects a hermit crab's fight-or-flight response by simulating octopus attacks in the laboratory. A video of one of these ambushes was a big hit with scientists attending the Third International Symposium on the Ocean in a High-CO2 World in Monterey, California, earlier this week, sparking laughs and a bit of buzz. Although seemingly lighthearted, the toy octopus experiment is part of a deadly-serious effort to understand the worrying ecological implications of a process known as ocean acidification. Over the last few centuries, the ocean has absorbed huge amounts of the carbon dioxide spewed into the atmosphere by human activities, such as burning fossil fuels. The uptake is helping to slow climate change, but also fueling chemical reactions that are shifting the pH of seawater toward the acid end of the scale. On average, researchers estimate that surface waters, where key players in the ocean food chain live, have seen a 0.1 decrease in pH since the beginning of the Industrial Revolution; that's an extraordinarily rapid 30% increase in acidity. Many researchers worry that acidification will make life harder for some shell-building marine organisms such as clams, crabs, and shrimp; more-acidic water could corrode the creatures' shells, or make it harder to build them in the first place. But the impact of the changing water chemistry could go even deeper. Researchers have also been surprised to discover that exposure to these waters can change the behavior of some marine organisms, such as by disrupting brain development in fish. They are particularly worried about acidification's impact in the deep sea, which could be hit hard by changes in pH. To see how acidification might affect one deepwater creature, marine biologist Taewon Kim and colleagues at the Monterey Bay Aquarium Research Institute in Moss Landing, California, used a robot submarine to vacuum up some deep-sea hermit crabs (Pagurus tanneri) that live off the coast of California at depths of 900 meters. Once the crabs were back in their laboratory, the researchers divided them into two groups: Some lived in tanks filled with seawater with a pH of 7.6, typical of the crab's deep-sea home; the others lived in seawater with a more acidic pH of 7.1, representing what the deep sea could be like in the future. Kim's team then measured a variety of differences between the two groups: how much oxygen they consumed, how quickly they detected prey, and how often they "sniffed" surrounding waters by flicking their antennae. In general, they found that the crabs in the more acidic water tended to flick their antennae less often, and were slower to sniff out food. But there was a lot of variation among individuals, he told an audience in Monterey, suggesting some crabs have the ability to cope with rising acidity levels. 
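Before turning to the behavioral results, a note on the pH arithmetic above, since the logarithmic scale makes the numbers less intuitive than they look: pH is the negative base-10 logarithm of hydrogen-ion concentration, so a drop of 0.1 corresponds to roughly a 26% rise in acidity (commonly rounded up toward 30%), and the experiment's 7.6-to-7.1 shift is about a threefold increase. A minimal sketch of the calculation in Python (the 8.2/8.1 surface values are my own illustrative assumption; the article quotes only the 0.1 drop):

```python
def acidity_increase(ph_before: float, ph_after: float) -> float:
    """Fractional increase in hydrogen-ion concentration for a pH drop.

    pH = -log10([H+]), so [H+] scales as 10**(-pH).
    """
    return 10 ** (ph_before - ph_after) - 1

# A 0.1 pH drop since the Industrial Revolution (illustrative values):
print(f"{acidity_increase(8.2, 8.1):.0%}")   # ~26%, often rounded to ~30%
# The experiment's shift from pH 7.6 to 7.1:
print(f"{acidity_increase(7.6, 7.1):.0%}")   # ~216%, i.e. about 3.2x as acidic
```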
The researchers also looked at whether acidification altered the "boldness" of the crabs, or how long it took them to withdraw into their shells when attacked by a toy octopus held by a scientist. But there was no statistically significant difference between the two groups, Kim told an audience in Monterey—more acidity didn't make the crabs become foolishly brave. That negative result, however, didn't stop researchers from laughing in delight at the sight of a scientist splashing away with the toy octopus—or sending out numerous tweets and e-mails about Kim's droll and often hilarious talk. It ended with Kim suggesting that people might have to start a housing aid group for hermit crabs if a more acidic ocean begins dissolving the abandoned shells they use as their homes. In a nod to Habitat for Humanity, Kim said such an aid group could be called "Habitat for Hermanity."
Standards: ETS1 Engineering Design; ETS1-2: Design a solution to a complex real-world problem by breaking it down into smaller, more manageable problems that can be solved through engineering.
Many cities do not have appropriate public transit, which leads to more cars on the road. Identify the issue of transportation in a city and explain how your city will address public transit; remember to think about how people have moved out of the city center into the suburbs.
Congested urban areas lead to more pollution. Identify how pollution occurs in an urban area and explain how your city will prevent or decrease the amount of pollution. Remember to be specific and address multiple types of pollution.
Highly populated areas use land quickly for roads, housing developments, and businesses, leaving very little natural land. Explain how this negatively impacts ecosystems and ecosystem services. Please identify the ecosystem services and their benefits you will address when designing your city.
Because cities and sprawl contain so many people, food must be provided. Explain how both agriculture and meat production (how meat is raised) can negatively affect the environment. Identify how food can be raised in a more beneficial way but still provide for all people.
Create a sustainable city plan addressing the issues above and explain how your city will fix or prevent them from occurring. You must clearly define each of the issues; each of the four issues above has multiple avenues, so be sure to address several aspects of each. Once you have clearly identified the issue, explain how your city will prevent, fix, or decrease the problem. You will need to use reliable resources and are required to use the library databases as specified in Blackboard.
Picture: You will need to create a map of your city showing aspects like transportation, natural space, sprawl, city center, etc. You may also want to include other pictures to show your alternatives to current city issues. You may draw or use the computer to create these images.
Works Cited: Provide a detailed list of resources in APA format. You must use articles from the database or from the website provided. If you use other articles not provided from the database or website, you must get them approved by the teacher.
“Hellish” is a word commonly used to describe the atmosphere on Venus, but a new study suggests the second planet from the sun may have been habitable once. Writing in the journal Geophysical Research Letters, climate scientists from NASA’s Goddard Institute for Space Studies (GISS) say the planet may have had a shallow ocean and “habitable surface temperatures” for around 2 billion years of its history. “Many of the same tools we use to model climate change on Earth can be adapted to study climates on other planets, both past and present,” said Michael Way, a researcher at GISS and the paper’s lead author. “These results show ancient Venus may have been a very different place than it is today.” Today, Venus is a suffocating place with an atmosphere 90 times denser than Earth’s, almost no water vapor, and temperatures that can reach as high as 462 degrees Celsius. Venus likely formed from the same materials as Earth, but went in a very different direction at some point in its history. The notion of a liquid ocean once flowing on the planet’s surface is not new. In the 1980s, the Pioneer space probe discovered hints that water once existed, but that water was burned off by the amount of sunlight Venus gets. Venus spins very slowly, and one day there is the same as 117 days here on Earth. Even the water vapor was likely broken apart by ultraviolet radiation, which caused the hydrogen to escape the atmosphere, say scientists. “With no water left on the surface, carbon dioxide built up in the atmosphere, leading to a so-called runaway greenhouse effect that created present conditions,” NASA wrote in a press release. Venus’ slow rotation compared to Earth’s was thought to be a result of its thick atmosphere, but researchers say that even a planet with a relatively thin atmosphere like Earth’s could also spin slowly. That means an ancient, habitable Venus could have also spun slowly. Topography also plays a crucial role in Venus’ atmosphere. The researchers think that even with an ocean, ancient Venus had more dry land than Earth, particularly in the tropical regions. This, researchers say, would limit evaporation and slow down the greenhouse effect caused by water vapor. “This type of surface appears ideal for making a planet habitable; there seems to have been enough water to support abundant life, with sufficient land to reduce the planet’s sensitivity to changes from incoming sunlight,” NASA said. For the climate modeling used in this study, researchers took all of this into consideration when making a model for a hypothetical early Venus. They also took into account data from the Magellan spacecraft mission in the 1990s, as well as factoring in that the ancient sun was up to 30 percent dimmer than it is today. “In the GISS model’s simulation, Venus’ slow spin exposes its dayside to the sun for almost two months at a time,” co-author and fellow GISS scientist Anthony Del Genio said. “This warms the surface and produces rain that creates a thick layer of clouds, which acts like an umbrella to shield the surface from much of the solar heating. The result is mean climate temperatures that are actually a few degrees cooler than Earth’s today.”
To distinguish the different purposes of text, this web site uses the following conventions:
- fixed font, blue, in a box - used to indicate where you need to replace the text with your own specific values: e.g. share name, etc.
- fixed font, black, in a box - used to indicate where you need to type a command.
- proportional font, black - used in descriptive text to indicate an operating system command, keyword or filename.
- italic font - text taken verbatim from elsewhere - usually book synopses.

A common way to denote resistor values is shown below. This notation avoids using a decimal point, which may be mis-read on photocopied drawings or small components. It also avoids multiple zeroes that can be mis-counted and take up space. The general format is xQy, where x is the integer part of the number, y is the fractional part, and Q is a multiplier that shows the location of the decimal point. Multipliers are: R = 1, K = 1000, M = 1000000. Some examples:
- 100R = 100.0 x 1 = 100Ω
- 4K7 = 4.7 x 1000 = 4700Ω
- 1R0 = 1.0 x 1 = 1Ω
- R1 = 0.1 x 1 = 0.1Ω
- R01 = 0.01 x 1 = 0.01Ω
- 1M = 1 x 1000000 = 1000000Ω

A similar notation is often used for low voltages. Again it uses the format xVy, where x and y are the integer and fractional parts and V shows the location of the decimal point. In Europe, the decimal comma is used instead of a decimal point, and this notation avoids confusion by omitting the point or comma completely. There is no multiplier. So 3V3 means 3.3V, 1V8 means 1.8V.
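Because the format is completely mechanical, it is straightforward to parse in software. Here is a minimal illustrative Python sketch (my own, not from the original page; the function name and error handling are assumptions), which converts such codes to numeric values and treats the voltage letter V the same way:

```python
def parse_rkm(code: str) -> float:
    """Parse codes such as '4K7' -> 4700.0 or '3V3' -> 3.3.

    The letter both marks the decimal point and gives the multiplier:
    R = 1, K = 1000, M = 1000000; V = 1 for voltages.
    """
    multipliers = {"R": 1, "K": 1e3, "M": 1e6, "V": 1}
    normalized = code.upper()
    for letter, mult in multipliers.items():
        if letter in normalized:
            integer_part, _, fraction_part = normalized.partition(letter)
            value = float((integer_part or "0") + "." + (fraction_part or "0"))
            return value * mult
    raise ValueError(f"no multiplier letter found in {code!r}")

assert parse_rkm("100R") == 100.0
assert parse_rkm("4K7") == 4700.0
assert parse_rkm("R1") == 0.1
assert parse_rkm("1M") == 1_000_000.0
assert parse_rkm("3V3") == 3.3
```

Lowercase letters and further multipliers exist in real-world variants of this notation but are omitted from this sketch for brevity.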
What is this technique about
The Motivational Video method is an icebreaker that uses publicly available videos to motivate the students to complete more complex tasks throughout the day, simply by completing small but meaningful tasks that are often taken for granted. One example could be to motivate the students to make their bed.
Where does it come from
Motivational videos are known as a powerful means to inspire viewers. Organizations often use them to develop a sense of loyalty amongst their employees and commitment to excellence (Cinimage, n.d., online). In teaching, showing students a motivational video allows them to free their minds and get inspired.
For which purposes it is used (why in your engineering teaching)
When facilitating a workshop where students are deeply concentrated on solving a problem or developing a future scenario, it is ideal to have an icebreaker or "mind freer" midway or after long periods of focusing on one task. The purpose of the icebreaker is to get the students to stop thinking about the workshop for a moment and do a completely different activity so that the brain can process all the information obtained so far. Here, drawing the students' attention by showing them a motivational video can positively affect their self-confidence and boost their motivation to solve the problem or scenario even better. Icebreakers are often underrated sessions but can positively affect the outcome of the workshop.
How to use it
Go to YouTube.com and click in the search field. Here, type the following: "motivational speech about making your bed". The first video to show is a motivational speech from Admiral McRaven, which is ideal as an icebreaker. Copy the link to the video and insert it into the sheet where the workshop is hosted, or keep it as an open window in the browser; what matters most is that it is easy for the teacher to access. When reaching the icebreaker, share the screen so every student can see the video and play it for them. A link to the video is also provided under "Resources".
How to implement this technique online
Preparation, i.e., what to do before the session
- Find a motivational video clip on e.g., YouTube.com (see example above).
- Keep it in the browser or save the link where the workshop is hosted.
During application, i.e., while giving the session
- Explain the purpose of showing the video.
- Share the screen so every participant can see the video.
- Make sure to turn up the sound and volume on the computer.
- Tell the students to turn up the volume on their computers.
- Ask if the students can hear and see the screen.
- Press play.
Follow-up, i.e., what to do after the session
- Give the students a break of approx. 15 minutes before proceeding with the workshop.
You will need a platform to share screens and communicate with the participants, such as MS Teams, Zoom or similar, as well as access to a shared document (Google Doc, Word, etc.). Furthermore, you will need access to a public platform with the selected video (e.g. YouTube.com, vimeo.com/watch).
Cinimage (n.d.). Benefits of motivational videos. Online at https://www.slideshare.net/cinimage/benefits-of-motivational-videos
Nate Wylie Studios (2020, November 18). Speech To Change Your Life Today! Admiral McRaven "Make Your Bed" Motivational Words Of Wisdom. [Video]. YouTube. https://www.youtube.com/watch?v=sBAqF00gBGk
Chlup, D. T., & Collins, T. E. (2010). Breaking the ice: using ice-breakers and re-energizers with adult learners. Adult Learning, 21(3-4), 34-39.
The running time of an algorithm refers to how long the algorithm takes to execute, expressed as a function of its input. An algorithm's running time for a given input depends on the number of operations executed: an algorithm that performs more operations will have a longer running time. How the number of operations grows in proportion to the size of the input also affects the algorithm's running time. Time complexity and space complexity are two standard ways of characterizing algorithms. The former expresses how long it takes for the algorithm to run as a function of the length of the input. The latter expresses the total amount of space or memory that it occupies during this process. "An uncommonly complex algorithm requiring many minute functions will have a longer running time than a basic 'if, then' statement."
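To make the distinction concrete, here is a small illustrative sketch (my own example, not from the original text): two ways to sum the integers 1 through n, one whose operation count grows linearly with the input and one that uses a constant number of operations regardless of input size:

```python
def sum_linear(n: int) -> int:
    """O(n) time: the number of additions grows with the input size."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_constant(n: int) -> int:
    """O(1) time: one multiplication and one division, whatever n is."""
    return n * (n + 1) // 2

assert sum_linear(1_000) == sum_constant(1_000) == 500_500
```

Both return the same result, but for large n the first performs n additions while the second always performs two arithmetic operations; this difference in how the operation count scales is exactly what time complexity captures.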
Absolute Value Equations Practice Worksheet – The objective of Expressions and Equations Worksheets is to assist your child in learning more effectively and efficiently. These worksheets are interactive and include problems based on the order of operations. With these worksheets, children can grasp both simple and more complex concepts in a short amount of time. You can download these free resources in PDF format to aid your child in learning and practicing math-related equations. They are helpful for students in the 5th through 8th grades.
Download Free Absolute Value Equations Practice Worksheet
These worksheets can be utilized by students in the 5th-8th grades. These two-step word puzzles comprise fractions as well as decimals. Each worksheet contains ten problems. These worksheets are available on the internet as well as in print. They are an excellent way to learn how to reorder equations. These worksheets can be used for practicing rearranging equations, and they also help students understand equality and inverse operations. They are suitable for use by fifth through eighth grade students.
These worksheets are perfect for students who struggle to compute percentages. There are three kinds of problems to choose from: you can solve single-step problems with whole numbers or decimal numbers, or employ word-based methods for fractions and decimals. Each page contains 10 equations. These Equations Worksheets can be used by students from the 5th to 8th grade.
These worksheets can be used to practice fraction calculations and other concepts in algebra. The majority of these worksheets allow students to choose between three different types of problems: numerical, word-based, or a mixture of both. The type of problem matters, as each type poses a distinct kind of problem. There are ten challenges on each page, so they're fantastic resources for students in the 5th to 8th grade.
These worksheets aid students in understanding the connections between numbers and variables. They give students the chance to practice solving polynomial expressions, solving equations, and getting familiar with how to use them in everyday life. If you're looking for a great educational tool to learn about expressions and equations, you can begin with these worksheets. They will teach you about the various types of mathematical problems and the different symbols used to describe them.
These worksheets are great for students in the first grade. They will help them learn how to graph and solve equations, and they are ideal for practice with polynomial variables. These worksheets will help you factor and simplify them. You can find a wonderful set of equations and expressions worksheets suitable for kids of any grade level. Making the work your own is the most effective way to master equations.
There are a variety of worksheets for teaching quadratic equations, with a separate worksheet for each level. The worksheets are designed to help you practice solving problems of the fourth degree. After you've solved a stage, you'll move on to solving other types of equations. You can then work on solving problems of the same level. As an example, you may discover a problem that uses the same axis as an elongated number.
For the last three decades, there has been an increasing recognition of the extremely damaging and deleterious effect on the earth's climate and environment of the emission of greenhouse gases (GHG), such as carbon dioxide, from anthropogenic activity, which poses a severe threat to the long-term survival of the human species. While discussions of this phenomenon used to be carried out a few decades earlier mostly in scientific journals, the words global warming are now ubiquitous in the popular media, and the word "green" is now attached to all kinds of technology or activity as an attribute of environmental virtue. Every year or two, under the auspices of the United Nations, there is a COP (Conference of the Parties) meeting of the world's countries, whose leaders make promises to cut the production and consumption of fossil fuels (oil, gas and coal), whose combustion in power plants, industry, transport, buildings, homes, etc., is responsible for the production of GHGs, mostly carbon dioxide. However, data shows that most of these promises have turned out to be rhetorical "hot air," as the production of greenhouse gases still keeps rising. Since 2014, approximately 330 billion tons more of CO2 has been dumped into the global atmosphere, causing a rise of over 5% in GHG emissions. The US Administration recently gave approval to the Willow Project, a mega-scale oil extraction project in a pristine wilderness area of Alaska at the edge of the Arctic Circle, that is expected to produce at its peak 180,000 barrels of oil per day. The production and use of this quantity of petroleum will add over 9 million metric tons each year of the greenhouse gas carbon dioxide to the earth's atmosphere for approximately the next 30 years. This will happen at a time when the world is expected to halve its greenhouse gas (GHG) emissions by the year 2030 to prevent global temperature rising beyond 1.5 degrees C, according to the recently released sixth assessment report (AR6) of the IPCC (Intergovernmental Panel on Climate Change), the United Nations-mandated authoritative scientific body on climate change. AR6 has made clear the grave consequences of the worldwide rise in GHG emissions and the possibility of irreversible changes to the earth's climate, leading to large parts of the world becoming uninhabitable should we fail to change course. The Willow project is owned by the oil major ConocoPhillips, which obtained lease rights for the area, located in the National Petroleum Reserve-Alaska (NPR-A), back in the late 1990s. While environmental groups have criticized the US administration for approving the project and have threatened to sue it in court, legal analysts have pointed out that the government had few options given that ConocoPhillips held the lease rights to a portion of the NPR-A reserves, and the government would have lost in court if it had tried to block the project and had been sued. Moreover, while Willow is a major oil development project, it needs to be pointed out that it is just one of many hundreds of new oil and gas extraction projects that were approved just last year, of which many are in the US itself. Indeed, if the fracking of shale rock to extract natural gas that is occurring in the US is included, the US could be the world's leading oil and gas producer compared with other major oil and gas producing countries like Saudi Arabia and Norway.
In fact, an analysis of new oil and gas projects approved in just 2022 and 2023 in 30 countries, reported in the New York Times (NYT) newspaper of April 6, 2023, shows that several tens of billions of barrels of oil equivalent will be produced over the lifetime of these projects, typically 30 years, over and above the current production of oil and gas that is already too high and needs to be substantially reduced to meet climate goals. In the last year, as the covid pandemic waned, oil company production and profits have literally soared to astronomical levels, with the major oil companies, BP, Shell, Exxon, etc., raking in record profits of tens of billions of dollars. This bonanza is expected to continue for some years based on the planned projects that have either been approved or are in the approval chain. As the NYT report quoted above reveals: "Amid the record profits fossil fuel companies made last year, some also extended timelines for production further into the future, in essence reneging on pledges to transition their businesses, however slowly, toward renewable energy. BP recently revised its plan to cut production by 40 percent by 2030, setting a new target of 25 percent. The company's stock price surged on the news." Along with oil and gas, the other, dirtier, fossil fuel, coal, still appears to have plenty of life left. Both China and India are expanding coal production and implementing thermal power projects based on coal combustion. For example, despite the extensive lip service paid to renewables by the Indian government, India opened up significant areas of virgin forest in the state of Chhattisgarh in central India to bids by private investors to extract coal via strip mining, the most environmentally destructive form of coal extraction, which will destroy many thousands of acres of old growth trees. It is no surprise that the Modi government's favorite company, the Adani Group, whose name has become an international byword for financial corruption, won most of the bids, as reported in an article in Scroll of April 4 by Arunabh Saikia and Supriya Sharma: "The Adani Group emerged as one of the largest winners in commercial coal auctions held by the government in March, picking up four coal blocks at among the lowest prices." The same article indicated: "The Modi government greenlighted the clearance of about 3,000 acres of forest land in Chhattisgarh for the expansion of a coal mine operated by the Adani Group, even though a government-funded study found that coal extraction was not as per the mining plan." Despite these irregularities and shenanigans, the Adani Group remains a major player in the thermal coal power sector through its ownership of Australia's largest and most controversial coal mine and its project to supply Bangladesh power from a plant in Jharkhand that will burn coal imported from the Australian mine. These plans and projects to continue or enhance fossil fuel production need to be analyzed in the context of the predictions of the IPCC AR6 report as well as the commitments made by different countries to reduce GHG emissions at various recent COP conferences. AR6 indicates clearly that even the current rise in global temperature of 1.1 degrees C is causing changes in the climate system in every region of the world, including more frequent extreme weather events and the rise of sea levels, along with rapidly disappearing sea ice.
The IPCC scientists unambiguously state that by 2025, at the very latest, world GHG emissions need to peak, then decline by 43% by the year 2030, and reach net zero emissions by 2050 if global temperature rise over pre-industrial levels is to be limited to 1.5 degrees C. The environmental scientist Prof Kevin Anderson of the University of Manchester in the UK has recently argued (The Conversation, March 24, 2023) that the conclusions of the AR6 report of net-zero carbon emissions by 2050 to limit global temperature rise to 1.5°C are too optimistic, since their projections of GHG emissions were based on data from 2020 as the base year. Anderson calculates that if the 2020 emissions are updated to 2023, then keeping to the 1.5°C limit of global temperature rise implies that net zero emissions have to arrive a decade earlier, by 2040, and comments that "Given it will take a few years to organize the necessary political structures and technical deployment, the date for eliminating all CO2 emissions to remain within 1.5°C of warming comes closer still, to around the mid-2030s. This is a strikingly different level of urgency to that evoked by the IPCC's 'early 2050s.'" The likelihood of any of this happening can be gauged from the fact that by 2019 emissions had increased by 12% from their 2010 levels, and although there was a fortuitous dip in 2020 caused by the covid pandemic, emissions have continued to grow ever since. It appears more likely that a rise in global temperature by 2 to 3 degrees C will occur, which could lead to irreversible changes in the environment from climate impacts, such as sea level increases that may doom many island and coastal area communities, and temperature levels that could render many areas in the world uninhabitable. South Asia, in particular, is on the cusp of being one of the world's most vulnerable regions, with areas of Pakistan, India, Bangladesh, and Sri Lanka likely to be among the worst affected. There is considerable irony in the fact that many countries in Asia and Africa that have contributed the least to global warming will suffer the most from its consequences, another feature of the profound inequity of the global economic and political system. The US Administration under President Biden has committed hundreds of billions of dollars to fight climate change through policies such as speeding up the transition of the US transportation sector to electric vehicles, curbing leaks of methane (a much more potent greenhouse gas compared to CO2) from oil and gas wells, and providing numerous incentives for renewable energy production. Very recently, press reports claimed that the Environmental Protection Agency (EPA) will announce limits on GHG emissions from the 3,400 coal and natural gas power plants in the US that provide about 60% of the nation's electricity and account for 25% of the US GHG emissions. Almost all of the fossil fuel-based power plants will be required to cut or capture their CO2 emissions to achieve net zero carbon emissions by around 2040, if the press reports are accurate. While this is the first time that the federal government will require power plants to limit their GHG emissions, it is certain that this regulation will face severe opposition in the US Congress as well as an uncertain future in the court system. Moreover, the carbon capture technology that limits emission of CO2 is itself highly experimental and is currently used by a mere handful, around 20, of the thousands of power plants currently in operation.
There is still considerable uncertainty about the price of carbon credits and the concurrent improvements in carbon sequestration technology that will be needed to achieve the goals reported in the media. If a comparison is made, even just within the US, between the Administration's climate change plans and those of the oil and gas companies, supported by the major banks, to significantly increase production in the next few years, especially considering the huge increase in US LNG exports to Europe that are replacing Russian gas embargoed by many countries in Western Europe following the war in Ukraine, the words of a long-time climate activist stand out. Jamie Henn, director of Fossil Free Media, a nonprofit media lab devoted to ending the use of fossil fuels, comments: "This is the crux of the climate problem…The Biden Administration is taking some bold action – but they only want to tackle the demand for fossil fuels, not the supply. That's like trying to cut a piece of paper with only one side of the scissors." Jamie Henn raises a very basic point here about why it is difficult, if not impossible, for the political, economic, and legal system of the US to curb and stop the investment of billions of dollars in the Willow or similar oil and gas projects that will produce many gigatons of carbon dioxide and other GHGs over the next decades, which could well drown many portions of the earth's surface and also render large land areas uninhabitable. It would appear that it is the fundamental system of property relations, also known as the global capitalist system, that renders illegal the "taking" of private property, in other words stopping some of the largest companies in the world from continuing to carry on an activity they have been doing for well over 150 years. In principle, the US political and legal system could restrict or ban the production of fossil fuels as hazardous to human health, on grounds somewhat similar to those used for curbing tobacco smoking. However, given that tobacco products are still legal, though restricted, and it took perhaps four decades to achieve those restrictions, the chances of doing something similar for fossil fuels in the next decade or so appear to be somewhat less than those of a snowflake in hell. Based on the scenarios already sketched in the IPCC's AR6 report and updated by Prof. Kevin Anderson, the next decade or two will be extremely difficult, if not an ongoing catastrophe, for many people in the tropical areas of the planet that account for a large majority of the world's population. Some of this is probably unavoidable, but it emphasizes the urgent need to prepare plans to prevent the worst scenarios from occurring in areas that have the least resources to overcome climate impacts. The magazine Monthly Review's April 2023 issue is devoted to climate change, and one paragraph in its Review of the Month lead article points to the outline of a way forward in the advanced capitalist countries that is worth quoting at length: "It is likely that the struggle, at least in the capitalist core, will have two phases, the first of which will be ecodemocratic, aimed at a kind of ecological popular front directed at the fossil fuel companies and financial capital, but pointing in an ecosocialist direction since going against the logic of capitalism; the second of which will take a form in which ecosocialism is dominant if there is to be any hope at all. What is certain is that we have to abandon capital accumulation as the driver of society.
As the leaked 2022 IPCC climate mitigation report agreed to by scientists clearly indicated—prior to the censorship of this report by governments in the published version—what is required at this point is the adoption of new, low-energy solutions, necessitating vast changes in the structure of social relations.”
Municipal biosolids, once considered a challenging byproduct of wastewater treatment, have evolved into a subject of exploration for their potential environmental and community benefits. Through advanced biosolids processing techniques, municipalities are finding ways to transform what was once deemed waste into resources that contribute positively to both the environment and local communities. 1. Resource Recovery: A Sustainable Approach Municipal biosolids processing stands at the forefront of a sustainable approach to waste management. By harnessing advanced technologies, wastewater treatment plants can recover valuable resources from biosolids, turning them into reusable and beneficial products. Notably, the extraction of nutrients such as nitrogen and phosphorus from biosolids provides an eco-friendly alternative to chemical fertilizers in agriculture. This resource recovery not only promotes environmental sustainability but also aligns with the principles of a circular economy, where waste becomes a valuable input for other processes. 2. Renewable Energy Generation: Turning Waste into Power One of the significant breakthroughs in biosolids processing is its potential for renewable energy generation. Through methods like anaerobic digestion, the organic matter present in biosolids can be converted into biogas—a valuable source of renewable energy. This biogas can be utilized to power the wastewater treatment plants themselves, reducing their dependence on external energy sources and contributing to a more sustainable energy landscape. The integration of biosolids into renewable energy production adds an extra layer of eco-efficiency to wastewater treatment processes. 3. Improved Soil Health and Land Reclamation Beyond waste management, biosolids, when processed and treated appropriately, serve as an excellent soil conditioner. The nutrient-rich content of biosolids enhances soil fertility, improves water retention, and encourages microbial activity. Consequently, biosolids become a valuable amendment for degraded soils, aiding in land reclamation and promoting healthier and more productive ecosystems. Controlled application of biosolids not only benefits soil health but also reduces the need for landfill disposal, aligning with sustainable waste management practices. 4. Cost-Effective Waste Management Efficient biosolids processing not only contributes to environmental benefits but also proves to be a financially prudent solution for municipalities. By recovering resources and generating renewable energy, the overall operational costs of wastewater treatment plants can be offset. This reduction in costs is crucial for municipalities, as it alleviates the financial burden on local governments and taxpayers. The cost-effectiveness of biosolids processing makes it an attractive and viable option for municipalities striving to balance budgetary constraints with sustainable waste management practices. 5. Community and Agricultural Collaboration: Strengthening Local Ties Biosolids processing goes beyond the technical aspects of waste management; it fosters collaboration between municipalities, agricultural communities, and local stakeholders. By providing a sustainable source of nutrients for agriculture, municipalities can actively support local farmers, contributing to the growth of regional economies. Transparent communication and community engagement regarding biosolids processing build trust and understanding among residents. 
This ensures the acceptance and success of such initiatives, fostering a sense of shared responsibility for sustainable waste management practices.

Looking to Install a Municipal Biosolids System?

Municipal biosolids processing, when approached with advanced technologies and a commitment to environmental responsibility, offers a comprehensive range of benefits that extend well beyond traditional waste management practices. From resource recovery to renewable energy generation, improved soil health, cost-effective solutions, and strengthened community ties, the positive impacts of biosolids processing are shaping a more sustainable and integrated approach to wastewater management. As municipalities continue to explore and implement these innovative solutions, the potential for positive environmental and community outcomes grows, contributing to a more resilient and responsible future. Work with Vulcan to design and build a system to your specifications today.
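To put the renewable-energy claim in section 2 into rough perspective, here is a back-of-the-envelope sketch. Every input value is an illustrative assumption drawn from typical textbook ranges, not a measurement from any particular plant:

```python
# Rough, illustrative estimate of energy recoverable from anaerobic digestion.
# All input values are assumed typical ranges, not data from a specific plant.
vs_load_kg_per_day = 10_000    # hypothetical digester loading (volatile solids)
biogas_yield_m3_per_kg = 0.4   # assumed biogas yield per kg of volatile solids
ch4_fraction = 0.6             # methane content of digester gas (common range)
ch4_lhv_mj_per_m3 = 35.8       # lower heating value of methane

biogas_m3 = vs_load_kg_per_day * biogas_yield_m3_per_kg
energy_mj = biogas_m3 * ch4_fraction * ch4_lhv_mj_per_m3
print(f"~{biogas_m3:,.0f} m3/day of biogas, "
      f"~{energy_mj / 3600:.0f} MWh/day of chemical energy")
```

Even at modest conversion efficiencies, energy on this order can offset a meaningful share of a treatment plant's electricity and heating demand, which is the basis of the cost-offset argument in section 4.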
2. Identifying different types of problem With any mathematical task or problem you set your pupils, there are ‘deep’ features – features that define the nature of the task, and strategies that might help solve it. Almost all mathematics problems have these deep features, overlaid with a particular set of superficial features. As a teacher, you have to help your pupils understand that once they have recognised the superficial features, changing them does not have any effect on how we solve the problem. The strategies for solving a problem remain the same. (See Resource 2: Ways to help pupils solve problems.) Case Study 2: The essence of the problem Amma wrote this problem on the board: In one family, there are two children: Charles is 8 and Osei is 4. What is the mean age of the children? Some pupils immediately wanted to answer the question, but Amma told them that before they worked out the answer, she wanted them to look very closely at the question – at what kind of a question it was. Was there anything there she could change that would not alter the sum? Some pupils realised that they could change the children’s names without changing the sum. Amma congratulated them. She drew a simple sum on the board (1+1=2) and then said, ‘If I change the numbers here,’ (writing 2+5=7) ‘it is not the same sum, but it is still the same kind of sum. On our question about the mean, what could we change, but still have the same kind of sum?’ Some pupils suggested they could change the ages of the pupils as well as the names. Then Amma asked, ‘Would it be a different kind of sum if we talked about cows instead?’ They kept talking in this way, until they realised that they could change the thing being considered, the number and the property of these things being counted, all without changing the kind of sum being done. The pupils then began writing and answering as many different examples of this kind of sum as they could imagine. Activity 2: What can change, what must stay the same? Try this activity yourself first. - Write the following question on your chalkboard: Mr Ogunlade is building a cement block wall along one side of his land to keep the goats out. He makes the wall 10 blocks high and 20 blocks long. How many blocks will he need in total? - Ask your class to solve the problem. - Check their answer. - Next, ask your pupils in groups of four or five to discuss together the answer and what can be changed about the problem, yet still leave it essentially the same so it can be solved in the same way. - Ask the groups to make up another example, essentially the same, so that the basic task is not changed. - Swap their problem with another group and work out the answer. - Do they have to solve this new problem in the same way?
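For reference, the arithmetic behind the two worked examples above: the mean age is (8 + 4) ÷ 2 = 6 years, and the wall needs 10 × 20 = 200 blocks. Changing the names, the ages, or the objects being counted changes the numbers, but not these two calculations, which is exactly the point of the activity.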
The NASA Perseverance rover, which has been exploring the Jezero Crater on Mars since February 2021, has recorded the acoustic environment of the Red Planet for the first time. Using the SuperCam microphone developed at Los Alamos National Laboratory and a consortium of French universities under the Centre National D’Etudes Spatiales, an international research team published the first analysis of these sounds on April 1 in Nature. “For the first time we were able to record the sound environment of the Red Planet between 20 hertz and 20 kilohertz, which is the audible range for humans,” said Baptiste Chide, a postdoctoral fellow at Los Alamos National Laboratory and an author on the paper. “We discovered a totally new acoustic environment. Being able to record these sounds helps us better understand the behavior of sound on the Martian surface and helps us learn more about the atmosphere of the planet.” While missions to Mars have been returning images from the Red Planet for nearly 50 years, all have been silent; no microphone had been able to record the acoustic environment associated with the Martian landscape until now. Researchers believed that studying the Martian soundscape could contribute to the understanding of the planet, which led to the development of two microphones for the Perseverance mission: one provided by NASA’s Jet Propulsion Laboratory, and the SuperCam microphone, provided by the Institut Supérieur de l'Aéronautique et de l'Espace in France. The recordings revealed that Mars is a quiet planet, Chide said. Besides the wind and the turbulence of the atmosphere, natural sound sources are not very abundant. “With these recordings of the wind, researchers have been able to correlate the sound with variations in the wind flow, shedding light on wind gusts at very short timescales for the first time,” he said. But the Perseverance rover has its own sound sources, including the SuperCam laser zapping rocks on the surface and the Ingenuity helicopter flying over the Jezero Crater. While the behavior of sound is well known on Earth, sound propagation (how sound travels) in the Martian atmosphere is far less understood and had only been predicted by uncertain theoretical models. From the recordings, scientists found that sound attenuation — the measure of sound’s energy loss as it travels — is stronger on Mars than on Earth. The propagation of sound also differs across frequencies: high frequencies are lost very quickly over short distances, in contrast to low frequencies. These differences are due to the high CO2 content, which makes up 96% of Mars’ atmosphere, and the low pressure at the Martian surface, which is 170 times lower than on Earth. Together, these factors would make a conversation between two people, separated by only a few yards, difficult. After one year of Perseverance on Mars, Chide and the research team have collected more than five hours of Martian sounds. These results show the potential for acoustic measurements to study the dynamics of Mars’ atmosphere, and eventually could improve the understanding of other planetary atmospheres such as those of Venus or Titan. Listen to some of the sounds the rover has recorded during its first year here.
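While the attenuation behavior had to be measured in place, the mean speed of sound in a thin CO2 atmosphere can be roughly estimated from the ideal-gas relation c = sqrt(γRT/M). A minimal sketch, using assumed representative surface values rather than mission data:

```python
from math import sqrt

# Ideal-gas estimate of the speed of sound: c = sqrt(gamma * R * T / M).
R = 8.314          # J/(mol K), universal gas constant
gamma_co2 = 1.29   # approximate heat-capacity ratio of CO2
M_co2 = 0.044      # kg/mol, molar mass of CO2
T_mars = 240.0     # K, assumed representative Martian surface temperature

c_mars = sqrt(gamma_co2 * R * T_mars / M_co2)
print(f"Estimated Martian sound speed: ~{c_mars:.0f} m/s "
      "(vs ~343 m/s in Earth air at 20 C)")
```

The result, roughly 240 m/s, is consistent with the slower sound propagation reported for the Martian surface.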
Ozone is one of the most important trace gases in our atmosphere that both benefits and harms life on Earth. High ground-level ozone amounts contribute to poor air quality, adversely affecting human health, agricultural productivity, and forested ecosystems. Ozone absorbs infrared radiation, and is most potent as a greenhouse gas in the cold upper troposphere located 8–15 km above the surface. In the stratosphere, between approximately 15 and 50 km above the Earth’s surface, a layer rich in ozone serves as a “sunscreen” for the world by shielding the Earth’s surface from harmful ultraviolet radiation. This absorption of solar energy also affects atmospheric circulation patterns and thus influences weather around the globe. Moreover, throughout the atmosphere, ozone is the key ingredient that initiates chemical cleansing of the atmosphere of various pollutants, such as carbon monoxide and methane, among others, which could otherwise accumulate to harmful levels or exert a stronger influence on climate. Therefore, changes to ozone anywhere in the atmosphere can have major impacts on the Earth.

A major challenge in developing both a quantitative scientific understanding of atmospheric ozone and its changes, and effective policies to mitigate the consequent harmful aspects, is that ozone is not directly emitted to the atmosphere. Instead, ozone abundances are mainly controlled by emissions of other trace gases and a suite of chemical reactions. The relative importance of individual reactions varies with location and season. Human activities associated with energy production, transportation, and industry have altered the chemical reactions that create and destroy ozone throughout the atmosphere, leading to net increases in some regions and net decreases in others. As the multiple roles of ozone described above depend upon its location in the atmosphere, the overall societal impacts depend on where in the atmosphere ozone changes occur. The perturbations to ozone abundances by human activities have been substantial, and have, on the whole, enhanced its negative impacts. Significant scientific progress has been made in understanding the factors controlling atmospheric ozone distributions and their temporal changes in different regions of the atmosphere, as well as in understanding the impacts of those changes on the planet. This progress has been made through a combination of many in situ and remote-sensing observations, fundamental laboratory-based process studies, and theoretical and numerical modeling. Several national and international governmental and scientific organizations produce regular assessments of scientific understanding and policy related to these ozone changes. The American Meteorological Society’s (AMS) position on atmospheric ozone, summarized below, is based on these assessments and the broader scientific literature that underpins such assessments.

Ozone in the stratosphere, profoundly important for life on land and in surface waters, can be depleted by industrial chemicals.

Within the lower half of the stratosphere, between about 15 and 30 km above the surface, ozone concentrations reach their largest values in the entire atmosphere, several times higher than in the troposphere. This “ozone layer”, which evolved naturally, absorbs the majority of harsh ultraviolet (UV) radiation from the sun and shields life on land and in surface waters from radiation capable of damaging DNA, skin, and eyes.
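For orientation, the natural formation and destruction of stratospheric ozone is often summarized by the simplified Chapman reactions; this textbook sketch omits the catalytic cycles (including the chlorine and bromine chemistry discussed below) that modulate the balance:

O2 + sunlight → O + O (photolysis of molecular oxygen)
O + O2 + M → O3 (ozone formation; M is any third molecule that carries away excess energy)
O3 + sunlight → O2 + O (ozone photolysis, the step that absorbs harmful UV)
O + O3 → 2 O2 (ozone destruction)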
Decreases in stratospheric ozone therefore lead directly to increased exposure to UV radiation. Natural and controlled experiments have demonstrated that greater exposure to UV radiation increases rates of certain skin cancers and cataracts in humans, and decreases photosynthetic productivity in terrestrial and marine ecosystems as well as agricultural crop yields. In the 1970s, it was recognized that the ozone layer was threatened by the use of ozone-depleting substances (ODSs), chemicals such as chlorine-containing chlorofluorocarbons (used for refrigeration, air conditioning, and other applications), bromine-containing Halons (used for fire suppression), and many other chemicals containing chlorine and bromine. These human-emitted chemicals have their most dramatic impact in the annual Antarctic ozone hole, a phenomenon that started in the late 1970s and was recognized in the early 1980s, in which more than half of the ozone above Antarctica is now depleted each year from late September through October. Globally, the ozone layer has exhibited decreases of a few percent since 1980. It is now known with high confidence that the world avoided a major catastrophe of global ozone layer depletion by taking deliberate actions. A series of international measurement campaigns by multiple agencies and universities to study stratospheric ozone chemistry provided undeniable evidence linking industrial ODSs to the severe stratospheric ozone depletion during Antarctic spring and around the globe. As the scientific understanding of ozone depletion increased and the risks of continued ODS production were realized, the nations of the world agreed to take action to minimize ozone layer depletion and enable recovery in the future. This action was codified in 1987 in the Montreal Protocol for the protection of the ozone layer, the only universally ratified international environmental treaty. Because of the compliance of all countries with this treaty, ozone layer depletion is no longer worsening, and indications are that ozone is even beginning to recover. The current scientific understanding, supported by numerical model projections, suggests the ozone layer should return to pre-ODS levels of 1980 in the late 21st century with continued compliance with the Protocol, barring unforeseen events. This result stands as a shining example of the societal impacts of basic scientific discoveries and research, and of the application of research results to policy development.

Stratospheric ozone influences weather, and this influence is detectable due to the ozone hole.

Through the influence of stratospheric ozone on radiative heating and cooling, there are couplings between ozone and atmospheric circulation. Increasing levels of greenhouse gases in the atmosphere have cooled the stratosphere. This cooling in turn affects the rates of chemical reactions governing stratospheric ozone abundances. Such effects of ozone concentrations on radiation, and therefore on temperature and moisture budgets, and the associated feedbacks with climate, are becoming routinely included in climate models as well as operational weather prediction systems for improved simulations of short- and long-term variations in atmospheric circulation. Observational and computer modeling studies demonstrate that the Antarctic ozone hole has led to a delay in the seasonal breakdown of the stratospheric polar vortex, which influences both the recovery of the ozone hole and the lower atmospheric circulation in the Southern Hemisphere summer.
In contrast to the stratosphere, ozone concentrations in the lower atmosphere have increased since preindustrial times, often most profoundly in and downwind of large urban areas, degrading human and ecosystem health as well as agricultural crop yields.

Human activities associated with industrialization and modernization, such as power generation and transportation, have dramatically increased emissions of ozone precursors such as nitrogen oxides (NOx) and volatile organic compounds (VOCs). Research in the 1950s showed that together with sunlight, these pollutants catalyze the rapid formation of ozone in the air — a process known as photochemical smog formation, of which ozone and secondary aerosol particulate matter are the main byproducts. Prevalent in major cities and surrounding areas around the world, high ozone concentrations in photochemical smog can adversely affect human health, the built environment, ecosystems, and agricultural yields. For example, epidemiological studies show an increase in asthma-related hospital visits following enhanced ozone exposure. Forests downwind of regions with high surface ozone show decreased productivity and visible leaf and needle damage. High-ozone episodes lead to deterioration of common polymers. The exposure of soy plants to elevated ozone leads to decreased crop yields, costing an estimated $1 billion per year in lost productivity at current surface ozone levels. For these and other reasons, many localities now aim to regulate ozone concentrations at the surface to remain below specific threshold values.

The recognition that chemical processes, especially those influenced by human actions, contribute greatly to ozone concentrations in the lower atmosphere was the foundation for policy action. Effective policies to reduce surface and lower atmospheric ozone concentrations must incorporate an understanding of the meteorological processes that can lead to elevated concentrations of ozone, the natural and anthropogenic activities that lead to emissions of ozone precursors, and the atmospheric chemical reactions that form ozone. Ozone concentrations respond nonlinearly to changes in emitted precursor gases, with some precursors being more influenced by human activities than others. Moreover, each locale has a different background ozone concentration set by circulation patterns and pollution sources upwind, which can vary significantly from day to day. Furthermore, ozone in a given location can increase as the result of influences beyond the control of that region, for example, due to ozone transport from the stratosphere, production from wildfires, and international precursor emissions. Therefore, exemptions are included in U.S. air quality regulations for influences beyond the control of an air management agency. In the United States and other nations that have enacted air pollution control policies, daytime maximum surface ozone concentrations have decreased over the past decades; unfortunately, surface ozone levels are increasing in other regions of the world. Ozone precursor emissions have decreased severalfold across the United States in response to regulations, even while energy production and automobile use have continued to increase. Thus, there has been a major success in cleaning U.S. near-surface air across much of the nation. Other industrialized nations have enacted similar controls, approaching similar levels of success. Air quality policies continue to evolve as scientific understanding improves.
For example, nonmethane VOC emission controls effectively reduced the highest ozone levels in Los Angeles, CA. However, in U.S. urban regions with abundant biogenic VOC emissions from vegetation, the highest warm-season ozone levels began to decline only after NOx reductions were phased in under the Clean Air Act. Yet ozone pollution is increasing in populated areas of rapidly developing countries, where severe events occur analogous to those that occurred in U.S. or European cities during the mid to late 20th century. Because of the scientific advances, technological improvements, and lessons learned in cleaning up ozone pollution in developed countries, there is now an opportunity for a more rapid transition to cleaner air in these areas while increasing prosperity.

Ozone and its precursors are transported throughout the lower atmosphere, adding a hemispheric dimension to local air quality.

Ozone produced as part of photochemical smog, along with precursors such as methane, carbon monoxide, and the nitrogen oxide reservoirs that go on to produce ozone, can be transported through the atmosphere for days. The global spread of ozone and its precursors resulting from pollution has led to increases in ozone concentrations on large regional and even hemispheric scales throughout the troposphere, the region between the surface and the stratosphere. Increases in tropospheric ozone have been largest in the Northern Hemisphere, where anthropogenic emissions of ozone precursors are the largest. This is documented in long-term monitoring network observations. Air quality regulations continue to aim for lower ozone concentrations, motivated by improved understanding of ozone’s negative effects on public health and ecosystems. Yet increases in the regional or even hemispheric ozone “background” concentrations are generally outside of local control, and may even arise from international or intercontinental transport of ozone formed from precursors emitted in another country. As such, to attain compliance with more stringent regulations on surface ozone concentrations, a locality may require even more restrictive emission control strategies if global tropospheric ozone continues to increase.

Changes in tropospheric ozone influence as well as respond to atmospheric composition and climate.

As noted above, ozone is radiatively active, acting as a greenhouse gas in the upper troposphere. The increases in upper tropospheric ozone since preindustrial times, due to anthropogenic emissions of its precursors, have contributed significantly to the positive radiative forcing of climate (warming), with a magnitude similar to that from changes in methane concentrations, though the spatial and temporal patterns of these forcings are not necessarily comparable. If ozone precursor emissions were reduced, the atmosphere would respond quickly with reduced ozone. As ozone is the dominant source of the hydroxyl radical, our atmosphere’s main cleansing agent, increases in global tropospheric ozone have altered the atmosphere’s ability to cleanse itself. Changes in tropospheric ozone thus affect the abundance of other greenhouse gases, such as methane and certain halocarbons, as well as that of aerosol particles, while changes in methane and aerosol particles influence tropospheric ozone concentrations. These chemical effects couple the fates of ozone and several climate forcing agents, and their impacts on radiative forcing, in ways that remain the focus of ongoing research.
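The central role of NOx in the urban ozone story above can be made concrete with the daytime photostationary-state (Leighton) relationship, which ties ozone to the NO2/NO ratio and the NO2 photolysis rate. A minimal sketch with assumed midday values (illustrative numbers, not measurements):

```python
# Photostationary-state (Leighton) estimate of ozone:
#   [O3] ~= j_NO2 * [NO2] / (k * [NO])
j_no2 = 8.0e-3      # 1/s, assumed midday NO2 photolysis frequency
k_no_o3 = 1.8e-14   # cm^3/(molecule s), NO + O3 rate constant near 298 K
ratio_no2_no = 1.0  # assumed NO2/NO concentration ratio

o3 = j_no2 * ratio_no2_no / k_no_o3   # molecules/cm^3
air = 2.46e19                         # molecules/cm^3 of air at 1 atm, 298 K
print(f"O3 ~ {o3:.2e} molecules/cm^3 (~{o3 / air * 1e9:.0f} ppb)")
```

Peroxy radicals from VOC oxidation convert NO to NO2 without consuming ozone, pushing this ratio (and hence ozone) upward; that interplay is one source of the nonlinear NOx/VOC behavior described above.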
Ozone changes remain a significant environmental challenge, requiring a broad set of actions across the world to monitor, evaluate, and mitigate the impacts.

There are still many important gaps in knowledge and policy actions related to ozone throughout the atmosphere. They include ground-level ozone pollution with its major health and ecological harms, increases in tropospheric ozone and the consequent climate impacts, and ozone layer depletion and its consequences for living things at the surface. Therefore, long-term trends in ozone abundances need to be documented around the world, monitored, and their causes determined to ensure that policy actions are working, to establish new scientific understanding, and to develop new policy. This need in turn calls for expanded in situ and remote-sensing observational capabilities for both chemical and meteorological variables.

Natural variations in atmospheric ozone make it difficult to identify changes in ozone caused by anthropogenic activities, yet being able to make such distinctions is essential for policy decisions. Changes in climate and other atmospheric and oceanic events, such as wildfires, contribute to changes in ozone levels in various parts of the atmosphere through transport and chemical processes. In addition, even though the scientific understanding of the various controls on atmospheric ozone is well advanced, large perturbations to those controls by human activities continue. As the levels of human-induced emissions decrease (e.g., as ODS levels or tropospheric ozone precursor levels respond to regulations), natural variations play an even larger role in year-to-year variations. This situation is further complicated because emissions from some regions of the globe continue to increase, and this pollution undergoes international transport, with the impacts on other regions varying with changes in atmospheric circulation. Yet parsing the contributions of natural variations, of human activities overall, and of specific regional activities is essential for policy action. Therefore, an integrative approach that accounts for all possible causes of ozone trends over time is required. Such an approach necessitates not only observations of the changes, which are currently sparse, but also the development and testing of mechanistic explanations of those changes, which rely on a hierarchy of computer models ranging from process-based models to fully coupled climate-chemistry Earth-system models. Scientific understanding and technical advances across the range of spatial and temporal scales represented in current computer models are enabling such attributions on regional and global scales, but these attributions so far remain relatively uncertain. Thus, developing a deeper scientific understanding of the natural and anthropogenic processes affecting ozone, and the variability therein, and improving the representation of such understanding in models will remain crucial activities for future policy decisions. Significant scientific effort over the past 50 years has led to recognition of the multiple roles of atmospheric ozone and substantial progress in identifying the processes shaping ozone trends in different parts of the atmosphere, as well as the associated impacts on air quality, ecosystem viability, weather, and climate.
Based on these advances, society has taken concerted actions to mitigate the negative impacts of ozone trends, successfully addressing the causes of ozone depletion in the stratosphere and slowing or reversing increases of the highest summertime surface ozone events near populated regions in North America and Europe. Indeed, there are clear signs of progress in moving ozone abundances toward their natural levels in many regions. International cooperation in responding to the challenge of stratospheric ozone depletion, and clean air regulations in developed nations decreasing surface ozone in urban areas, are two of the great environmental success stories of the twentieth century. While many facets of ozone's atmospheric behavior are understood, scientific uncertainties and policy challenges remain. Their resolution will require combined efforts by the meteorological, chemical, and atmospheric physics communities. The AMS strongly supports these efforts focused on obtaining a better understanding of ozone and its behavior. [This statement is considered in force until January 2023 unless superseded by a new statement issued by the AMS Council before this date.]
Young people prepare for their visit to the retirement home by writing an autobiography. They work together to come up with questions to ask their senior friends.
The learners examine the responsible choices of Lorenzo de Zavala, who was a leader in Texas and a signer of the Texas Declaration of Independence.
A positive school or community climate is made up of people making choices about how to act and treat one another. It is everyone's responsibility to follow the established social contract. To make a deliberate social contract, participants identify how they want to act together and survey the...
Unit: Constitution Day
Students identify key events in U.S. history and the magnitude of the Constitution in context, with a particular emphasis on philanthropy. This lesson is designed for Citizenship/Constitution Day (September 17) and connects students to the historical significance of the...
To have students partner with a nonprofit organization to design and complete a service-learning project for that organization. In the third trimester of the Urban EdVenture course, students begin work on the final project in collaboration with their homeroom teachers. Each...
Unit: My Country, My Community
In a persuasive essay, learners describe the responsibilities of American citizenship and the cost of freedom. They connect how philanthropic action is a part of those costs. “Freedom isn’t free. It passes on an enormous debt to the recipient.”
Next Steps for a Healthy Gulf of Mexico Watershed
Landscape at a Glance
The Florida Keys are a coral cay archipelago that extends about 100 miles from the southern tip of the Florida peninsula in an arc to the southwest and then west into the Gulf. The islands lie along the Florida Straits, dividing the Atlantic Ocean to the east from the Gulf to the northwest, and defining one edge of Florida Bay. Even though most of the land area in the focal area lies between two and three feet above high tide, the combination of marine and tropical upland habitats supports a wealth of biological diversity and habitats, including numerous endemic plants and animals. The coral reefs of the Florida Keys are the most extensive living coral reef system in North America and the third largest coral reef system in the world. Much of this system is protected as part of the Florida Keys National Marine Sanctuary, encompassing 2,800 square nautical miles of state and federal waters in the Keys. This marine protected area shares its conservation footprint with State Wildlife Management Areas, National Parks and NWRs that conserve the habitats that are home to federally listed species such as the Key deer, the American crocodile, the Lower Keys marsh rabbit, the silver rice rat and sea turtles, as well as native and migratory birds, butterflies and plants. Major vegetation cover types include pine rockland, tropical hardwood hammock, freshwater wetland, mangrove forest and seagrass beds. The West Indian hardwood hammocks and pine rocklands are imperiled upland communities that include more than 120 species of hardwood trees, shrubs and plants. These forests are home to several endangered and threatened species including the Key Largo woodrat, the Key Largo cotton mouse, the Schaus swallowtail butterfly, the eastern indigo snake and the Stock Island tree snail. The mangrove forest ecosystem along the shoreline provides food and shelter to a myriad of marine organisms and shelter for diverse avian life. The shallow protected waters of Florida Bay and nearshore Atlantic waters support lush seagrass beds that serve as important nurseries for marine life and foraging grounds for wading birds. The Keys have become less resilient over time and are losing ecosystem integrity due to the increasing environmental impact of factors such as climate change, invasive species, habitat fragmentation and poor water quality. Perhaps the greatest conservation challenge facing the Keys stems from rising sea levels; data show that the sea level in the area has risen nine inches in the last 100 years, and it is expected to rise an additional three to six feet by the year 2100. Impacts from such a rapid rise in sea level include increasingly fewer upland areas as marine and intertidal habitats move upslope and displace native species, as well as diminished property values as inundation becomes more widespread and increasingly common. Additionally, an increase in nutrient loading in nearshore waters from land-based sources over the past few decades has resulted in decreased water clarity and unnatural algal growth, which severely harms nearshore coral reef communities. The Service believes that it is imperative that we quickly and collectively act on adaptive and sustainable conservation strategies to address these immediate threats to the Florida Keys. The Florida Keys are home to a whole host of flora and fauna uniquely adapted to their insular environment.
Many of these are distinctive subspecies that have evolved in isolation from their mainland congeners, are found nowhere else in the world, and are now endangered or threatened. Strategic land conservation and habitat management, particularly with potential climate change effects in mind, can aid in accomplishing recovery targets in these dynamic systems for several of these endangered species. Such targets are focused on stable populations with positive growth rates over time, such as for the Key deer (seven-year running average), the Key Largo woodrat (three-year running average for six years), and the Lower Keys marsh rabbit (three-year running average for six years). Numerous invertebrate species (e.g., the endangered Miami blue butterfly and the threatened Stock Island tree snail) and plants (e.g., the endangered Key tree-cactus and the threatened Garber’s spurge) are also endemic to the Keys, and their habitats (e.g., tropical hammock and pine) need to be restored and conserved. The subtropical climate of the Florida Keys also represents the northern range extent of even more species that are either federally listed or rare in the United States but that may occur more commonly in the Neotropics, such as the American crocodile, the West Indian manatee, the white-crowned pigeon, the mangrove cuckoo, and two threatened coral species, the staghorn and elkhorn. Restoring hydrology and protecting water quality will not only benefit some of these species, but will also benefit many waterbirds (e.g., the great white heron, the roseate tern and the brown pelican) with important nesting areas within the focal area. As with aquatic restoration in the Southwest Florida Focal Area, the Service has also been committed to restoring hydrology in the Keys for many years. Federal trust resource species like staghorn and elkhorn coral stand to benefit from a watershed-scale restoration effort that includes improvements in water quality and restores functional freshwater flow to the estuary by removing old roadbeds and promoting better practices for wastewater treatment. These actions and others that are already outlined in existing state-federal collaborative plans will improve conditions for many commercially and recreationally important marine fish like red drum, bonefish, and tarpon, as well as invertebrate species like blue crabs, shrimp, and oysters. A large number of at-risk species also occur in the Florida Keys (e.g., the sawgrass skipper, the Key ringneck snake, the Florida Keys mole skink, and the Lower Keys population of the striped mud turtle, among others); most would benefit from the priority actions described below.
High Priority Actions based on the Service’s Vision
Continue strategic land conservation efforts to ensure sustainable plant communities and quality wildlife habitats, particularly mangrove and pine rocklands habitat, and to build resiliency in preparation for the accelerated effects of climate change and sea level rise.
- Coordinate with the state of Florida and Monroe County on their conservation land acquisition programs to strategically identify high-quality parcels and optimize land protection efforts to foster landscape conservation on private and public lands.
- Work with willing sellers to protect important wildlife habitats within approved acquisition boundaries of NWRs in the Keys.
- Work with partners to apply land conservation tools, such as conservation easements, partnership agreements, mitigation banks and technical assistance to protect, restore and manage priority habitats throughout the Florida Keys ecosystem. - Work with the Peninsular Florida LCC, state and federal agencies, and other stakeholders to develop a Florida Keys adaptation strategy to anticipate the conservation needs of the future in light of increasing sea level rise and urbanization. - Initiate planning for potential “ex-situ” or off-site conservation strategies to prevent extinction of species and subspecies endemic to the Florida Keys if conservation partners are unable to protect adequate habitat from impacts of sea level rise. - Implement long-term monitoring of any translocated species and assess their impacts on their new habitat and associated species. Enhance the biological diversity and resiliency of the fire-dependent pine rocklands and restore natural conditions and resilience of diverse habitats through frequent prescribed fire and/or control of invasive species. - Work with state, federal, NGO and private land partners to implement frequent prescribed fire in fire-dependent habitats, especially pine rocklands where numerous federally listed plant species exist. - Identify alternative treatments for maintaining stands of pine rocklands and reducing organic fuels where prescribed burning is no longer feasible due to adjacent, high-density urban areas. - Through coordination with the Florida Keys Invasive Exotics Task Force and its member organizations, detect and monitor the presence, spread and damage caused by invasive non-native plants, particularly upon listed native plant and wildlife species, in order to develop priorities for eradication and/or control. - Replace non-native plant species known to destabilize dunes and other coastal habitats with native species that are a natural defense against storm surge and coastal erosion which is likely to be exacerbated by sea level rise. - Work towards the eradication of selected non-native plant specimens that represent exceptional threats to native habitats (e.g., mature individuals of white leadtree, Australian pine and Brazilian pepper found in hammock canopy openings). - Work with landowners to control non-native seed sources from private lands and to increase coordinated mapping and monitoring of areas with known infestations of non-native plant species. Restore hydrologic processes to improve water quality, water flow and tidal connections, and to enhance reef and adjacent coastal habitats, including mangrove forests, for the benefit of native fish and wildlife. - Support implementation of landscape-level actions found in the Comprehensive Everglades Restoration Plan to enhance the water quality of Florida Bay, which will improve the overall health of the Florida Keys marine ecosystem, particularly seagrass and coral community habitats. - Remove backfill from historic wetlands and restore hydrologic connectivity in degraded wetlands. - Fill and plug ditches (e.g., former mosquito ditches) identified as essential to prevent unnaturally rapid infiltration of interior freshwater wetland, transitional and upland habitats by saltwater. 
- Restore hydrological connectivity by removing obsolete roadbeds and installing culverts under actively used roads to facilitate the rapid drainage of storm surge waters, especially important in places where storm surge has become impounded and is causing damage to freshwater-dependent habitats and species. These restoration actions are also effective at reviving and restoring degraded mangrove forests.
- Monitor and assess the quality and quantity of subterranean freshwater lenses (i.e., layers of fresh groundwater that float on top of denser saltwater; they form when rainwater seeps down through the soil surface and collects above the underlying seawater, extending down to about five feet below sea level) to determine the effects of saltwater intrusion caused by sea level rise on fish, wildlife, and their habitats.
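The sensitivity of these lenses to sea level rise can be illustrated with the classical Ghyben-Herzberg relation, which, from the density contrast between fresh water and seawater, approximates the depth of the freshwater-saltwater interface below sea level as roughly 40 times the height of the water table above sea level. A minimal sketch (the water-table height is a hypothetical value chosen for illustration):

```python
# Ghyben-Herzberg approximation: z = rho_f / (rho_s - rho_f) * h  ~=  40 * h,
# where h is the water-table height above sea level and z is the depth of the
# fresh/salt interface below sea level.
rho_f = 1.000   # g/cm^3, freshwater density
rho_s = 1.025   # g/cm^3, typical seawater density
h_ft = 0.125    # ft (~1.5 inches), hypothetical water-table height

z_ft = rho_f / (rho_s - rho_f) * h_ft
print(f"Interface depth ~ {z_ft:.1f} ft below sea level for h = {h_ft} ft")
```

A water table only about 1.5 inches above sea level is enough to hold the interface near the five-foot depth mentioned above; by the same 40:1 ratio, each inch of water-table decline (or of effective sea level rise) thins the lens by more than three feet.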
Students in the primary grades often struggle with CVC words, and this packet contains a wide variety of activities that are real favorites in my classroom. I have used only twenty-five words so that students can work with them repeatedly and build mastery. There is a wide variety of literacy stations, games and activities that can be done during small group instruction. Also included in this packet is a variety of books that can be created, in which these words have been contextualized. It is Common Core aligned and reflects crucial foundation skills necessary for young children to learn to be successful readers. I even included headbands that your children can wear proudly once they have mastered short vowel skills. These activities are ideal for intervention programs and special education. They are engaging and interactive. What are some goodies you might find?
CVC Blending Mats
Differentiation Sticks (student/teacher)
Game: I Have... Who Has?
Cut and Paste Activities With Sentences
and lots more...
Insect-resistant transgenic plants are grown in many countries worldwide. They produce insecticidal proteins that protect them against the main insect pests. But what about beneficial species? So-called Bt plants produce one or more proteins from the bacterium Bacillus thuringiensis (Bt) that provide protection against being eaten by caterpillars (e.g. the European corn borer) or beetle larvae (corn rootworms). Since the start of the cultivation of Bt plants in 1996, Agroscope has been researching the potential risks of Bt plants for beneficial arthropods. Our work focuses on predators and parasitoids, which contribute to biological pest control and are thus crucial for sustainable agriculture. For risk assessment, it is important to know how much of the protein is ingested by a beneficial species (exposure), and how sensitive the species is to the protein (hazard). In addition to potential direct effects caused by ingestion of the Bt protein, indirect effects may emerge in the food chain owing to changes in the nutritional quality of sub-lethally damaged prey species. For over twenty years, Agroscope researchers have been studying and evaluating the possible effects of genetically modified Bt crops on biodiversity, and especially on beneficials that play a role in the biological control of pests. In a review article published in 2018, co-authored with American colleagues, Agroscope researchers point out that today’s Bt plants do not endanger beneficials. Rather, they actually support them in those cases where they help to reduce the use of chemical synthetic insecticides.
TVOKids Teacher Power Hour
Shadows, Science & Symmetry | Primary Lesson | TVOkids
Join teacher Joel to explore scientific thinking about light and shadows! Scientists always start with what they know! Let's make a KWL chart to record our findings. First we model the K: what do you already know? Fill that in. Next, scientists ask QUESTIONS! They use wonders to discover new things. Let's make a list of questions or wonders you have about light and shadows. Record your findings - observations - on your chart! Then create a symmetry shadow art piece.
Eyebrows are crucial to human evolution, scientists have discovered, because they are how our predecessors first learnt to communicate. Dr Penny Spikins, one of the researchers from the University of York, said: “While our sister species the Neanderthals were dying out, we were rapidly colonising the globe and surviving in extreme environments. “This had a lot to do with our ability to create large social networks – we know, for example, that prehistoric modern humans avoided inbreeding and went to stay with friends in distant locations during hard times. “Eyebrows are the missing part of the puzzle of how modern humans managed to get on so much better with each other than other now-extinct hominins.” Modern mobile eyebrows inherited their social function from the prominent thick brow ridges that were a key feature of primitive human ancestors, the scientists believe. After conducting a new study of the skull of Homo heidelbergensis, an ancient African hominin dating back 124,000–300,000 years, the researchers discounted previous theories about the origin of brow ridges. Earlier research had suggested the heavy brows may have been needed to protect the skull when chewing, or were a structural feature arising from the conjunction of a flat brain case and large eye sockets. Instead, it was much more likely that the purpose of brow ridges was social, said the scientists, who created a 3D computer simulation of a skull from Zambia known as Kabwe 1, housed at London’s Natural History Museum. Huge, jutting brow ridges may have signalled dominance and aggression, according to the findings reported in the journal Nature Ecology & Evolution. In a similar way, dominant male mandrills, the world’s largest monkey species, sport brightly coloured muzzle swellings to display their status. The bones underlying mandrill swellings are pitted in the same way as the brow ridges of ancient hominins, the researchers point out. As human faces became smaller and smoother over the course of 100,000 years, jutting brow ridges gave way to eyebrows capable of more subtle emotional displays, it is claimed. Dr Spikins added: “Eyebrow movements allow us to express complex emotions as well as perceive the emotions of others. “A rapid ‘eyebrow flash’ is a cross-cultural sign of recognition and openness to social interaction, and pulling our eyebrows up at the middle is an expression of sympathy. “Tiny movements of the eyebrows are also a key component to identifying trustworthiness and deception. “On the flip side, it has been shown that people who have had botox, which limits eyebrow movement, are less able to empathise and identify with the emotions of others.”
A network is a group of people interacting and sharing information with each other. In the same way, a computer network is a group of computers that interact and share information with each other, connected by communication links such as wires, cables or wireless media. It is all about transmitting messages from sender to receiver and vice-versa. A computer network can be wired or wireless.
Mediums to form a wired network are:
i) Twisted Pair Wires
ii) Copper Wire or Coaxial Cable
iii) Fibre-Optic Cables
Mediums to form a wireless network are:
i) Radio Waves
ii) Terrestrial Communication
iii) Communication Satellites
i) Personal Area Network (PAN): A Personal Area Network (PAN) is a computer network that is used to transfer messages between a computer and other technological devices that belong to one person only. Examples of such devices are a laptop or desktop, a fax machine, a Personal Digital Assistant (PDA) or palmtop computer, a scanner, a printer, etc. The range of a PAN is around 10 metres.
ii) Local Area Network (LAN): A Local Area Network (LAN) is a computer network that is set up within a home, college, institution or office and the nearby buildings. The connected computers are called the nodes of this computer network. A wired LAN can be formed using Ethernet cables; the range of a LAN is about 100-150 metres. A wireless LAN can be formed using the Wi-Fi capability of your computers.
iii) Metropolitan Area Network (MAN): A Metropolitan Area Network (MAN) is a computer network that is set up for a whole city or a very large area. A MAN can cover a region of almost 30-40 kilometres.
iv) Wide Area Network (WAN): A Wide Area Network (WAN) is a computer network that is set up across a very large area, such as two or more cities, a country or a continent. The best example of a WAN is the Internet.
Now, having defined these network categories, we should move on to the Internet and look into some of its aspects.
Ques 1. What is the Internet?
The Internet is a collection of computers, i.e. it is a network of networks in which millions of computers are connected and interact with each other. People connect to the Internet from all over the world through different devices such as desktops, laptops, smartphones and tablets. In networking, all these devices are called hosts, end systems or nodes. According to a survey conducted in 2011, more than 2 billion hosts were connected to the Internet.
Ques 2. How do different end users communicate with each other over computer networks?
Before answering this directly, consider how you would ask the time from a stranger. You first say "Hi" (a greeting) to initiate the communication. The stranger may reply with "Hi" (a message you take as an indication that the receiver is willing to communicate, so you can proceed to ask the time). Other replies, such as "Don't bother me" or "I don't understand your language", indicate unwillingness to communicate, so you would not ask the time. These are human protocols: before initiating, you say "Hi" or "Hello" (or at least mind your manners), and the receiver's reply tells you whether you should proceed. In computer networks, the whole procedure is governed by similar sets of rules, which are called protocols.
Ques 3. What are the different ways of sending data?
In computer networks, the message or data is called a packet. Packets can be sent in 2 ways.
Circuit switching and packet switching.
Ques 4. What are protocols?
A protocol defines the type, format and order of the messages exchanged between the sender and the receiver nodes, and the actions to be taken on the transmission or the receipt of a message.
Ques 5. What are the problems that occur after setting up a network?
There are certain problems we have to take into account to ensure the proper transmission of a message from sender to receiver: noise, outside interference and various delays during transmission. The main delays are Transmission Delay, Queuing Delay, Processing Delay and Propagation Delay.
Ques 6. How many protocol layering models are there?
There are basically two models of protocol layering. One is the Open Systems Interconnection (OSI) model and the other is the TCP/IP model. In the OSI model there are 7 layers, while in the TCP/IP model there are 5 layers of protocols.
Ques 7. Is there any threat to our networks or our hosts?
Yes, there are lots of threats involved when you are connected to the Internet. There are two types of attacks: active and passive attacks. These attacks can include denial-of-service attacks, virus attacks, infrastructure attacks, etc.
This was all from us on Computer Networks. Do you have something to share with our readers on Computer Networking?
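As a follow-up to Ques 5, the four delays listed there add up to the total nodal delay a packet experiences at each hop. A minimal sketch with assumed link parameters (all values illustrative):

```python
# Total nodal delay = processing + queuing + transmission + propagation.
packet_bits = 12_000       # a 1500-byte packet
link_rate_bps = 10e6       # assumed 10 Mb/s link
distance_m = 1_000_000     # assumed 1000 km link
prop_speed_mps = 2e8       # typical signal speed in copper/fiber (~2/3 of c)

d_proc = 50e-6             # assumed processing delay
d_queue = 1e-3             # assumed queuing delay (varies with congestion)
d_trans = packet_bits / link_rate_bps    # transmission delay, L / R
d_prop = distance_m / prop_speed_mps     # propagation delay, d / s

total = d_proc + d_queue + d_trans + d_prop
print(f"transmission={d_trans*1e3:.2f} ms, propagation={d_prop*1e3:.2f} ms, "
      f"total~{total*1e3:.2f} ms")
```

Note that transmission delay scales with link rate while propagation delay scales with distance; on long paths the two can differ by orders of magnitude, which is why both terms are tracked separately.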
Bacteria are not as simple as most of us think. While the widespread definition of a bacterium is simply a single-celled organism without any specialized sub-cellular compartments called organelles, such a blanket statement fails to take into account the discoveries of many types of bacterial organelles. Examples include the anammoxosome, a large compartment that allows the bacterium to use ammonium for energy, and magnetosomes, which house magnetic material. Carly Grant, a graduate student in Arash Komeili’s lab at UC Berkeley, has been studying a brand-new organelle coined the ferrosome. A ferrosome is a small organelle, only 30 to 50 nanometers in diameter, made up of iron, phosphorus, and oxygen surrounded by a membrane. Ferrosomes appear as dark speckles throughout the cell. They were first discovered while researching magnetosomes in Desulfovibrio magneticus. The key to seeing ferrosome formation involves depriving the cells of iron, an important nutrient. “When we were watching them transition from low to high iron conditions, we observed that they fill up with these very uniform electron dense granules,” says Grant. Since then, Grant has observed several species of bacteria that are capable of producing ferrosomes. To study their composition, she separated ferrosomes from bacterial cells and analyzed the proteins associated with them. She found three proteins that were highly represented and used them to identify the genes that are necessary to produce ferrosomes. “When you delete those genes in Desulfovibrio magneticus, and other phylogenetically diverse bacteria … such as Rhodopseudomonas palustris and Shewanella putrefaciens, the cells no longer make ferrosomes,” says Grant. The exact function of ferrosomes remains unclear, but their presence throughout the bacterial world indicates they have an important role to play.
Hayley McCausland is a graduate student in molecular cell biology
Design Credit: Amanda Bischoff
Our top tips for supporting reading at home this Christmas break
Supporting reading at home doesn’t need to be a chore; it will keep your child on track and set them on the course for success. Done correctly, it can also become something they find fun and develop a love of!
Checking their comprehension
Chat about the book: keep it casual – ask questions like, “Who’s the main character?”, “Would you want to be friends with them?”, “What do you think will happen next?”, and “Where is the book set? Is it a real or fictional place?”
Checking their vocabulary
Find other uses for words: if they come across an uncommon word, but don’t struggle with it, explore how they might use that word in a different sentence; “What does that mean? When else could you use that word?”
Break down new words: model sounding-out words by syllable and letter-sounds.
Be consistent with praise: when your child comes across a new word, finds it tricky, but works it out, make sure you praise them for it!
Making reading enjoyable
Use the five finger rule: when choosing a reading book, make sure you find one that isn’t too easy or too challenging for them. If your child makes five or more mistakes when reading a single page, the book’s probably a bit difficult for them.
Try audiobooks: if your child is really resistant to reading, try listening to audiobooks together in the car, instead of watching TV, or at bedtime. Most libraries have audio versions of popular children’s books that you can borrow for free.
Find books that have been adapted into films or TV shows: if your child has already seen the film or TV series, encourage them to read the book. Or, if they’re enjoying the book and haven’t seen the adaptation, you can watch it together and discuss if you think it’s been adapted well; “Is that how you imagined that character?”, “Is that what their room is like in the book?”
When they get tired or distracted, stop: the most important thing is for them to enjoy reading and for it not to feel like a chore or punishment.
We hope this gives you some ideas for supporting reading at home. For further ideas, check out our ‘at-home activities’ resources on our website.
Complex post-traumatic stress disorder (CPTSD) occurs with repeated, ongoing exposure to traumatic events. Often CPTSD is a result of early traumatic relationships with caregivers. In this article we consider the effects of early traumatic relationships on learning. Many children with a history of trauma have trouble with learning in the classroom and do not perform as well as their peers. The connection between early interpersonal trauma and learning is particularly relevant when considering the ability to maintain attention and concentration. Often, early traumatic relationships impair more than emotion regulation abilities. Cognitive capacities are also deeply affected, since the ability to focus and concentrate is largely dependent upon emotion regulation.
Early attachment relationships and learning
Early relationships have a direct impact on cognitive, social and emotional development. This is because an infant or child who is raised in a safe and supportive environment has ample opportunity for exploration as well as the availability of comfort from a trusted caregiver. One of the ways infants learn is through play and exploration of their environment. When thinking about this stage of development, it is crucial to understand that an infant’s biological system is not mature enough to calm itself in times of fear or upset. This is why young children and infants reach for a trusted adult when they feel fear or uncertainty. In a secure relationship, opportunities abound for curiosity and exploration. At the same time, the infant is protected from unhealthy levels of stress; when he or she needs comfort, it is available. Attachment researchers call this phenomenon a “secure base,” in which the caregiver encourages the child to play and explore while providing safety and security for the infant when needed. Exploratory play coupled with protection provides an optimal environment for learning. Researchers have noted that traumatized infants tend to spend less time in exploratory play (Hoffman, Marvin, Cooper & Powell, 2006). Let’s imagine a young child in a playground. She is less than a year old and not quite walking on her own yet. With mom nearby she can explore, perhaps by playing in the sandbox and learning how her toy car moves differently over sand in comparison to the kitchen floor at home. She is learning important information about the world. While she plays, she keeps an eye on mom, making sure she is near. If anything happens to cause fear, perhaps a big dog straying onto the playground, a predictable scenario plays out. The child begins to cry, afraid of the dog. Mom is here to help. She picks up her infant and soothes her distress, walks away from the animal, and relatively soon the infant is calm again. In a traumatic relationship, mom may not recognize she needs to help her child. She may not be afraid of dogs and may not understand the infant’s reaction. She may decide to let the infant learn about dogs without her help. Perhaps the child gets bitten by the dog, or is allowed to scream frantically while the big, unfamiliar animal investigates her, and still mom does not react in an appropriate calming way. She may let her child learn the dog is safe (or not safe) without getting involved. Alternatively, she may escalate the situation with her own fear of dogs and scare the child even more. In terms of emotional and cognitive development, these two infants are dealing with very different internal and external environments.
Internally, the traumatized infant’s developing nervous system is exposed to ongoing heightened levels of stress hormones that circulate through the developing brain and nervous system. Since the infant is left on her own to recover from a traumatic event, all of her resources are required to bring herself back to a state of balance. Researchers in the field of neuropsychology have pointed out that when an infant is required to manage its own stress without help, it can do nothing else (Schore, 2001). All energies are dedicated to calming the brain and body from significant stress. In this situation, valuable opportunities for social and cognitive learning are lost. It is important to understand that all parents at some time fail to soothe their child when he or she is distressed. Healthy children do not require perfect parenting; it is continued, ongoing trauma that is detrimental to development.
Hypervigilance — The impact of early traumatic relationships in the classroom
Children raised in violent or emotionally traumatic households often develop hypervigilance to environmental cues. More than just a “common sense” response to an abusive environment, hypervigilance occurs because of the way the nervous system has organized itself in response to persistent fear and anxiety during the earliest years of development (Creeden, 2004). Hypervigilance to others’ emotional cues is adaptive when living in a threatening environment. However, hypervigilance becomes maladaptive in the classroom and impedes the child’s ability to pay attention to school work. For the traumatized child, school work may be thought of as irrelevant in an environment that requires attention dedicated to physical and emotional protection of self (Creeden, 2004). Imagine a time when you were very upset or unsure of your physical or emotional safety. Perhaps an important relationship is threatened after a particularly heated argument and you feel at a loss as to how to fix it. Imagine you have had a violent encounter with a parent, or are dealing with sexual abuse at home. Now imagine, in this situation, trying to focus your attention on the conjugation of verbs, or long division. It is likely you would find this impossible.
What can be done?
It’s important that we understand the roots of learning and behavioral difficulties in the classroom so we can address them with therapy rather than prescribing medications (Streeck-Fischer & van der Kolk, 2000). Some children who cannot focus in the classroom may be wrongly diagnosed and never offered the help they need. There are effective ways to help children with past trauma in their learning environments. Adults need to understand that for a traumatized child, challenging behaviors are rooted in extreme stress, inability to manage emotion, and inadequate problem solving skills (Henry et al., 2007). In these circumstances, the child will likely respond more positively to a non-threatening learning environment. Children with traumatic histories need opportunities to build trust and practice focusing their attention on learning rather than survival. A supportive environment will allow for safe exploration of the physical and emotional environment. This strategy applies to children of various ages. Older children also need to feel safe in the classroom and when working with adults such as teachers and other professionals. Frustrated teachers may believe children with challenging behaviors are hopeless and just not interested in learning.
The teacher may insult the child, respond with sarcasm, or just give up on the child. Teachers may fail to protect the child from teasing or ridicule by their peers. In this way, the teacher is also contributing to the threatening environment the child has come to expect.

New understanding, new opportunities

A shift in understanding is required for teachers and other professionals working with traumatized children in the classroom. Supportive environments can give these children a chance to modify their behavior and develop coping skills. This change in adults' perception of why the child is unable to focus on schoolwork will hopefully lead to a change in attitude. Even more importantly, children with trauma in their early history are in need of therapy and support. With understanding and appropriate therapeutic intervention, these children will have a much better chance at healing past trauma and developing the ability to focus, learn in the classroom, and respond differently to challenging situations.

References

Baker, L. L., & Jaffe, P. G. (2007). Woman abuse affects our children: An educator's guide. Developed by the English language Expert Panel for Educators, Ontario.

Creeden, K. (2004). The neurodevelopmental impact of early trauma and insecure attachment: Re-thinking our understanding and treatment of sexual behavior problems. Sexual Addiction & Compulsivity, 11, 223-247.

Henry, J., Sloane, M., & Black-Pond, C. (2007). Neurobiology and neurodevelopmental impact of childhood traumatic stress and prenatal alcohol exposure. Language, Speech and Hearing Services in Schools, 38, 99-108.

Hoffman, K. T., Marvin, R. S., Cooper, G., & Powell, B. (2006). Changing toddlers' and preschoolers' attachment classifications: The Circle of Security Intervention. Journal of Consulting and Clinical Psychology, 74(6), 1017-1026.

Schore, A. N. (2001). The effects of early relational trauma on right brain development, affect regulation, and infant mental health. Infant Mental Health Journal, 22(1-2), 201-269.

Streeck-Fischer, A., & van der Kolk, B. A. (2000). Down will come baby, cradle and all: Diagnostic and therapeutic implications of chronic trauma on child development. Australian and New Zealand Journal of Psychiatry, 34(6), 903-918.
What to know about microcephaly

Abnormal brain development frequently accompanies microcephaly. The condition can often occur alongside other major birth defects. Microcephaly can, however, be the only abnormality present. The condition occurs in between 2 and 12 in every 10,000 live births each year in the United States. In this article, we explore the causes, diagnosis, and treatment of microcephaly.

Microcephaly is a condition in which an infant's head, and often the brain, is smaller than expected. A range of genetic conditions, infections, and diseases can cause it. The cause of microcephaly is not always clear. The condition may be present at birth or develop in the first few years of life. However, certain conditions might have a relationship with its development. Conditions that increase the risk of developing microcephaly include:

- genetic or chromosomal abnormalities, such as Down syndrome
- infections during pregnancy, such as rubella, toxoplasmosis, cytomegalovirus, chickenpox, and possibly the Zika virus
- severe malnutrition
- craniosynostosis, or premature fusing of the skull suture line
- cerebral anoxia, a condition involving a decrease in oxygen delivery to the brain of a fetus
- uncontrolled maternal phenylketonuria (PKU), an inherited condition that restricts the body's ability to break down the amino acid phenylalanine

Environmental factors can also increase the risk of microcephaly. If a fetus is exposed to illicit drugs, alcohol, or toxins while in the womb, the risk of the infant developing a brain abnormality is higher.

While the defining feature of microcephaly is decreased head circumference, the condition has other effects on health that can limit quality of life and impair development. The effects of microcephaly on development can range from mild to severe, and might include:

- delayed development, such as learning to speak, stand, sit, or walk at a later age than other children at a similar stage
- learning difficulties
- movement and balance issues
- a high-pitched cry
- issues with feeding, such as dysphagia, or difficulty swallowing
- hearing loss
- reduced vision from lesions on the retina, the area at the back of the eye
- distorted facial features and expressions
- short stature

In severe cases, microcephaly may be life-threatening.

Occasionally, a doctor may detect the presence of microcephaly on a second- or third-trimester ultrasound and diagnose the anomaly before the birth of the infant. For a child to receive a diagnosis of microcephaly after birth, they will undergo an in-depth examination process. The diagnostic process for microcephaly can include:

- a physical exam, including an evaluation of head circumference
- taking a family history and evaluating the head sizes of the parents
- charting head growth over time

Once a doctor diagnoses microcephaly, the healthcare team may also use CT or MRI scans and blood tests to evaluate the severity and cause of the microcephaly, as well as any other associated conditions. Some of these tests might also provide information about the presence of an infection in utero that may have caused structural brain changes.

No treatment or cure is currently available for microcephaly. Instead, treatment focuses on managing the condition and relieving linked health problems, such as seizures. If an ongoing process is contributing to the microcephaly, such as malnutrition, healthcare professionals will also address this. Infants with mild microcephaly typically only require routine check-ups.
However, those with a more severe form of the condition may require early childhood intervention programs to strengthen their physical and intellectual capabilities. These programs will often include speech, physical, and occupational therapies.

A condition called craniosynostosis can cause microcephaly. In cases of craniosynostosis, the joints between the bones of an infant's skull fuse together prematurely, preventing the brain from growing fully. However, this form is typically treatable with surgery that helps reshape the skull.

While treatment options manage rather than cure microcephaly, some people with the condition have normal cognitive function and a head that continues to grow over time, though it remains smaller than typical growth curves.

However, people with microcephaly as a result of Zika often have a more severe presentation that might even require intensive care on a lifelong basis.

Speak with your healthcare provider about the personal risks of having a child with microcephaly and the steps you can take to lower that risk. In any pregnancy, reducing the risk of complications by avoiding alcohol, drugs, and other toxins is important. Chickenpox, rubella, cytomegalovirus, and toxoplasmosis have links to the condition, so take preventive measures against these diseases.

Possible connection between microcephaly and Zika virus

The Zika virus has links to microcephaly. Due to recent concerns over the risk of microcephaly from Zika virus, the Centers for Disease Control and Prevention (CDC) recommend that women who are pregnant avoid traveling to regions in which the disease has a presence. The CDC maintains a full, up-to-date list of countries that it cites as presenting a risk of Zika.

Dr. Mark DeFrancesco, President of the American College of Obstetricians and Gynecologists (ACOG), advised the following in a statement supporting the travel guidelines set in place by the CDC: "Travel to regions with ongoing Zika virus outbreaks is not recommended for women who are pregnant or women who are considering pregnancy."

Information on Zika is developing and changing fairly rapidly; ACOG publishes its most recent statements and recommendations online, and the CDC's current travel recommendations appear on its travel health notices web page.
This article examines British writing about the 1876-8 famine in southern and western India. In British newspapers and journals, the turn to thinking about famine in terms of the total population obscured the extreme variations in food access that worsened with rising economic inequality. When the British press in the late 1870s turned to human causes of famine, they either argued that India's population overburdened India's land, or suggested that more rail construction would prevent deaths sufficiently to mitigate British responsibility for famine conditions. The turn to population-based arguments helped either to perpetuate the belief that famine was a quasi-natural part of India or to parse the sudden increase in the frequency and severity of famines in India under British rule.

In 1876-8, somewhere between six and eleven million people died in southern and western India of starvation and other famine-related conditions. After the failed monsoon in the summer of 1876, grain prices skyrocketed in and around the Deccan plateau. Peasant cultivators in the Deccan, who had already been deeply in debt before drought set in, sold cattle, farm tools, and sometimes land in order to procure food. The situation was even worse for the landless agricultural laborers who were thrown out of work when harvests did not materialize. Drought spread in the summer of 1877, extending the crop failures across southern India into the northwestern provinces and Punjab. The effects of the first year of drought had made the consequences of the second still worse: small cultivators no longer had the tools or cattle to grow even a meager harvest with the scant rain that fell. Grain prices rose again. Still more small cultivators found themselves unable either to grow or to purchase food. By the late summer months of 1877, millions of people, especially those from the lower castes, were dying.

British land policies that vastly exacerbated peasant indebtedness were chief culprits in turning two years of drought into famine. Much of the region's agricultural production had, by the 1870s, been converted to cash crops; when prices for a crop dropped, many small cultivators lost their incomes. The crash in cotton prices had proven especially disastrous for Indian farmers. During the years of the American Civil War, cotton production had expanded vastly in the Deccan. After the war ended, though, cotton prices fell dramatically as English textile manufacturers purchased less Deccan cotton, favoring the American cotton that had reentered the global market. Nor was it easy to convert cotton fields into ones that might grow food. Without income, farmers could not procure the supplies that would have been needed to undertake such a conversion. This economic precarity took a substantial toll on the health of small cultivators who were living near subsistence levels even before the drought. In the years before the famine set in, as Leela Sami observes, "it is probable that the large numbers of petty tenants, sharecroppers and artisans lived on the brink of hunger" (2597). Already coping with chronic hunger, these tenants, sharecroppers, and artisans were especially vulnerable to additional enfeeblement and disease when food became still more inaccessible during the dry summer months of 1876. At the root of much of rural indebtedness was the urgency of paying the hefty annual land revenue assessment, which came due even if fields were fallow or if crops failed.
By 1875, the debt crisis in the poorer parts of the Deccan was so dire that cultivators in a number of areas—Pune and Ahmednagar districts most famously—rioted against local moneylenders after the latter refused to lend money to peasants who needed it to pay their land taxes. Land in most of the famine districts was held under a raiyatwari system in which land revenue taxes were to be paid directly by the occupiers of the land (who might or might not be the same as the people doing the physical work of cultivating it). British surveyors were to conduct surveys every thirty years in order to assess land revenue rates. While, by the 1870s, there was, according to H. Fukazawa, "no rule" that determined assessment rates across regions (185), they were so extortionate as to be unmanageable for many small raiyats. The raiyatwari system seemed to promise greater independence for peasant cultivators than the system of zamindari landlords that prevailed in those parts of British India under the Permanent Settlement. But having to be directly responsible for heavy land revenue taxes nonetheless caused substantial difficulties for many raiyats, especially because failure to pay the tax meant that they could be evicted and forced to forfeit their land rights. As a result, many cultivators turned to village moneylenders in order to make their land revenue payments. Most commonly, peasants would borrow from local moneylenders who would mortgage the debtor's lands under terms that allowed the moneylender to retain control over the land and its produce without taking on the burdens of cultivating it.

Nascent peasant movements in the region cited these patterns of indebtedness as among their key concerns. The anti-caste activist Jotirao Phule, for instance, identified how high-caste moneylenders, the British courts, and excessive tax assessments impoverished and imperiled low-caste shudra peasants. Written in 1883, Phule's pamphlet, entitled Cultivator's Whipcord, notes:

Our cunning government, through its brahman employees, has carried out surveys every thirty years and have [sic] established levies and taxes as they willed, and the farmer, losing his courage, has not properly tilled his lands, and therefore millions of farmers have not been able to feed themselves or cover themselves. As the farmers weakened further because of this, they started dying by the thousands in epidemics. There was drought to add to the misery, and thousands of farmers died of starvation. (167)

Phule does not discount the effects of the drought, but his analysis does not accord it primacy over the burdensome land tax, nor over the Brahmin moneylenders whose interests the British courts systematically preferred to those of the small cultivators. That said, when crops failed as a result of drought, many more people found themselves unable to pay their land taxes; indeed, indebtedness increased vastly in famine years. This attention to long-term economic structures contrasts sharply with British administrative writing, which tended to respond to disasters such as the famine as, in Upamanyu Pablo Mukherjee's words, "governance glitches" (43)—momentary problems that need not require the undoing or radical overhaul of economic and governmental relations. As Mukherjee notes, even those writers who were outraged by the longstanding economic relations that produced famine policy chose to focus on the most arrestingly current images of suffering as sources of famine iconography.
In consequence, English-language famine writing in the 1870s predominantly understands famines as exceptional, contained events. Although the debates over railway construction in particular did voice attentiveness (however self-interested and misplaced) to longer-term conditions of British rule, they did little to interrupt the view of famine as a periodic crisis. Across the political spectrum, this mode of representing famine persisted even though, as Amrita Rangasami maintains, "the sudden collapse into starvation" in which "the stigmata of starvation become visual" ought to be read as only the last step of a much longer interweaving of dire political-economic conditions (1800). All too commonly, even those depictions of famine that highlight very real immediate suffering nonetheless present famine as a wholly exceptional state, generally ignoring or undercutting the continuities between famine and the poverty and privation of non-famine times.

In articulating the view that famine marks, in Rangasami's phrase, a "sudden collapse" (1800), nineteenth-century writers regularly cited a temporality that Thomas Malthus had schematized in his 1798 Essay on Population. The Essay, widely cited and reprinted throughout the nineteenth century, famously predicted that famine was a certainty in the absence of "preventive checks" that would keep population growth low by what he believed to be less violent means (Malthus 28). To make his case for the gradual reduction of population by a reduction in the birth rate, Malthus relied on the specter of sudden, apocalyptic suffering in the form of famine, plague, and war. In doing so, the Essay left undertheorized the question of why populations might die in sudden catastrophic events and not in gradual declines.

While the political economy of the 1870s saw changes to Malthusian theories of population, land, and value, a brute Malthusianism nonetheless inheres in many British analyses of famine through the 1870s. Arguing that southern India's population had reached the maximum threshold that the land could sustain, Viceroy Lord Lytton applied Malthusian principles, in Srinivas Ambirajan's words, "rigidly to the Indian economy" (7). Somewhat ironically (given, for instance, Malthus's support for the protectionist Corn Laws), Lytton was able to deploy these arguments in order to license a stern laissez-faire philosophy under which, even amid dire local food scarcity, the Government of India refused to intervene in the grain markets. In August of 1877, as it was apparent that the monsoon had failed for a second year in a row, Lytton reaffirmed his position on free trade as a basis for justifying his decision not to import grain:

Free and abundant private trade cannot co-exist with Government importation. Absolute non-interference with the operations of private commercial enterprise must be the foundation of our present famine policy. . . I am confident that more food, whether from abroad or elsewhere, will reach Madras, if we leave private enterprise to itself, than if we paralyse it by Government competition. (112)

Lytton's confidence in the self-regulation of the market led to the prioritizing of the means for encouraging private trade—the building of railways, for instance, that would assist export while providing returns for investors in Britain who backed the rail expansion projects. Relief policies, in turn, were deliberately designed to be horrific.
For those people who managed to meet onerous eligibility requirements, the proffered "relief" provided only a wildly inadequate amount of food in return for exceedingly heavy labor. As critics of famine policy made clear, the famine caused genocidal atrocities in the form of skeletonized bodies, scenes of people dying in front of grain depots or en route to cholera-ridden relief camps, feats of hard labor in which corpse-like men and children constructed railroads and canals in exchange for an utterly insufficient modicum of grain. Millions of the people who had not already starved to death were so enfeebled that they died of malaria and cholera.

In setting forth this policy, Lytton was drawing on beliefs about population and infrastructure that were widely held in Britain. When the British press in the late 1870s turned to human causes of famine, they either argued that India's population overburdened India's land, or suggested that more rail construction would prevent deaths sufficiently to mitigate British responsibility for famine conditions. The turn to population-based arguments helped either to perpetuate the belief that famine was a quasi-natural part of India or to parse the sudden increase in the frequency and severity of famines in India under British rule. A number of late nineteenth-century critics make the point that British tax policy, for instance, was instrumental in causing the huge increase in famines after the East India Company and then the Raj assumed control of India. The economic historian Romesh Chunder Dutt, for instance, points out that the East India Company continued to exact a heavy land tax through the 1769-1773 famine that killed a large percentage of the population of Bengal (Economic 52–3). To be sure, India had experienced famine conditions prior to the arrival of the British. It is nonetheless salient that, as Amartya Sen notes, eighteenth-century Bengal had seen no real famine prior to the arrival of the East India Company ("Imperial Illusions" 28) and that severe famines would become such a regular occurrence in nineteenth-century British India.

Reading famine as a by-product of population or of missing railroads also helped to mask famine's wildly uneven effects within India. As Ajit Ghose argues, famines are exceptional events not because they are the necessarily apocalyptic results of a catastrophic weather event, but rather because they shift economic relations towards increasing inequality with extreme rapidity. While it is true that droughts leading to crop scarcity did occur regionally (though not nationally) in the famines between 1860 and 1910, focusing on drought and the availability of food in a given region thus misses crucial changes to social relations that render famine periods distinct from non-famine times. Drawing on and adapting Amartya Sen's now famous argument that famine occurs as a result of a failure of "exchange entitlements," Ghose maintains that in the last third of the nineteenth century India saw "sudden changes in the distribution of food (and, relatedly, income)" (370) that highlighted and worsened poverty and inequality (384). Landless people starved, Ghose points out, because crop failure meant that they had been thrown out of work; others, however, prospered greatly during famine years. In British newspapers and journals, the turn to thinking about famine in terms of the total population obscured the extreme variations in food access that worsened with rising economic inequality.
These famine writings in turn had an effect on famine relief policy. British famine journalism during the late 1870s illustrates how ideas about population and infrastructure could simultaneously shape a famine policy that was renowned for its brutality and elicit the sympathy of British readers, many of whom donated money to famine relief funds as they encountered the news of what was happening overseas. In Britain, the celebrity of Malthus's argument had occasioned the increasing cultural prevalence of population statistics, both through the expanding census apparatus and elsewhere. In spite of the prevalence of Malthusian thinking, though, 1870s writing on famine and population rescripts the plot that Malthus had famously laid out eighty years earlier in a number of key ways. Famine writing showed a keen interest not only in the Malthusian theory of a relationship between population and land productivity, but also in the infrastructures that made this productivity possible. The debates over rail and irrigation that emerged in famine discourse were thus integral to the arguments that were circulating about the effects of population.

The Argument that Overpopulation Causes Famine

If, in Malthus, famine occurs when the population becomes too burdensome for agricultural resources on a given bit of land, by the 1870s the rhetoric on famine was newly attentive to the movement and settlement of people in response to local economies. In the development of classical economic liberalism through the middle years of the century, writing on famine loses some of the localism for which Malthus had been criticized. David Ricardo's Principles of Political Economy had popularized the idea that more fertile tracts of land were more thickly populated because the best land tended to be cultivated first, with worse land only coming into cultivation as better or more accessible land became overburdened. Nor could a purely Malthusian theory of place and population hold sway in the new neoclassical marginalism developed in the 1870s by William Stanley Jevons and Alfred Marshall. When he set the examination questions for Cambridge's 1874 Moral Sciences Tripos, Jevons asked students to assess the ethics, economics, and politics of British policy in famine-stricken Bengal, reminding students to "take into account the assertion of some eminent authorities that the population of the famine districts is becoming excessive, and that a recurrence of such famines appears to be inevitable" (132). Like Malthus, Jevons argues that supposedly excessive population must necessarily produce periodic famine. That said, Jevons's economics were not fundamentally Malthusian; while Malthus concerned himself with production, Jevons focused nearly exclusively on demand and supply. His famous development of a theory of marginal utility cast utility as a measure of the scarcity of a commodity, but he remained uninterested in the scene of production relative to that scarcity. This comparative lack of attention to production entailed a necessary shift in emphasis away from the more strictly Malthusian concern with the carrying capacity of a piece of land. Nevertheless, by the end of the 1870s, these assessments found well-publicized antagonists. First, some writers made the point that England did not produce its own food and therefore that it might be preposterous to assess the famine as a result of India's inability to do so.
The effectiveness of these population-based arguments is especially striking given the continued prevalence of a conviction that population density also acted as both a sign of national prosperity and a sign of capitalist "development." These views enjoyed a stunning tenacity even in the face of equally widespread concerns over the ills of crowded cities and slums. At the level of the country and the city, that is, population density marked an accomplishment; in some British writing about colonization, the city was both the result of a "civilized" drive to capital accumulation and the condition under which this drive could best develop. "Uncivilized" dispositions, that is, could be both the cause and the effect of sparse urban development. Under the heading of "Population Density," the 1871 Census of England and Wales maintained that:

when many families, with many wants, are brought into communication with each other, and produce a great variety of interchangeable articles for which there is a general demand, they suffer less from privation; for the number of persons living on the soil is no longer limited by the minimum amount of the crops in each homestead. Instead of one savage to a square mile there may be five houses, as there may have been in the later Saxon times . . . In the reign of Queen Victoria the houses on the same area are on an average seventy-three, occupied by eighty-seven families and three hundred and ninety people. (Census of England and Wales for the Year 1871 General Report xxv)

While the tables that follow measure population density as persons to a square mile, the argument about communication, privation, and "interchangeable articles" suggests that what the census is interested in measuring is less the number of bodies occupying a tract of land and more the ease of exchange. Population density here indexes civilizational progress, assuming that greater exchange means less suffering. It can nonetheless be difficult to determine whether the Census wishes to see population density as a measure or a cause of the "communication" it prizes as a measure of civilizational development: being able to interchange articles means that more population density can occur, but this interchange also removes the limits to population density.

Nevertheless, these claims found their limit as, in particular, high population density was widely blamed for the 1873-4 famine in Bengal. During the Bengal famine, Lord George Hamilton, Under-Secretary of State for India, had observed that "the population of the district now affected by famine was probably the densest in the world . . . very nearly double the density of the population of the United Kingdom of Great Britain and Ireland" (Hansard's Parliamentary Debates 176). In writing about famine, the debates over population in Bengal had to contend with the sharp contrast between the catastrophe of 1876-8 and the comparatively attenuated suffering of the Bengal famine only a few years earlier. According to British records, twenty-three people died from famine-related conditions in Bengal in 1873-4. However much we might mistrust these official numbers, they nonetheless contrast sharply with the famine of 1876-8, in which millions of people lost their lives.
The lower death rate in the Bengal famine of 1874 had made it much harder to maintain that excessive population density caused millions of people to starve to death in the comparatively more sparsely populated areas around the Deccan plateau. Nor were such blanket statements about population density merely descriptive political-economic claims. The proposition that excessive population density might lead to famine offered commentators in the British press a way of skirting the possibility that famine had resulted from the appalling and cumulative effects of British rule. If anything, as The Economist opined in 1874, Britain should be concerned that it had encouraged Indians to be alive in the first place, not that it was letting people starve:

The effect of our rule in India has been to people the country now visited with famine much more densely than it was ever peopled before, much more densely than this country, or almost than any other part of the globe. . . . Having created this vast precarious population, we feel, as Englishmen and as Christians, that we are bound to keep them alive. But this is incredibly difficult. . . . The native rulers formerly did nothing; one effect of these terrible visitations was that the numbers of the people were kept near to a manageable limit. But we have to deal with far greater numbers, and we have said that we will not permit any of them to perish. As by the unintended effect of civilized Government we have given life to an immense number of human beings, we cannot, in common humanity, according to our notions of humanity, leave them to perish. ("Neglected Aspects of the Indian Famine" 378–9)

So keen is the author to deny that the restructuring of Indian economies that came with British rule—the drain of millions in revenue to Britain, the cultivation of export cash crops at the expense of grain, and the organization of the economy around the importation of goods from Manchester and other British industrial centers, for starters—had produced the famine, that British rule here appears almost sentimental and soft-hearted, allowing the kind of undue prosperity that it should know necessarily leads to excessive population. The author, of course, forgets his famine history in multiple registers: famines were gruesome in India under both the East India Company and the Raj, and rulers before the British were not categorically and casually indifferent to human life in the way this article implies. What stands out in this mess of historical revisionism, though, is that population density appears to be a result of purported British benevolence and not of some characterological phenomenon supposedly unique to "lower Asiatics": this writer finds it obvious that "civilized Government" has "given life," as though the birth rate of Indian children was entirely determined by British decisions.

The comparisons of the two famines might have led to a renunciation of Malthusian thinking. Nonetheless, as these arguments circulated, muddled efforts appeared that sought to retain Malthusian principle in the face of evidence that seemed to contradict it. The Times's correspondent in Madras offers a case in point in a dispatch that was reprinted in major English newspapers:

A great deal has been said about over-population in India and the Malthusian doctrine of the necessity of periodical famine and pestilence to remove the redundant people.
The remarks on this head would be pertinent to the subject if the famine had displayed itself in the most thickly populated districts of the country, but, as a matter of fact, the most thickly-populated districts have been able not only to grow food enough for their own necessities, but to export to places where there was scarcity.

Taking the export of grain as evidence that Bengal could sustain itself instantiates grave historical error on a number of fronts. Not only does it forget that Bengal could only weather the famine with imported grain from Burma (and thus did not produce its own necessities), it also makes a virtue of sending grain away from millions of starving people and leaves aside entirely the bitter irony that this much-needed food was shipped away through the railroads and canals that were built in the name of famine prevention. The clunky rhetorical turn of claiming that Malthus is not "pertinent" exists in order to avoid making the case that Malthus might be wrong. Instead of challenging the principles of Malthusian reasoning, the article suggests that the fact that Bengal suffered only a low death toll in 1874 is simply evidence that the dire Malthusian famine state had not yet been reached. In this way, in spite of its lip service to the irrelevance of Malthus, the dispatch sustains the belief that some relationship between population density and the fertility of the land determines the outcome of a famine.

Nor did those who abandoned Malthusian tenets necessarily relinquish all conviction in an intimate relationship between population, land, and famine conditions. Hamilton himself offers a case in point, performing what seems to be an about-face on the Malthusian reasoning of his 1874 remarks. When he took up talking about Indian population density again in 1877, Hamilton's articulation of the relationship between population density and land became much more Ricardian than Malthusian, doing away with the claim that famine results from the excessive burdening of the land by population. In a dinnertime speech, Hamilton chose to gloss population density not as a cause of or precursor to famine, as The Economist had suggested, but instead as quite the opposite:

It is in those parts of India where the annual rainfall is greatest and most regular that there is the greatest fertility and the most dense population. Or to put it in another way, the more dense the population of any locality is the less likely is that district to be visited by drought. ("Lord G. Hamilton, M.P., on the Indian Famine" 2)

Rather than a cause of drought or famine, that is, population density becomes an index of how improbable a drought or scarcity might be. While Hamilton insisted that both the Indian famines of the 1870s were "exceptional" in character, it is less than clear how his view of population sparseness as, in some sense, a predictor of "scarcity" was supposed to relate to his approval of a famine policy centered on work-based relief and on the notion that the English Treasury should invest only in remunerative infrastructure projects. On the evidence of his speech, one would conclude that India's "independence and local financial responsibility" were in graver danger than the millions of human beings who died.

Proposed Infrastructural Remedies

These concerns over inculcating "local financial responsibility" nonetheless did not prevent the British from advocating British-backed railway expansion as the most appropriate means of redressing famine conditions.
Thinking about missing infrastructure, that is, was not designed to challenge the effects of a laissez-faire political economy that sought to ensure grain revenues to European capitalists. Even most of those who preferred irrigation works typically skirted the economic arguments that would have challenged capital accumulation. The economist Srinivas Ambirajan observes that while there "were practical difficulties, no doubt," the focus on the lack of transportation only hides the fact that "one cannot resist the conclusion that the officials as a class swore by the principles of non-interference in the grain market."

Most Britons in the late 1870s who took up the task of recommending infrastructural changes as a means of preventing famine cited the devastation of the 1866-7 Orissa famine, in which, by some estimates, approximately a third of the population died. In most histories of nineteenth-century Orissa, the aftermath of the famine marks a shift through which the province's economic circuits became less local. "The famine broke the isolation of Orissa," the historian Ganeswar Nayak reports (346). Colonial officials, English-language journalists, and British political economists agreed that whatever famine relief the British administration undertook should focus on building better transportation networks. Following the Famine Commission's advice, the colonial administration began building new canals, roads, and railways in Orissa beginning in 1867. After the famine, Orissa's mud and dust roads—impassable in rainy seasons—were replaced with "modern" thoroughfares, ostensibly designed to prevent the recurrence of famine conditions. Even Karl Marx concurred with other commentators in seeing missing "means of communication" as a culprit responsible for producing famine conditions. In London, at work on the manuscript of Capital I, Marx responded to the famine crisis by observing that:

In consequence of the great demand for cotton after 1861, the production of cotton, in some thickly populated districts of India, was extended at the expense of rice cultivation. In consequence there arose local famines, the defective means of communication not permitting the failure of rice in one district to be compensated by importation from another. (333)

Although this brief moment of attention to communication would prove important to Marx's development of a theory of population, Marx's interest in the analytic of "communication" disappears in the material that would become the later volumes of Capital. Instead, Marx shifts his analysis of the Orissa famine to emphasize the "unexampled export" of rice from Orissa to Australia, Madagascar, and elsewhere.

Already by the end of the 1870s, the argument that remedying "defective means of communication" would prevent famine no longer seemed entirely viable. In the middle of the great famine of 1876-8, British commentators did not, as a rule, identify a shortage of roads, canals, or railroads as being to blame for famine conditions, even though they continued to maintain that investing especially in rail would alleviate famine conditions in a partial and gradual way. The Orissa famine had occasioned a vision of a region cut off from access to food; in southern and western India in the late 1870s, the existence of roads and rail lines into the famine districts meant that geographic isolation did not appear as a cause in the way it had in relation to Orissa in the late 1860s.
Most of the British men who were arguing over famine prevention and relief in 1877 advocated rail construction that would provide "direct" financial benefit to Britain while maligning the supposed ineffectiveness and costliness of irrigation projects. Even when their panegyrics to rail were strongest, their writing on famine could fatalistically maintain that famine—purportedly inevitable in India—would at best be softened by British relief efforts, making the point that Britain need not feel obliged to build unremunerative infrastructure in India because doing so would do little to eliminate famine conditions. Member of the Viceroy's Council and the Indian Public Works Administration Andrew Clarke, for instance, voiced a version of this claim in resolving, "if we cannot abolish famine—an end I fear hopeless to be attained—to do at any rate what we can to limit its area, to localize its scourge, to mitigate its intensity" (Abstract of the Proceedings 11). While voicing sympathy for those suffering, Clarke's patience with and acceptance of the inevitability of famine conditions indicate a personal distance from the threat of famine that people at risk of starving or falling sick with cholera simply did not have. Waiting for railways magically to foster the capital that might lessen famine conditions only seems like a viable option to those who are not immediately staring down death and disease.

Unsurprisingly, the critique of views such as Clarke's among early Indian nationalists was strident, especially as nationalists tackled the British preference for railway over canal construction. That many of the people developing famine prevention and relief policy were also railway investors was not lost on Indian writers. Especially influential in their criticism of British policy were Dadabhai Naoroji, the economist who became the first Asian Member of Parliament in Britain, and Romesh Chunder Dutt, who administered famine relief in Bengal in the 1870s as one of the first Indians to enter the civil service. Both Naoroji and Dutt articulated how capital worked in empire as the source of Indian poverty and famine, though Naoroji especially imagined solutions to this problem not as the dismantling of capital as such but rather as the redirecting of capital back into India. Naoroji, whose fame as an economist stemmed from theorizing and quantifying the "drain" of capital from India to Europe, took a two-pronged approach to criticizing British rail projects, simultaneously noting that rail facilitated the removal of capital from India and that the "magic wheels" of trains did not actually make food or wealth come into being. Dutt, in turn, regarded the English preference for railways over canals as a "geographical mistake" (India 367) drawn from the fact, first, that England itself had little need for canals and, second, and more importantly, that "Englishmen had not appreciated the need for cheaper transit as well as for irrigation. They had not realized that securing crops in years of drought was of far greater importance in India than means of quick transit" (India 366). In this respect, Dutt turned back to a point that irrigation advocates had made at the height of the famine: for the majority of Indian people, inexpensive transit was far more important than speedy transit. In contrast, most British writers saw rail's rapidity as an unalloyed good. Moreover, for many rail advocates, the fact that railways promised capital returns to British investors was a benefit rather than a problem.
While most British administrators publicly deplored famine conditions and emphasized the need to redress suffering, many tended to couch their attention to famine in the logic of capitalist expansion. The British preference for railways over irrigation projects stemmed largely from the economic returns investors expected from rail. Clarke believed that, unlike slower canal transport, railways "taught the people the advantages of rapid locomotion" prized by traders and merchants (Abstract of the Proceedings 27), as though a supposed failure to appreciate trains somehow occasioned inadequate access to capital. Ignorance of the benefits of the capital that rail would fuel, Clarke suggested, was to blame for the desperate suffering of the famine years, as though admiring capital and benefiting from it surely went hand in hand. He fails, that is, to assess how it is that railways will benefit the people whose lands they crisscross.

Indeed, the actual uses of railways in famine prevention tell a quite different story from the one that Clarke envisages. Railroads did deliver grain into famine districts in the late 1870s. But the same railroads that had been designed to relieve famine only abetted the escalating prices that, in tandem with the drought, had caused famine conditions in the first place. One of the more vocal British opponents of laissez-faire famine relief, the one-time editor of the Madras Times William Digby, concedes that the "working power" of the railway lines "was exerted to its utmost" in accommodating grain traffic. Digby points out that even at the beginning of the famine:

the facilities for moving grain by the rail were rapidly raising prices everywhere . . . Grain was hurriedly withdrawn by rail and sea from the more remote districts, to their serious prejudice, and poured into central depots, but retail trade up-country was almost at a stand-still. Either prices were asked which were simply beyond the means of the multitude to pay, or shops remained entirely closed. (6–7)

Digby focuses on the intra-India effects of price-gouging made possible by the removal of grain from remoter regions. These circuits, though, were not purely regional. Grain scarcity did not stop Britain from importing 1,409,000 quarters of grain from India in 1877 alone (27), such that, as Mike Davis writes, "Londoners were in effect eating India's bread" (26–7). It is thus less than surprising that when pro-rail arguments appeared in the London Evening Standard, relieving famine slipped readily into profitability, as though the two ran together as smoothly as the fantasy of a rail system that would somehow magically even out access to food that would, equally magically, be necessarily adequate in supply. Alongside a piece focused more squarely on irrigation, the London Evening Standard ran an article arguing that railways prevent famine effectively because they "[send] the surplus from one part to meet the deficiency in others. The land communications were the great mainstay during the recent famine, and always showed that the railways system in India was profitable and must continue to expand." While what counts as "profit" is unclear in the sentence, the slippage between rail's uses in famine and more explicitly commercial meanings of "profit" is emblematic of the purportedly seamless conjoining of interests between British railway and manufacturing investors and Indian people starving in famine districts.
Some of the staunchest arguments for rail perversely recognized that starvation and famine occur as a result not of a shortage of food availability but of insufficient means to acquire it. This view was deeply tied to a desire to defend the conversion to cash crops such as jute, cotton, and opium that had, under British rule, depleted the percentage of India's land devoted to growing grain. Clarke was among the most florid in his arguments to this effect. "The fleecy capsules of the cotton plant, or the jeweled diapers of the poppy," Clarke opined in the Indian Legislative Council early in 1878, "feed their cultivators with no less certainty than the crops of the rice swamp, or the wheat field" (Abstract of the Proceedings 13). Clarke's claim that opium and cotton are as likely sources of food half-understands the point that food is bought, but also misrecognizes the conditions under which surplus-value is produced only for a select few off of the labor of the majority of people. This fact takes on a racialized character in a colonial situation in which British capitalists extract profit from India, what Naoroji described as an economic "drain."

In contrast, irrigation advocates highlighted that, unlike rail, canals assisted with the material practice of growing food: canals promised not only the means of cheap shipping from districts less frequently subject to periodic drought, but also irrigation of local crops. If rail's champions had sustained the view that transportation would enable a capital expansion that would supposedly benefit all, those who favored developing irrigation infrastructure instead turned to the belief that growing more food would at least alleviate famine conditions. Most Indian commentators voiced a definitive preference for irrigation over rail construction projects; among their British supporters, Arthur Cotton, the engineer who had designed major irrigation projects in India in the 1850s (most notably the anicuts along the Godavari and Krishna rivers), was especially vocal on this score. At the end of the day, Cotton came back to the point that water availability would produce "entire security of food for the people" (5) by enhancing agricultural production. Cotton understood famine as a problem of misplaced British priorities around the government of India but did not question the position that adequate access to water would prevent food shortage: his is not a theory of exchange entitlements. In Cotton's estimation, the India Office, the House of Commons, and most of the British press demonstrated an unconscionable refusal to contemplate the importance of building canals and other irrigation works (5-6; 13-4): "it does not take five minutes' investigation," Cotton opined, "to prove, indisputably, that the sole cause of the Famine is the refusal to execute the Works that will give us [use] of Water that is at our disposal" (4). Imagining a world in which northern Indian wheat would amply feed not only India but also England, Cotton did not challenge the view that Britain should continue to imagine itself the beneficiary of capitalist agriculture in India (5). But he did make the case that, unlike railways, canal irrigation offered greater benefits to poor Indian people, who stood to gain little from railway expansion.
Backed by John Bright, the free trade advocate and Radical, Cotton articulated the view that railways, financed by high taxes levied on Indians, were chiefly instruments of English military power and far too costly to support the transport of grain to poor Indian people. As Bright fumed, the question of the railways "is far more a question for the English, as a power in India, than for the native people in India" (Thorold Rogers 442).

While the arguments that famine resulted from excessive population or from missing rail lines might seem to represent radically different ways of theorizing famine, thinking about them as wholly separate risks leaving aside how population and infrastructure were being jointly constituted in the nineteenth century. In nineteenth-century British writing, the claim that famine should be understood in relation to capital arose chiefly among the least apologetic of capitalists, namely those advocating the expansion of the Indian railways as the best means of preventing and relieving starvation. Population in such texts tends to be read as a uniform whole: rail and other infrastructure projects, that is, were imagined to benefit the population in its entirety, ignoring the distinctions between, for instance, the landless and the landed. Setting aside any thinking about the immiseration of the region's most vulnerable people, the belief that famine was the product of some essential relationship between population and infrastructure was instrumental in rendering less visible the social relations responsible for mass starvation.

published August 2017

HOW TO CITE THIS BRANCH ENTRY (MLA format)

Frederickson, Kathleen. "British Writers on Population, Infrastructure, and the Great Indian Famine of 1876-8." BRANCH: Britain, Representation and Nineteenth-Century History. Ed. Dino Franco Felluga. Extension of Romanticism and Victorianism on the Net. Web. [Here, add your last date of access to BRANCH].

WORKS CITED

Abstract of the Proceedings of the Council of the Governor General of India, Assembled for the Purpose of Making Laws and Regulations. 1878. XVII. Calcutta: Office of the Superintendent of Government Printing, 1879. Print.

Ambirajan, Srinivas. Classical Political Economy and British Policy in India. Cambridge: Cambridge UP, 1978. Print.

Bagchi, Amiya Kumar. Colonialism and Indian Economy. New Delhi: Oxford UP, 2010. Print.

Banerjee, Sukanya. Becoming Imperial Citizens: Indians in the Late-Victorian Empire. Durham: Duke UP, 2010. Print.

Bulwer-Lytton, Robert. "Minute by His Excellency the Viceroy, Dated Simla, 12th August 1877." Copy of Correspondence between the Secretary of State for India and the Government of India on the Subject of the Famine in Western and Southern India. IV. London: George Edward Eyre and William Spottiswoode, 1878. Print. East India (Famine Correspondence).

Census of England and Wales for the Year 1871 General Report. IV. London: George Edward Eyre and William Spottiswoode, 1873. Print.

Chaudhuri, Binay Bhushan. Peasant History of Late Pre-Colonial and Colonial India. VIII Part 2. New Delhi: Centre for Studies in Civilizations, 2008. Print. History of Science, Philosophy and Culture in Indian Civilization.

Cotton, Arthur. The Madras Famine with Appendix Containing a Letter from Miss Florence Nightingale and Other Papers. London: Simpkin, Marshall, & Co, 1877. Print.

Davis, Mike. Late Victorian Holocausts: El Niño Famines and the Making of the Third World. London: Verso, 2002. Print.
Digby, William. The Famine Campaign in Southern India (Madras and Bombay Presidencies and Province of Mysore) 1876-1878. Vol. 1. London: Longmans Green, 1878. Print.

Drysdale, Charles R. The Population Question According to T. R. Malthus and J. S. Mill, Giving the Malthusian Theory of Over-Population. London: George Standring, 1892. Print.

Dutt, Romesh Chunder. The Economic History of India under Early British Rule: From the Rise of the British Power in 1757 to the Accession of Queen Victoria in 1837. 2nd ed. London: Kegan, Paul, Trench, Trübner & Co, 1906. Print.

—. India in the Victorian Age: An Economic History of the People. London: Kegan Paul, Trench, 1904. Print.

Fukazawa, H. "Agrarian Relations: Western India." The Cambridge Economic History of India. Ed. Dharma Kumar and Meghnad Desai. Vol. 2. Cambridge: Cambridge UP, 1983. 177–206. Print.

Ghose, Ajit Kumar. "Food Supply and Starvation: A Study of Famines with Reference to the Indian Sub-Continent." Oxford Economic Papers 34.2 (1982): 368–89. Print.

Hansard's Parliamentary Debates. CCXVIII. 3rd Series. London: Cornelius Buck at the Office for Hansard's Parliamentary Debates, 1874. Print.

Jevons, William Stanley. "Papers Set by Jevons as External Examiner at Cambridge University: Moral Sciences Tripos, Tuesday Dec 1 1874." Papers and Correspondence of William Stanley Jevons. Ed. R. D. Collison Black. VII. London: Palgrave Macmillan, 1981. 132. Print.

Kranton, Rachel, and Anand Swamy. "The Hazards of Piecemeal Reform: British Civil Courts and the Credit Market in Colonial India." Journal of Development Economics 58 (1999): 1–24. Print.

"Lord G. Hamilton, M.P., on the Indian Famine." The London Evening Standard 6 Oct. 1877: 2. Print.

Malthus, Thomas. An Essay on the Principle of Population. Oxford: Oxford World Classics, 2008. Print.

Manikumar, K. A. "Impact of British Colonialism on Different Social Classes of Nineteenth-Century Madras Presidency." Social Scientist 42.5/6 (2014): 19–42. Print.

Marx, Karl. Capital Volume II. Trans. David Fernbach. London: Penguin, 1993. Print.

Mill, John Stuart. Principles of Political Economy with Some of Their Applications to Social Philosophy. 5th ed. Vol. 1. London: Parker, Son and Bourne, 1862. Print.

Mishra, H. K. Famines and Poverty in India. New Delhi: Ashish, 1991. Print.

Mukherjee, Upamanyu Pablo. Natural Disasters and Victorian Empire. Basingstoke: Palgrave Macmillan, 2013. Print.

Naidu, A. Nagaraja. "Famines and Demographic Crisis–Some Aspects of the De-Population of Lower Castes in Madras Presidency, 1871-1921." Dalits and Tribes of India. Ed. J. Cyril Kanmony. New Delhi: Mittal, 2010. Print.

Naoroji, Dadabhai. "Memorandum on Mr. Danvers's Papers of 28th June 1880 and 4th January 1879." Essays, Speeches, Addresses and Writings on Indian Politics of the Honorable Dadabhai Naoroji. Bombay: Caxton Printing Works, 1887. 441–64. Print.

Nayak, Ganeswar. "Transport and Communication System in Orissa." Economic History of Orissa. Ed. Nihar Ranjan Patnaik. New Delhi: Indus Publishing, 1997. 345–55. Print.

"Neglected Aspects of the Indian Famine." The Economist 28 Mar. 1874: 378–9. Print.

Phule, Jotirao. "Cultivator's Whipcord." Selected Writings of Jotirao Phule. Ed. G. P. Deshpande. Trans. Aniket Jaaware. New Delhi: LeftWord, 2002. Print.

Rangasami, Amrita. "'Failure of Exchange Entitlements' Theory of Famine: A Response." Economic and Political Weekly XX.42 (1985): 1797–1801. Print.

Ray, Urmita. "Subsistence Crises in Late Eighteenth and Early Nineteenth Century Bihar." Social Scientist 41.3/4 (2013): 3–18. Print.
Report of the Indian Famine Commission: Part II Measures of Protection and Prevention. London: George Edward Eyre and William Spottiswoode, 1880. Print.

Ricardo, David. The Principles of Political Economy and Taxation. 3rd ed. London: John Murray, 1821. Print.

Roy, Parama. Alimentary Tracts: Appetites, Aversions, and the Postcolonial. Durham: Duke UP, 2010. Print.

Saha, Poulomi. Object Imperium: Gender, Affect, and the Making of East Bengal. MS.

Samal, J. K. Economy of Colonial Orissa 1866-1947. New Delhi: Munshiram Manoharlal, 2000. Print.

Sami, Leela. "Gender Differentials in Famine Mortality: Madras (1876-78) and Punjab (1896-97)." Economic and Political Weekly 37.26 (2002): 2593–2600. Print.

Schabas, Margaret. The Natural Origins of Economics. Chicago: U of Chicago P, 2007. Print.

Sen, Amartya. "Imperial Illusions." The New Republic 31 Dec. 2007: 28. Print.

—. Poverty and Famines: An Essay on Entitlement and Deprivation. Oxford: Oxford UP, 1983. Print.

Sweeney, Stuart. Financing India's Imperial Railways 1875-1914. New York: Routledge, 2016. Print.

—. "Indian Railways and Famine 1875-1914: Magic Wheels and Empty Stomachs." Essays in Economic & Business History 26 (2008): 147–57. Print.

"The Famine in India." Manchester Courier and Lancashire General Advertiser 5 June 1877: 6. Print.

Thorold Rogers, James, ed. Public Addresses by John Bright, M.P. London: Macmillan, 1879. Print.

Wakimura, Kouhei. "The Indian Economy and Disasters during the Late Nineteenth Century: Problems of Interpretation of Colonial Economy." The BRICs as Regional Economic Powers in the Global Economy. Ed. Takahiro Sato. Sapporo: Slavic-Eurasian Research Center, 2012. Print.

"War and Famine." Leicester Chronicle and Leicestershire Mercury 8 Sept. 1877: 4. Print.

Williams, A. Lukyn. Famines in India: Their Causes and Possible Prevention. London: Henry S. King, 1876. Print.

ENDNOTES

Davis summarizes the claims about mortality during the Indian famines of 1876-9 by noting that William Digby's 1901 'Prosperous' British India puts the number at 10.6 million; Arup Maharatna's The Demography of Famine (1996) at 8.2 million; and Roland Seavoy's Famine in Peasant Societies (1986) at 6.1 million (Davis 7).

On the depopulation of landless castes during the famine, see Naidu 36.

See Davis 33. On the significance of American cotton for cultivators in India, see Kranton and Swamy 6. On the consequences of conversion to cash crops and the difficulties of converting fields to new crops in times of distress, see Manikumar 25.

As Amiya Bagchi points out, the British government would "curb" property rights in order to secure the land revenue that financed the imperial state (196).

Binay Chaudhuri emphasizes, against earlier critics, that little property was formally transferred to moneylenders during and immediately after the famine years. According to Chaudhuri, this lack of formal transfer occurred because the moneylending vanis were from non-cultivating castes, and because mortgaging rather than purchasing land would allow them control of the produce of the land without taking on the obligations associated with land-ownership (539).

G. P. Deshpande notes that "Cultivator's Whipcord (Shetkaryacha Asud) was written in 1883 but the publication of the entire text was delayed because, as Phule put it, 'We the shudras have amongst us cowardly publishers'. Nor was it written at one go. Phule did public readings of the various chapters of the book as they got written" (113).
Writing about William Digby’s exposé of the southern Indian famine, for instance, Mukherjee emphasizes that Digby was particularly deft at “the vivid representation of the decayed, dying and abject Indian bodies—people following rice carts to nibble at the grain that fell from it; a family starving to death in sight of thousands of bags of grain that had been hoarded and priced beyond their reach; dogs fighting over the bloated corpse of a young child, and above all, the skeletal specters of the famished” (44). Among Malthusians in Britain, in turn, Indian famine offered a clear example of the Essay’s precepts. The doctor Charles Drysdale—the Malthusian League’s co-founder and first President—observed of the 1876-8 famine that “it would be a wonder” if famines were not “endemic in that over-peopled country” (90). For Drysdale and many others, the famines of the 1870s seemed to offer material evidence of Malthus’s hypothesis. Parama Roy observes that the British response to famine “relief” during this period entailed “dislocating starving populations from their homes and aggregating them in dormitory camps, demanding hard labor of the recipients as a condition of their receipt of food, and imposing a ‘distance test’ that denied work to able-bodied men and older children within ten miles of their homes” (117). Mike Davis notes that the Temple wage “provided less sustenance for hard labor than the diet inside the infamous Buchenwald concentration camp and less than half of the modern caloric standard recommended for adult males by the Indian government” (38). Kouhei Wakimura makes the point that “we need to analyze the specific causes of death in order to understand why famines brought about huge human mortality. We find that more deaths were due to diseases than starvation. Such diseases as cholera, smallpox, diarrhea, dysentery, and malaria combined with famine to produce large-scale mortality” (79). See also Davis 110. Dutt even follows Warren Hastings’ claim that a full third of the population of Bengal died as a result of the famine (Ray 7). Sen reports that, while the famine was certainly catastrophic, a third is perhaps an overestimate. While Ricardo’s Principles were, over the course of the 1870s, gradually being supplanted by the new neoclassical marginalism of William Stanley Jevons and others, Ricardian economics nonetheless retained a high degree of influence on the economic thinking apparent in administrative policy writing and in the popular press throughout the 1870s. Ricardo writes that “It is only then because land is of different qualities with respect to its productive powers, and because in the progress of population, land of an inferior quality, or less advantageously situated, is called into cultivation, that rent is ever paid for the use of it. When, in the progress of society, land of the second degree of fertility is taken into cultivation, rent immediately commences on that of the first quality, and the amount of that rent will depend on the difference in the quality of these two portions of land” (57). John Stuart Mill was responsible for the theory’s popularity through midcentury. Mill writes that “Settlers in a new country invariably [commence] on the high and thin lands; the rich but swampy soils of the river bottoms cannot at first be brought into cultivation, by reason of their unhealthiness, and of the great prolonged labor for clearing and draining them” (178). While Mill cites the American political economist H. C.
Carey as the source of this view, its basis in Ricardian thinking on ground rent remains clear. Margaret Schabas observes that “Jevons was the first to explore the question of the dimensions of economic variables, and he found that, in many cases, the physical component dropped out of the analysis. . . . Indeed, the material attributes of goods are of no consequence to the analysis” (13). In Famines in India, Arthur Lukyn Williams suggests that “in England, as has been observed, there is always a natural scarcity of food, but the ‘actual pressure’ is never felt; because, as Dr. Hunter says, ‘The whole tendency of modern civilization is to raise up intervening influences which render the relation of annual pressure to natural scarcity less certain and less direct, until the two terms which were once convertible come to have very little connection with each other’” (15). The Famine Commission listed Oudh [Awadh] as more densely populated than Bengal, but then noted that “The average in the case of Bengal and the North-Western Provinces is brought down by the large area of mountainous and thinly peopled hill country. . . In Bengal there are 17 districts in which the population is over 500 to the square mile” (Report of the Indian Famine Commission: Part II Measures of Protection and Prevention 86). In these districts, “the population presses closely on the means of subsistence, and here unless the existing system of agriculture is improved, so as to yield a larger produce per acre, there is no room for an increase of the population” (77). Arthur Lukyn Williams, in a prize-winning Cambridge essay, noted that while Assam and Chota Nagpur remain thinly populated, “the food-producing area cannot average less than 650 souls to the square mile. . . . Yet great as is this density of population, Sir R. Temple shows that it is not at present too much for the land” (74-5). Famine mortality statistics varied widely. Davis notes that official records for the 1873-4 Bengal famine claim that only 23 people died of starvation; regardless of mortality numbers, scholars agree that the response, to quote Davis, “was the only truly successful British relief effort in the nineteenth century” with its provision of a “gratuitous dole,” relief works, and importation of emergency rice from Burma (36). Many starvation deaths were reported as cholera deaths because it was more palatable to the British administration. See Davis 34. Examining the technology of the 1871 imperial census of India and its account of the economics of Bengal, Poulomi Saha points out that the census offered “evidence” of the idea that “Bengal meted out its communal population . . . along the very axis of rural dependence and urban development”. See Object Imperium: Gender, Affect, and the Making of East Bengal (manuscript). Although it was more sympathetic to giving aid than The Economist, the Leicester Chronicle borrowed the former’s reasoning in holding British rule responsible for causing, through supposed prosperity, the population growth that it held responsible for the famine. “The appeal comes,” the Chronicle observes, “from fellow-subjects, from those whose growth of population we have somewhat abnormally encouraged” (“War and Famine” 4). Davis notes that “We are not dealing, in other words, with ‘lands of famine’ becalmed in stagnant backwaters of world history, but with the fate of tropical humanity at the precise moment (1870-1914) when its labor and products were being dynamically conscripted into a London-centered world economy.
Millions died, not outside the ‘modern world system,’ but in the very process of being forcibly incorporated into its economic and political structures” (8–9). See, for instance, coverage in the Manchester Courier (“The Famine in India” 6). “It has been suggested that, during the Orissa famine, the deaths and misery were not due to non-intervention on the part of the government as such, but due to the difficulties of transportation: firstly there were no good roads to move foodgrains from adjoining areas, and secondly the heavy monsoon that came so soon after the severe famine prevented even the use of sea-transport. There were practical difficulties, no doubt . . . But after reading the mass of correspondence and official papers of the period, one cannot resist the conclusion that the officials as a class swore by the principles of non-interference in the grain market” (Ambirajan 76–7). On the lack of roads in Orissa, see Mishra 10. J. K. Samal reports that “The grave deficiency of communications, which still existed as late as 1866, was made apparent in the great Orissa Famine, when it was said that ‘the people were shut in between pathless jungles and impracticable seas, and were like passengers in a ship without provisions.’ After the famine, British authority felt the urgent need for taking positive steps for developing road system in Orissa” (74–5). Marx notes that “the sudden increase in the demand for cotton, jute, etc., due to the American Civil War led to a great limitation of rice cultivation in India, a rise in the price of rice, and the sale of old stocks of rice by the producers. On top of this, there was the unparalleled export of rice to Australia, Madagascar, etc., in 1864-66. Hence the acute character of the famine of 1866, which carried off a million people in Orissa alone” (218). On the conversion to cash crops such as jute and cotton, see also Saha. As Stuart Sweeney observes, “generous government rail budgets were out of kilter with investment in irrigation and general industrial development” (“Indian Railways” 148). Davis notes that while Naoroji and Dutt published the most famous of their claims at the beginning of the twentieth century, their “basic polemical strategy—mowing down the British with their own statistics” was already in place in the 1870s (56); indeed, Davis observes, Naoroji read “The Poverty of India” in 1876 in Bombay, in advance of publishing Poverty and Un-British Rule in India in 1901 (56). Naoroji writes that “If the mere movement of produce can add to existing wealth, India can become rich in no time. All it would have to do, is to go on moving its produce continually over India, all the year round, and under the magic wheels of the train, wealth will go on springing, till the land will not suffice to hold it. But there is no Royal (even railway) road to material wealth. It must be produced from the materials of the Earth till the great discovery is made of converting motion into matter” (444). As Sukanya Banerjee points out, “it is not so much the extraction of wealth that Naoroji criticizes, but the patterns of circulation into which that wealth is routed (or not)” (47–8). On Naoroji and railways, see also Sweeney, Financing India’s Imperial Railways 1875-1914 44. As Sweeney points out, “representatives of the British manufacturing and service sectors benefited from famine protective railway construction.
By contrast, the amount of imported material involved in the construction of canals and water tanks for irrigation was negligible” (“Indian Railways” 152). Sen argues that “starvation . . . is a function of entitlements and not of food availability as such. Indeed, some of the worst famines have taken place with no significant decline in food availability per head” (Poverty and Famines 7).
Few scientific theories have been accepted as quickly as the idea that an asteroid killed the dinosaurs. The idea has even inspired Hollywood blockbusters. Some of my colleagues will even tell you ground zero is the Chicxulub impact structure located off the coast of the Yucatan peninsula in Mexico. And most people seem happy to accept their word. Because I’m a scientist, people will ask me if there is life on Mars, if global warming is for real, or even if the moon landings were a hoax, but not once has someone asked me if I thought the dinosaurs were really toasted by a giant asteroid. After all, how else would you suddenly kill off an animal as cool as a T-Rex? Back in 1980, the principal piece of evidence suggesting an asteroid was to blame came from a little-known element called iridium. When the Alvarez group analyzed sediments deposited right after the dinosaurs died 65 million years ago—a moment in geologic time known as the K/T boundary—they found concentrations of iridium, which is extremely rare on Earth but abundant in asteroids. At the time, no one had identified an impact crater that was 65 million years old, so scientists started looking. Eventually geologists found evidence for such a crater in drill samples taken while prospecting for oil in the Yucatan. To many the "smoking gun" had been found. Scientists are a skeptical bunch by nature, however, and while the press and Hollywood took the idea and ran with it, some of us began shaking our heads. One of the problems with the Chicxulub impact crater is that it’s about 100 miles in diameter. An important tenet of science is that a theory only has validity if it works all the time. Basically that means that other craters 100 miles in diameter or larger should also have wiped out most life on Earth. While few craters rival the size of Chicxulub, there are the 60-mile-wide craters Manicouagan in Canada and Popigai in Russia. There is also the 150-mile-wide Sudbury crater in Canada and the 180-mile-wide Vredefort crater in South Africa. No mass extinctions have been associated with these impact structures, and Earth has experienced plenty of mass extinctions. Work by Gerta Keller of Princeton University and her colleagues also suggests that Chicxulub may have formed 300,000 years before the dinosaurs actually went extinct. And so far, only the K/T boundary appears to show an iridium "anomaly," suggesting that some other mechanism is responsible for other mass extinctions on Earth. While this evidence remains controversial, more and more scientists are giving up on Chicxulub as the smoking gun. So how important was the giant impact in killing off the dinosaurs? Despite all the hype, it may have had nothing to do with it. At the same time the dinosaurs were dying, an enormous volcanic eruption was occurring in what is now India. More than 12,000 cubic miles of lava poured out onto Earth’s surface within a very short period of time. The ash and gases associated with this eruption would have certainly affected climate in much the same way as a giant impact would. These so-called flood basalts have occurred throughout Earth’s history, and each time they’ve been associated with large mass extinctions. At a recent meeting of the American Geophysical Union, Courtillot showed a relation between the larger cycles of magnetic reversals and mass extinctions. Many different pieces of his theory seem to fit together nicely. Of course, this explanation has some problems as well.
At the same AGU meeting, I had the opportunity to chat with geophysicist Silver, who argues that magma actually forms enormous puddles under the continents over time. The giant stresses and faulting that occur when crustal plates collide create the pathways for this magma to reach the surface—usually over very short timescales. What I particularly like about Silver’s paper is that he provides ideas for testing his model, including seismic surveys that may show magma puddles forming under the continents. There is still a lot of work to do. But the next time you see a picture of dinosaurs running from a giant asteroid falling from the sky, you might also want to think about the magma slowly collecting beneath your feet. Posted January 16, 2007.
The Sun is by far the largest object in the solar system. It contains more than 99.8% of the total mass of the Solar System (Jupiter contains most of the rest). It is often said that the Sun is an "ordinary" star. That's true in the sense that there are many others similar to it. But there are many more smaller stars than larger ones; the Sun is in the top 10% by mass. The median size of stars in our galaxy is probably less than half the mass of the Sun. Jupiter is the fourth brightest object in the sky (after the Sun, the Moon and Venus). It has been known since prehistoric times as a bright "wandering star". But in 1610, when Galileo first pointed a telescope at the sky, he discovered Jupiter's four large moons Io, Europa, Ganymede and Callisto (now known as the Galilean moons) and recorded their motions back and forth around Jupiter. In Roman mythology Mercury is the god of commerce, travel and thievery, the Roman counterpart of the Greek god Hermes, the messenger of the Gods. The planet probably received this name because it moves so quickly across the sky. Mercury has been known since at least the time of the Sumerians (3rd millennium BC). The IAU changed the definition of "planet" so that Pluto no longer qualifies. There are officially only eight planets in our solar system. Of course this change in terminology does not affect what's actually out there. In the end, it's not very important how we classify the various objects in our solar system. What is important is to learn about their physical nature and their histories. Planet order from the Sun: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune. Our knowledge of our solar system is extensive but it is far from complete. Some of the worlds have never even been photographed up close. The Nine Planets is an overview of what we know today. We are still exploring, much more is still to come: We shall not cease from exploration, and the end of all our exploring will be to arrive where we started and know the place for the first time. -- T. S. Eliot Other Educational Resources & Notes - Astronomy picture of the day - For a full list of contents please see here. - For information on geography please visit Physical Geography. - Solar system tour courtesy of Solar System Scope
Significant Figures are the digits of a number which are used for expressing the necessary degree of accuracy, starting from the first non-zero digit. The number of significant figures in a result indicates the number of digits that can be used with confidence. The idea of a significant figure is simply a matter of applying common sense when dealing with numbers. An important characteristic of any numerical value is the number of digits, or significant figures, it contains. A significant figure is defined as any digit in the number ignoring leading zeros and the decimal point. Examples of Significant Figures - 821 has 3 significant figures - 0.0310 has 3 significant figures
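The counting rule above is mechanical enough to check in code. The following is a minimal Python sketch of that rule (the function name and examples are illustrative, not from the original text). Note that it treats trailing zeros as significant, exactly as the definition here implies, even though the trailing zeros of a bare integer such as 4700 are often considered ambiguous in practice.

```python
def count_sig_figs(numeral: str) -> int:
    """Count significant figures per the definition above:
    every digit counts except leading zeros; the decimal point is ignored."""
    digits = numeral.lstrip("+-").replace(".", "")  # drop sign and decimal point
    return len(digits.lstrip("0"))                  # leading zeros are never significant

# The two examples from the text: both have 3 significant figures.
assert count_sig_figs("821") == 3
assert count_sig_figs("0.0310") == 3
```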
We used the term relation to represent our generalisation of the stream concept. A relation is a data structure which is used to organize linguistic items into linguistic structures such as trees and lists. A relation is a set of named links connecting a set of nodes. Lists (streams in the MLDS) are linear relations where each node has a previous and next link. Nodes in trees have up and down links also. Nodes are purely positional units, and contain no information apart from their links. In addition to having links to other nodes, each node has a single link to an item, which contains the linguistic information. Figure 1 shows an example utterance with a syntax relation and a word relation. The word relation contains nodes which have next and previous connections, whereas the syntax relation has up and down connections in addition to next and previous. Each node in the syntax tree is linked to an item, each of which has a feature, CAT, giving its syntactic category. Terminal nodes in a syntax tree are words, and so an additional feature, name, is used here. The nodes in the word relation, which is a linear list, are also linked to the items that are linked to the terminal nodes in the syntax tree. In this way, node structures of arbitrary complexity can be constructed, and they can be intertwined in a natural way by having links from different nodes to the same item.
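To make the separation between nodes and items concrete, here is a minimal sketch of these structures in Python. It is illustrative only, not the system's actual API: the class and attribute names are invented, and only the next/previous links of a linear word relation are exercised; a tree relation would additionally set the up and down links.

```python
class Item:
    """Carries the linguistic information, e.g. CAT and name features."""
    def __init__(self, **features):
        self.features = features

class Node:
    """Purely positional: nothing but links, plus a single link to an Item."""
    def __init__(self, item):
        self.item = item
        self.next = self.prev = None  # list links
        self.up = self.down = None    # tree links, used by tree relations

class Relation:
    """A named set of links over nodes; here built as a linear list."""
    def __init__(self, name):
        self.name = name
        self.nodes = []

    def append(self, item):
        node = Node(item)
        if self.nodes:                 # wire the next/previous links
            self.nodes[-1].next = node
            node.prev = self.nodes[-1]
        self.nodes.append(node)
        return node

# Two relations can share the same items: the nodes of a word relation and
# the terminal nodes of a syntax relation would both link to these Items.
words = Relation("word")
for feats in ({"CAT": "det", "name": "the"}, {"CAT": "noun", "name": "cat"}):
    words.append(Item(**feats))
```

The design point the passage makes is visible here: deleting or re-linking a Node changes one structure without touching the Item, so the same linguistic content can sit in several intertwined structures at once.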
The word “utopia” refers to a perfect, or nearly perfect, place or idea. The word was coined by Sir Thomas More in 1516, who used it as the title of his book, Utopia, which describes an island with all the qualities of a perfect society. The word “utopia” comes from the Greek meaning “no place,” playing into the idea that truly utopian civilizations are impossible. Definition and Explanation of Utopia A utopian community or ideal is defined by its equality in economics, governmental policies, and the justice system. Utopias do not have any of the negative features of modern society, such as inequality, mass incarceration, poverty, or untimely death. In utopias, disease has been all but eliminated, discrimination is nonexistent, and men and women of all colors and cultures are treated equally. Today, the word is commonly paired with its opposite, dystopia. In fact, much “utopian” literature is dystopian. This is because a utopia to one person is not likely a utopia to another. For example, the world of 1984 is perfect for those at the top, those who play by their own rules and control the men and women below them. But for people like Winston Smith, the world could not be more dystopian if it tried. Why Do Writers Write About Utopias? It is much more likely today that a writer will set out to combine the world of a utopia with the world of a dystopia and end up with a multilayered, unequal hierarchy of wealth and extreme poverty. Utopian worlds quickly fall apart, even with the best intentions. This allows for any number of complex plots to evolve. Men and women are often featured as resistance leaders, hoping against all hope to find some way to correct the imbalance and bring about a better world. The idea of a utopia is far more important in literature than an actual, physical, functional utopia. It gives the character something to hope and strive for. Dystopia and Utopia When one is first confronted with these two words, they appear to be opposites. But the truth is more complicated than that. In reality, the latter, a utopia, is defined by its impossibility. It inherently leads to a dystopia marked by its lack of equality, disastrous living conditions, rampant injustice, and usually violence. Writers choose to write dystopias to convey something about contemporary society. This is true regarding all the best dystopian novels, including 1984, The Handmaid’s Tale, A Clockwork Orange, and Brave New World. The latter is one of the best examples of how a utopian ideal, or the quest to develop a utopia, devolves into something at its heart dystopian. Examples of Utopias in Literature Example #1 Herland by Charlotte Perkins Gilman This lesser-known 1915 novel depicts a utopia on an isolated island. It is run and populated entirely by women who reproduce asexually. Men have been eliminated from the equation, leading to peace and happiness for the women and children. But, as with almost all utopias, it’s not perfect for everyone. The novel details male visitors to the island who are shocked by what they witness. There is no place for them there. Example #2 Utopia by Sir Thomas More More’s Utopia is a classic example. In the novel, he explores the possibility, or lack thereof, that a utopia could actually exist. He focuses primarily on economic rights and equal justice under law. The book is a work of fiction as well as a satirical frame narrative. It depicts a fictional island and its customs. It was published in 1516 in Latin.
Example #3 Gulliver’s Travels by Jonathan Swift Over the course of his travels, Gulliver comes upon a group of horses called the Houyhnhnms. The horse society is as close to a utopia as it is possible to conceive of. The female and male horses are educated the same and exhibit virtues that any human being should be jealous of. There are no laws, and the creatures value friendship and temperance above all else. As with novels, the best utopian films are at their heart dystopian. One of the best examples is Minority Report, which is based on the short story of the same name by Philip K. Dick. The plot follows the PreCrime division of the police department, a group tasked with arresting offenders before they commit crimes. The foreknowledge of the crimes-to-be comes from psychics, called pre-cogs, who are effectively kept in slavery in the police station. Their information is fed into a machine, and John Anderton and his officers are tasked with following it. Elysium is another contemporary example of a film that is in part utopia and part dystopia. It follows Matt Damon’s character as he fights to gain access to Elysium, a space habitat to which the richest of Earth’s citizens have moved. Synonyms: perfect, ideal, idyllic, paradise, heaven, fantasy world, harmonious, illusory, bliss, and transcendental. Related Literary Terms - Antagonist—a character who is considered to be the rival of the protagonist. - Anti-Hero—a character who is characterized by contrasting traits. This person has some of the traits of a hero and a villain. - Audience—the group for which an artist or writer makes a piece of art or writes. - Dystopia—the opposite of a utopia. It is an imagined place or community in which the majority of the people suffer. - Mood—the feeling created by the writer for the reader. It is what happens within a reader because of the tone the writer used in the poem. Other Resources on Utopia - Watch: Is Utopia Always Dystopia? Is Utopia Possible? - Watch: 10 Failed Utopias From History - Read: Utopia by Sir Thomas More
Is it necessary to feed plants? Containerised plants need regular feeding, as they only have what you give them. Plants in beds and borders, by contrast, are able to use the resources present in the garden soil, and may not need feeding. Ornamental trees and shrubs in garden soil may not need regular feeding with fertiliser. Some crops that do benefit from regular fertiliser are fruit, vegetables and bedding plants. Gardeners often assume that poor growth in garden plants is related to a lack of soil nutrients and give fertiliser. In fact, results from the RHS Soil Analysis Service show that shortages of plant nutrients in the soil are quite rare. Usually poor growth is due to other environmental factors such as drought, waterlogging and weather damage. Pests and diseases are also responsible for poor growth in plants. Soils vary in their nutrient levels. Sandy soils and chalky soils tend to be lower in nutrients than clay or loam soils. Soils also vary in the availability of nutrients. Soils that are dry, waterlogged, very acid or very alkaline may not allow plants to access existing nutrients. Correcting these factors (where possible) may be more effective than giving fertiliser, and in fact may be necessary for fertilisers to be effective. Soil: understanding pH and testing soil RHS Soil Analysis Service
There has been growing concern from countries all around the globe regarding the problem of air pollution. It is so serious that they came up with an international treaty named the “Kyoto Protocol,” which is basically a treaty that requires the industrialized countries to reduce the emission of greenhouse gases. Trees are the main component in reducing the amount of harmful gases in the air, especially carbon dioxide, converting it to oxygen. This process is called photosynthesis. If trees were to be reduced, then the percentage of carbon dioxide would be greater than that of oxygen. Carbon dioxide attacks the ozone layer. If there is too much carbon dioxide, the ozone layer cannot endure; it loses the “battle” and is left with a hole. With this, more UV light infiltrates through the ozone layer. The air gets warmer and warmer, and this is recognized as a threat to human health as well as to the Earth’s ecosystem. 2.2 FACTORY WASTE In general, factories use coal and petroleum to produce electricity or for manufacturing goods. These types of factories are dominant in developing countries, which cannot afford to replace them with a greener power supply, for example wind turbines for electricity. The smoke from these factories is released into the air and thus into its surroundings. Coal and petroleum often contain sulfur compounds; their combustion generates sulfur dioxide, and further oxidation of it (a reaction in which nitrogen dioxide can play a part) produces sulfuric acid. Sulfuric acid is one of the most dangerous acids; its pH can range from 0.3 to 2.1. 2.3 HUMAN ACTIVITIES Human activities are the ones that contribute most to the emission of harmful gases to the environment, especially carbon monoxide. One of them is open burning. Humans often burn their litter in the open outside their houses, and thus black smoke is introduced to the atmosphere. People have realized that the occurrence of open burning has gone wild, and governments have taken several measures to reduce this from happening. Another example of human activities is the use of cars. Cars produce carbon monoxide, an odorless, colorless, non-irritating but harmful gas. Most people prefer to have their own vehicles rather than taking public transport. Human activities can lead to haze, which has occurred in several countries. 2.4 NATURAL HAZARDS Natural hazards are among the uncontrollable sources of pollution; they include volcanic eruptions, thunderstorms and hot weather. Volcanic eruptions also release sulfur and other gases such as nitrogen compounds. Nitrogen dioxide is also a product of thunderstorms by electric discharge. It can be seen as the brown haze dome above cities. Nitrogen dioxide has a characteristic sharp, biting odour. It is one of the prominent air pollutants. 3. EFFECTS OF AIR POLLUTION The following are some effects of air pollution. 3.1 SMOG Smog is one of the most visible effects. It is described as a combination of smoke and fog. However, it is actually the mixture of pollutants (such as carbon dioxide) and ground-level ozone (the poisonous gas ozone, O3). This fog-like smoke blankets many cities and obscures skylines around the world with its haze. Additionally, it affects not only the people who breathe it but also all the systems that rely on the circulation of air. When the smog is particularly heavy, the dust and grime can impact machinery by clogging filters and gears. 3.2 ACID RAIN Acid rain is produced due to acidification.
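For reference, the formation of sulfuric acid described above can be summarized with the standard textbook reactions; these equations are a general chemistry illustration added here, not taken from the original essay:

\[
2\,\mathrm{SO_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{SO_3},
\qquad
\mathrm{SO_3} + \mathrm{H_2O} \rightarrow \mathrm{H_2SO_4}
\]

The resulting acid dissolves in cloud droplets, which is the acidification step described next.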
Chemicals such as sulfuric acid or nitric acid form when pollutants (such as sulfur dioxide and nitrogen oxides) enter the atmosphere and combine with the water droplets that make up clouds; the water droplets become acidic, forming acid rain. When it falls to earth, it has numerous consequences. 3.2.1 Effects on Animals When acid rain falls, it can pollute existing water bodies such as rivers, streams and lakes, which can harm and damage the fish and other aquatic life in the freshwater ecosystem. 3.2.2 Effects on Plants The acid also affects plants and trees. It can kill a forest by affecting the leaves and bark, and it increases the susceptibility of plants to attack by insect herbivores. When acid rain infiltrates soils, it raises the acidity of the soil, changing the chemistry of the soil and making it unfit for many living things that rely on soil as a habitat or for nutrition. 3.2.3 Effects on the Physical Environment The effects of acid rain on the physical environment are drastic (according to the U.S. Environmental Protection Agency). The physical environment here means human construction: any item made of stone, such as buildings, monuments and statues, will be eaten away by the acid. It also erodes paint and damages building materials. 3.3 GLOBAL EFFECT One of the biggest effects of air pollution is its global reach. 3.3.1 Climatic Change Global currents carry chemicals and particles around the world. This results in increased temperatures, meaning that areas such as the Arctic, which have no vehicles or industry, are still affected by air pollution. It melts the polar ice caps and raises the ocean level. 3.3.2 Depletion of the Ozone Layer Ozone depletion means losing Earth’s protective layer, caused by a number of chemicals that have been released into the air such as chlorofluorocarbons (CFCs). This results in increasing amounts of harmful ultraviolet (UV) radiation reaching the earth. 3.3.3 Global Warming Air pollution affects the whole Earth ecosystem. Another aspect of air pollution is global warming, caused by the excess of carbon dioxide put into the atmosphere through human activities. This traps heat in the earth’s atmosphere, causing dramatic climatic changes around the world. 3.4 HEALTH ISSUES Air pollution causes numerous health consequences for people. The level of effects depends on the length of time of exposure, as well as the kind and concentration of chemicals and particles exposed to. 3.4.1 Short-term Health Effects Air pollution causes irritation of the eyes, nose and throat, and it can lead to upper respiratory infections like bronchitis and pneumonia. Headaches, nausea and allergic reactions can also occur. Pollution can also cause asthma attacks and emphysema, aggravating the medical condition of individuals. 3.4.2 Long-term Health Effects Air pollution can lead to chronic respiratory disease and lung cancer, in addition to heart disease and damage to the brain, nervous system, liver and kidneys. Continual exposure to air pollution affects the lungs of growing children and may aggravate or complicate medical conditions in the elderly. 3.5 ECONOMIC LOSSES Air pollution can also harmfully affect the economy. With rising health problems among the population, health care costs rise. With air pollution causing illnesses among people in the workforce, time and productivity are lost. Governments suffer from an inefficient workforce.
Areas of the world with high air pollution, such as the Philippines, also often report a drop in tourism and a loss of foreign investment. Crop and agricultural losses can be attributed to dying crops, decreasing fishing areas, and shrinking forest areas. Other economic losses include the cost of repairing damage to buildings and increased costs of cleaning. 4. CONCLUSION 4.1 CAUSES OF AIR POLLUTION The causes of air pollution can come from anywhere: from human activities, from natural processes, or from the combination of both. As the Mexico City case study written by Michelle Hibler states: “Many factors have contributed to this situation: industrial growth, a population boom (from 3 million in 1950 to some 20 million today), and the proliferation of vehicles. More than 3.5 million vehicles, 30% of them more than 20 years old, now ply the city streets.” “Geography conspires with human activity to produce a poisonous scenario. Located in the crater of an extinct volcano, Mexico City is about 2,240 meters above sea level. The lower atmospheric oxygen levels at this altitude cause incomplete fuel combustion in engines and higher emissions of carbon monoxide, hydrocarbons, and volatile organic compounds. Intense sunlight turns these noxious gases into higher than normal smog levels. In turn, the smog prevents the sun from heating the atmosphere enough to penetrate the inversion layer that blankets the city.” 4.2 EFFECTS OF AIR POLLUTION The effects of air pollution are, or can be, very harmful. Consider, for example, the health impacts of this pollution on Mexico City. The Mexico City case study notes that, in 1998, the city’s air earned Mexico the reputation of “the most dangerous city in the world for children.” Air pollution alone cannot be blamed for the illnesses people face, because, as the authors of “Air Pollution: Indian Scenario” state, “Disease and malformation caused by air pollution is not a natural occurrence to be overlooked because if people themselves try to help reducing the pollutants, then health risk can be controlled.” 5. RECOMMENDATIONS FOR AIR POLLUTION Finding solutions to pollution is always a big problem. Several attempts are being made worldwide at personal, industrial and governmental levels to curb the intensity at which air pollution is rising and regain a balance as far as the proportions of the foundation gases are concerned. This is a direct attempt at slowing global warming. This is why preventive interventions are always a better way of controlling air pollution. We are seeing a series of innovations and experiments aimed at alternate and unconventional options to reduce pollutants. Air pollution is one of the larger mirrors of man’s follies, and a challenge we need to overcome to see a tomorrow. In many big cities, monitoring equipment has been installed at many points in the city. Authorities read the instruments regularly to check the quality of air. A lesson can be taken from the case study of Mexico City, where the government introduced air quality improvement programs, PICCA and PROAIRE, that include, among other measures, a rotating one-weekday ban on private car use. On days of high pollution, the ban extends to every second day and some manufacturing activities are curtailed. In addition, car owners must have their vehicles certified every six months. The following are some recommended actions to reduce air pollution. 5.1 Government (or Community) Level Prevention
5.1.1 Emphasis on clean energy resources Governments throughout the world have already taken action against air pollution by introducing green energy. Some governments are investing in clean energy technologies like wind energy and solar energy, as well as other renewable energy, to minimize the burning of fossil fuels, which causes heavy air pollution. Governments are also forcing companies to be more responsible with their manufacturing activities, so that even though they still cause pollution, it is far more controlled. 5.1.2 Use energy-efficient devices CFL lights consume less electricity than their counterparts. They live longer, consume less electricity, lower electricity bills and also help to reduce pollution by consuming less energy. Companies are also building more energy-efficient cars, which pollute less than before. 5.2 Individual Level Prevention 5.2.1 Use public modes of transportation Encourage your family to use the bus, train or bike when commuting. Also, try to make use of carpooling. If you and your colleagues come from the same locality and have the same timings, you can explore this option to save energy and money. If we all do this, there will be fewer cars on the road and fewer fumes. 5.2.2 Conserve energy Use energy (light, water, boiler, kettle and firewood) wisely. This is because lots of fossil fuels are burned to generate electricity, and so if we can cut down our use, we will also cut down the amount of pollution we create. 5.2.3 Understand the concept of Reduce, Reuse and Recycle Recycle and re-use things. This will minimize the dependence on producing new things. Remember that manufacturing industries create a lot of pollution, so if we can re-use things like plastic shopping bags, clothing, paper and bottles, it can help. 6. BIBLIOGRAPHY REFERENCES Dr. Amandine Sings, MD, Dr. Vicar Guppy, MD, & Dr. Santos Ass, MD (2009). “Air Pollution: Indian Scenario.” The Pacific Journal of Science and Technology, Volume 10, Number 2, November 2009. Accessed 9 October 2013. John Fletcher (2011). Air Pollution. Geography. Michelle Hibler (2003). Health: An Ecosystem Approach. www.idrc.ca/ecohealth. Accessed 9 October 2013. Website references: http://en.m.wikipedia.org/wiki/Kyoto_Protocol http://en.m.wikipedia.org/wiki/Air_pollution http://environment.nationalgeographic.com/environment/global-warming/pollution-overview http://www.eschooltoday.com/pollution/air-pollution/air-pollution-prevention.html http://www.mass.gov/dep/air/aq/env_effects.htm http://www.conserve-energy-future.com/causes-effects-solutions-of-air-pollution.php 7. APPENDICES CASE STUDY: MEXICO CITY Taking Control of Air Pollution in Mexico City A clean air drive targets health improvements and health care savings Located in a pollutant-trapping valley, Mexico City, one of the world’s largest cities, has had limited success in battling suffocating air pollution. A new understanding of the health impacts of this pollution, and of people’s role in both the problem and the solution, could lead to better targeted, more effective air improvement programs. Famous for its size, its history, and the warmth of its people, Mexico City is also infamous for its air pollution. In 1992, the United Nations described the city’s air as the most polluted on the planet. Six years later, that air earned Mexico the reputation of “the most dangerous city in the world for children.” This is a reputation Mexico has been working hard to improve.
But despite more than a decade of stringent pollution-control measures, a dull haze hangs over the city most days, obscuring the stunning snow-capped mountains that frame the city and endangering the health of its inhabitants. Many factors have contributed to this situation: industrial growth, a population boom (from 3 million in 1950 to some 20 million today), and the proliferation of vehicles. More than 3.5 million vehicles, 30% of them more than 20 years old, now ply the city streets. Geography conspires with human activity to produce a poisonous scenario. Located in the crater of an extinct volcano, Mexico City is about 2,240 meters above sea level. The lower atmospheric oxygen levels at this altitude cause incomplete fuel combustion in engines and higher emissions of carbon monoxide, hydrocarbons, and volatile organic compounds. Intense sunlight turns these noxious gases into higher than normal smog levels. In turn, the smog prevents the sun from heating the atmosphere enough to penetrate the inversion layer that blankets the city. Solving this problem has been a priority of the Metropolitan Environmental Commission, which is integrated with local and federal authorities. Recent efforts to curb emissions have been relatively successful. In the 1990s, for instance, the government introduced air quality improvement programs, PICCA and PROAIRE, that include, among other measures, a rotating one-weekday ban on private car use. On days of high pollution, the ban extends to every second day, some manufacturing activities are curtailed, and car owners must have their vehicles certified every six months. But if lead, carbon monoxide, and sulfur dioxide are now under control, pollution levels of other contaminants are still far above air quality standards. A closer look at pollution When PROAIRE concluded in 2000, environmental authorities undertook a longer, more ambitious air quality improvement program: PROAIRE 2002-2010. To develop the program, however, accurate measures were needed to determine how improving air quality would improve health and reduce health expenditures. A number of questions also needed to be answered about the relationship between the city’s inhabitants and air pollution: How do people perceive pollution? How does it affect them? What are they willing to do or pay for cleaner air? How can they be motivated to help solve it? The Mexico City government set out to answer these questions, with support from Canada’s International Development Research Centre (IDRC) and the Netherlands Trust Fund through the World Bank and the Pan American Health Organization. If the first question was fairly simple (what is the economic value of benefits reaped from reducing air pollution?), answering it was not. “No one really knows, or understands, the relationship between environmental contaminants and the health of inhabitants,” says biologist Roberto Muñoz Cruz, subdirector of information and analysis at Mexico City’s atmospheric monitoring system, part of the Secretaría del Medio Ambiente (department of the environment). The Secretaría coordinated the project in collaboration with the Centro Nacional de Salud Ambiental (national centre for environmental health), the nongovernmental organization GREECE (a study group on relations between the environment and behavior), and the Instituto de la Mujer del Distrito Federal (Women’s Institute of Mexico City). The researchers focused on health hazards posed by the most serious pollutants in Mexico: ozone, produced when nitrogen oxides and volatile organic compounds react in sunlight, and PM10,
respirable particulate matter less than 10 microns (0.01 millimeters) in diameter. PM10 comes from various sources, including road construction and dust, smoke-belching diesel trucks and buses, forest fires, and burning refuse in the open air. Both pollutants can irritate eyes, cause or aggravate a range of respiratory and cardiovascular ailments, and lead to premature death. “It’s not air pollution that kills people,” explains Muñoz, “but some people die sooner than they would otherwise.” More than 20 researchers from eight academic, governmental, donor, and nongovernmental organizations in Mexico, the Netherlands, and the USA contributed to compiling and analyzing the findings of national and international studies of the health effects of ozone and PM10. Surveys were also carried out “to determine people’s perceptions of the pollution problem,” says Muñoz. A population exposure model was then developed, using data from Mexico City’s sophisticated air-monitoring network. The study estimated that pollution levels in 2010 will be much the same as in the late 1990s, when ozone levels exceeded standards on almost 90% of days and PM10 on 30% to 50% of days, explains Dr. Víctor Borja Aburto, former Director of the Centro Nacional de Salud Ambiental at the Secretaría de Salud and now coordinator of workplace health, who led the project’s first module. Tangible benefits Earlier efforts to assess the costs of pollution in Mexico City had focused on direct medical costs such as medicines and hospital visits and on productivity losses, income lost by those who were sick. This study, however, sought to provide a more comprehensive picture. Air quality and exposure modellers, epidemiologists and public health specialists, economists and statisticians assessed a wide range of health benefits and “savings,” including people’s willingness to pay for better health and a potentially longer life. Communications and social participation specialists worked to understand people’s perceptions and get at indirect costs because, as Muñoz explains, “not only do people who get sick lose days from work, but also mothers stay home to take care of the children who get sick.” It was an important transdisciplinary experience, says Muñoz. Bringing together different disciplines to provide a holistic picture, an approach central to ecohealth research, proved very successful. And a strong connection was forged between the institutions and between government and research institutes. The research concluded that reducing PM10 would yield the greatest health and financial benefits: each microgram per cubic meter reduction would be worth about US$100 million a year. Reducing both ozone and PM10 by just 10% would result in average “savings” of US$760 million a year. In human terms that would translate into, for example, 33,287 fewer emergency room visits for respiratory distress in 2010 and 88 fewer hospital admissions for the same problem. In addition, says Muñoz, it would lead to 266 fewer infant deaths a year, an important consideration not valuated in earlier studies. “Clearly this justifies relatively high expenditures to further reduce polluting emissions,” Muñoz says. Much to the project’s credit, this detailed information provided the scientific underpinning of PROAIRE 2002-2010, which calls for close to US$15 billion of public and private investments in air quality improvement projects. The information has also been made available to the international community through a number of publications. What do Mexicans think?
If people largely cause air pollution, they must also be involved in cleaning it up. Certainly the original PROAIRE program recognized this and included various formal and informal programs to inform people about the problem and invite them to action. “It recognized that a cultural change was needed to modify the society-city-environment relation,” says Muñoz. But in a city as large and as socially and culturally diverse as Mexico, that proved no easy task. The research team surveyed close to 4,000 residents in all sectors or delegations of the city. Completed questionnaires showed that close to 30% believe the government’s motives in seeking to reduce air pollution are self-serving. More than 30% also think that the government’s online air quality reports are false. (http://148.243.232.103/amicable/) In fact, says Muñoz, “We found that most people don’t even consult the official information.”
It is very tempting to use the latest computer wizardry to represent information and develop computer-enhanced learning materials. However, the instructional design of these systems should be based on a careful examination and analysis of the many factors, both human and technical, relating to visual learning. When is sound more meaningful than a picture? How much text is too much? Does the graphic overwhelm the screen? For students, multimedia allows them to test all of the skills gained in every subject area. Students must be able to select appropriate multimedia tools and apply them to the learning task within the learning environment in order for effective learning to take place. A multimedia learning environment involves a number of components or elements in order to enable learning to take place. Hardware and software are only part of the requirement. As mentioned earlier, multimedia learning integrates five types of media to provide flexibility in expressing the creativity of a student and in exchanging ideas (see Figure 1). Out of all of the elements, text has the most impact on the quality of the multimedia interaction. Generally, text provides the important information. Text acts as the keystone tying all of the other media elements together. It is well-written text that makes a multimedia communication wonderful. Sound is used to provide emphasis or highlight a transition from one page to another. Sound synchronized to screen display enables teachers to present lots of information at once. This approach is used in a variety of ways, all based on visual display of a complex image paired with a spoken explanation (for example, art – pictures are ‘glossed’ by the voiceover; or math – a proof fills the screen while the spoken explanation plays in the background). Sound used creatively becomes a stimulus to the imagination; used inappropriately it becomes a hindrance or an annoyance. For instance, a script, some still images and a sound track allow students to utilize their own power of imagination without being biased and influenced by the inappropriate use of video footage. A great advantage is that the sound file can be stopped and started very easily. The representation of information by using the visualization capabilities of video can be immediate and powerful. While this is not in doubt, it is the ability to choose how we view, and interact with, the content of digital video that provides new and exciting possibilities for the use of digital video in education. There are many instances where students, studying particular processes, may find themselves faced with a scenario that seems highly complex when conveyed in purely text form, or by the use of diagrams and images. In such situations the representational qualities of video help in placing a theoretical concept into context. Video can stimulate interest if it is relevant to the rest of the information on the page, and is not ‘overdone’. Video can be used to give examples of phenomena or issues referred to in the text. For example, while students are reading notes about a particular issue, a video showing a short clip of the author/teacher emphasizing the key points can be inserted at a key moment; alternatively, the video clips can be used to tell readers what to do next. On the other hand, it is unlikely that video can completely replace the face-to-face lecture: rather, video needs to be used to supplement textual information.
One of the most compelling justifications for video may be its dramatic ability to elicit an emotional response from an individual. Such a reaction can provide a strong motivational incentive to choose and persist in a task. The use of video is appropriate to convey information about environments that can be either dangerous or too costly to consider, or recreate, in real life. For example: video images used to demonstrate particular chemical reactions without exposing students to highly volatile chemicals, or medical education, where real-life situations can be better understood via video. Animation is used to show changes in state over time, or to present information slowly to students so they have time to assimilate it in smaller chunks. Animations, when combined with user input, enable students to view different versions of change over time depending on different variables. Animations are primarily used to demonstrate an idea or illustrate a concept. Video is usually taken from life, whereas animations are based on drawings. There are two types of animation: cel-based and object-based. Cel-based animation consists of multiple drawings, each one a little different from the others. When shown in rapid sequence, for example, the operation of an engine’s crankshaft, the drawings appear to move. Object-based animation (also called slide or path animation) simply moves an object across a screen. The object itself does not change. Students can use object animation to illustrate a point – imagine a battle map of Gettysburg where troop movement is represented by sliding arrows. Graphics provide the most creative possibilities for a learning session. They can be photographs, drawings, graphs from a spreadsheet, pictures from CD-ROM, or something pulled from the Internet. With a scanner, hand-drawn work can be included. Standing commented that “the capacity of recognition memory for pictures is almost limitless”. The reason for this is that images make use of a massive range of cortical skills: color, form, line, dimension, texture, visual rhythm, and especially imagination. Read more: Multimedia in Education - Introduction, The Elements of, Educational Requirements, Classroom Architecture and Resources, Concerns http://encyclopedia.jrank.org/articles/pages/6821/Multimedia-in-Education.html
Climate changes in East Africa The average global surface temperature has warmed 0.8°C in the past century and 0.6°C in the past three decades (Hansen et al., 2006), in large part because of human activities (IPCC, 2007). The Intergovernmental Panel on Climate Change (IPCC) has projected that if greenhouse gas emissions, the leading cause of climate change, continue to rise, mean global temperatures will increase by between 1.4 and 5.8°C by the end of the 21st century (IPCC, 2007). Climate change impacts have the potential to undermine, and even undo, progress made in improving the socio-economic well-being of many African countries. The negative impacts associated with climate change are also compounded by many factors, including widespread poverty, human diseases, and high population density, which is estimated to double the demand for food, water, and livestock forage within the next 30 years. The countries of Eastern Africa are prone to extreme climatic events such as droughts and floods. In the past, these events have had severe negative impacts on key socioeconomic sectors of the economies of most countries in the subregion. In the late seventies and eighties, droughts caused widespread famine and economic hardships in many countries of the subregion. There is evidence that future climate change may lead to a change in the frequency or severity of such extreme weather events, potentially worsening these impacts. In addition, future climate change will lead to increases in average mean temperature and sea level rise, and changes in annual and seasonal rainfall. These will have potentially important effects across all economic and social sectors in the region, possibly affecting agricultural production, health status, water availability, energy use, biodiversity and ecosystem services (including tourism). Any resulting impacts are likely to have a strong distributional pattern and amplify inequities in health status and access to resources, as vulnerability is exacerbated by existing developmental challenges, and because many groups (e.g. rural livelihoods) will have low adaptive capacity. East Africa is characterised by widely diverse climates ranging from desert to forest over relatively small areas. Rainfall seasonality is complex, changing within tens of kilometres. Altitude is also an important contributing factor. The annual cycle of East African rainfall is bimodal, with wet seasons from March to May and October to December. The Long Rains (March to May) contribute more than 70% to the annual rainfall and the Short Rains less than 20%. Much of the interannual variability comes from the Short Rains (coefficient of variability = 74%, compared with 35% for the Long Rains) (WWF, 2006). Regional historic climate trends Results from recent work from stations in Kenya and Tanzania indicate that since 1905, and even recently, the trend of daily maximum temperature is not significantly different from zero. However, daily minimum temperature results suggest an accelerating temperature rise (Christy et al., 2009). A further study looking at day and night temperatures concluded that the northern part of the East African region generally indicated nighttime warming and daytime cooling in recent years. The trend patterns were, however, reversed at coastal and lake areas. There were thus large geographical and temporal variations in the observed trends, with some neighbouring locations at times indicating opposite trends.
A significant feature of the temperature variability patterns was the recurrence of extreme values. Such recurrences were significantly correlated with the patterns of convective activity, especially the El Niño-Southern Oscillation (ENSO), cloudiness, and above/below normal rainfall. During recent decades, eastern Africa has been experiencing an intensifying dipole rainfall pattern on the decadal time-scale. The dipole is characterised by increasing rainfall over the northern sector and declining amounts over the southern sector (Schreck and Semazzi, 2004).

East Africa has suffered both excessive and deficient rainfall in recent years (Webster et al., 1999; Hastenrath et al., 2007). In particular, the frequency of anomalously strong rainfall causing floods has increased. Shongwe, van Oldenborgh and van Aalst (2009) report that their analysis of data from the international disaster database EM-DAT shows an increase in the number of reported hydrometeorological disasters in the region, from an average of fewer than 3 events per year in the 1980s to over 7 events per year in the 1990s and 10 events per year from 2000 to 2006, with a particular increase in floods. In the period 2000-2006 these disasters affected on average almost two million people per year.

Historic context of climate extremes in East Africa:
- Large variability in rainfall, with extreme events in the form of droughts and floods.
- Droughts in the last 20 years: 1983/84, 1991/92, 1995/96, 1999/2001 and 2004/2005 (which led to famine).
- The El Niño-related floods of 1997/98 were a very severe event, enhanced by an unusual pattern of SSTs in the Indian Ocean (IPCC, 2007).
- The La Niña-related drought of 1999/2001. The El Niño of 1997/98 and the La Niña of 1999/2000 were the most severe in 50 years.

Regional climate variability

Recent research suggests that warming sea surface temperatures, especially in the southwest Indian Ocean, in addition to interannual climate variability (i.e. the El Niño/Southern Oscillation, ENSO), may play a key role in East African rainfall and may be linked to the change in rainfall across some parts of equatorial-subtropical East Africa (Cane et al., 1986; Plisnier et al., 2000; Rowe, 2001). Warm sea surface temperatures are thought to be responsible for the droughts in equatorial and subtropical Eastern Africa during the 1980s to the 2000s (Funk et al., 2005). According to the U.N. Food and Agriculture Organization (FAO, 2004), the number of African food crises per year tripled from the 1980s to the 2000s. Drought-diminished water supplies reduce crop productivity and have resulted in widespread famine in East Africa.

El Niño is the most important factor in the interannual variability of precipitation in East Africa; the Indian and Atlantic Oceans also play a role, and local geographic factors may complicate the impact of large-scale factors. There have been relatively few recent studies of rainfall variability in East Africa compared with areas such as the Sahel, and even fewer studies of Indian Ocean variability and its impact on the climate. Interannual variability of rainfall is remarkably coherent within the region. The Short Rains, in particular, are characterised by greater spatial coherence and are linked more to large-scale than to regional factors.
As a result, the Short Rains are more predictable at seasonal time scales than the Long Rains. Work on East African climate is focused on rainfall variability but is thinly spread amongst mechanisms of mean climate control, circulation relationships to rainfall variability, the role of ocean patterns (including ENSO and the Indian Ocean Dipole (IOD)) in rainfall variability, the representation of rainfall in regional and global models, and the predictability of rainfall. There has also been research on East African lake variability; for example, the 1961-1962 rains caused rapid rises in the levels of East African lakes. Lake Victoria rose 2 m in little more than a year (Flohn and Nicholson, 1980). This was not an ENSO year, but exceedingly high sea-surface temperatures (SSTs) occurred in the nearby Indian Ocean as well as the Atlantic.

ENSO comprises two opposite extremes, El Niño and La Niña. El Niño is associated with anomalously wet conditions during the Short Rains, and some El Niño events, such as 1997, with extreme flooding. The IOD is regarded as a separate pattern of ocean-based variability, although IOD events have occurred together with ENSO, leading to extreme conditions over East Africa (e.g. 1982, 1994, 1997; see Figure 2). La Niña conditions are associated with unusually dry conditions over East Africa during the Short Rains (Figure 3), although the relationship is less reliable than that for El Niño (taken from Downing et al., Final Report Appendices, Kenya: Climate Screening and Information Exchange. AEA Technology plc, UK).

Figure 2: Satellite-derived Short Rains anomalies in mm/day (i.e. differences from the long-term mean) for October-December 1982, 1994 and 1997 (El Niño) (taken from Downing et al., Final Report Appendices, Kenya: Climate Screening and Information Exchange. AEA Technology plc, UK).

Figure 3: Satellite-derived Short Rains anomalies in mm/day (i.e. differences from the long-term mean) for October-December 1988, 1998 and 2000 (La Niña) (taken from Downing et al., Final Report Appendices, Kenya: Climate Screening and Information Exchange. AEA Technology plc, UK).

Regional scenario projections

Although there have been studies of Global Climate Model (GCM)-simulated climate change for several regions of Africa, the downscaling of GCM outputs to finer spatial and temporal scales has received relatively little attention in Africa. The result of one attempt to use a regional model for East Africa is presented below, following the results of the AR4 climate change scenarios. This work was done as part of the DFID-funded work on climate screening for Kenya (Downing et al., 2007). For the application of the Special Report on Emissions Scenarios (SRES) scenarios using the AR4 GCMs there are limitations, which apply more to uncertainties in rainfall than in temperature projections. The models have not been closely evaluated over the Kenya region, so each of the 8 GCMs used is given equal weight. Model resolution is coarse (c. 200 km) and the data cannot be applied at the sub-regional scale (e.g. Mount Kenya); they are best applied over broad regions.

Results from AR4 climate change scenarios

Regardless of the SRES scenario, decade, season or model, all the data point to a warmer future; no simulation shows temperatures cooler than present. The A2 scenario produces warming of around 4°C by the end of the century in both seasons, while warming of one degree or less is more typical by 2020. Almost all the simulations show wetter conditions in October to December, even by 2020.
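The anomalies plotted in Figures 2 and 3 are, as the captions state, simply differences from the long-term mean. A minimal Python sketch of that computation is given below; the data here are synthetic stand-ins, not the actual satellite record:

    import numpy as np

    # Hypothetical October-December (Short Rains) mean rainfall in mm/day,
    # one value per year; a real analysis would use satellite or gauge data.
    years = np.arange(1979, 2001)
    rng = np.random.default_rng(0)
    ond_rain = rng.gamma(shape=4.0, scale=0.8, size=years.size)

    # Long-term mean (the climatology) over the reference period.
    climatology = ond_rain.mean()

    # Anomaly = seasonal value minus the long-term mean, as in the captions;
    # positive values mark anomalously wet Short Rains, negative values dry.
    anomalies = ond_rain - climatology
    for year, anomaly in zip(years, anomalies):
        print(f"{year}: {anomaly:+.2f} mm/day")

Flagging the years whose anomaly exceeds, say, one standard deviation would pick out the El Niño-type wet extremes and La Niña-type dry extremes in the same way the figures do.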
Wetter conditions in Kenya, especially in the Short Rains and especially in northern Kenya (where rainfall increases by 40% by the end of the century), are likely. Analysis of the northern Kenya region shows that the increase in seasonal total rainfall in the Short Rains occurs by means of a trend of increasing rainfall extremes which, in models like MPI, is evident from the outset of the 21st century. At the same time the droughts remain as extreme as at present, even increasing in intensity through the 21st century. There is little change in the timing of the seasonal variations in either rainfall or temperature over future decades. In brief, climate model experiments using AR4 climate scenarios (IPCC, 2007) based on the gridded model data show that:
- East African climate is likely to become wetter, particularly in the Short Rains (October to December) and particularly in northern Kenya, in the forthcoming decades.
- East Africa will almost certainly become warmer than present in all seasons in the forthcoming decades.
- Changes in rainfall seasonality over forthcoming decades are unlikely.
- A trend towards more extremely wet seasons is likely for the Short Rains, particularly in northern Kenya, in the forthcoming decades.
- Droughts are likely to continue (notwithstanding the generally wetter conditions), particularly in northern Kenya, in the forthcoming decades. In many model simulations, the drought events every 7 years or so become more extreme than at present.
- The wetting component evident in observed Kenyan rainfall may well be a forerunner of the longer-term climate change.

Regional climate change modelling (RegCM3)

Regional climate simulation for the East Africa region has so far been confined to one model and one emission scenario (A2), so the results are very uncertain (Downing et al., 2006). Improving the certainty would require multiple regional models and emission scenarios, a modelling effort which amounts to years more work. Ideally, future regional downscaling of the global climate projections for Kenya should be extended to other IPCC GCMs, to gain a better sense of the uncertainty associated with the regional climate model projections. The regional climate projections indicate that sharp mountain slopes, such as those of the Great Rift Valley in Kenya, can greatly affect local climate. The IPCC GCMs are based on a large grid resolution (200 x 200 km) and do not include modifications for altitude. GCM projections are valuable on the large scale, as long as they are interpreted with caution, particularly where large contrasts in altitude exist over short distances, as in Kenya.

Results from the North Carolina State University enhanced version of the RegCM3 regional model (Anyah and Semazzi, 2006), which was run for both a control and one climate change (A2 scenario) simulation, have been analysed for Kenya. A domain resolution of 20 km forms the basis of these experiments. This class of models offers much higher resolution than the GCMs and, as a result, is of relevance to the complex terrain which characterises Kenya. The regional model was forced by global fields from the FvGCM model.

Climate analysis using the regional model indicates that Kenya is likely to experience the following climate changes between the late 2020s and 2100:
- Average annual temperature will rise by between 1°C and 5°C, typically 1°C by the 2020s and 4°C by 2100.
- Climate is likely to become wetter in both rainy seasons, but particularly in the Short Rains (October to December).
Global Climate Models predict increases in northern Kenya (rainfall increases by 40% by the end of the century), whilst the regional model suggests that there may be greater rainfall in the west.
- The rainfall seasonality, i.e. the Short and Long Rains, is likely to remain the same.
- Rainfall events during the wet seasons will become more extreme by 2100; consequently flood events are likely to increase in frequency and severity.
- Droughts are likely to occur with similar frequency as at present, but to increase in severity; this is linked to the increase in temperature.
- The Intergovernmental Panel on Climate Change (IPCC) predicts an 18 to 59 cm rise in sea level globally by 2100. One study suggests that 17% of Mombasa's area could be submerged by a sea-level rise of 30 cm (Orindi and Adwera, 2008).

Extreme events projections

The IPCC Fourth Assessment reports that GCMs project that increasing atmospheric concentrations of greenhouse gases will result in changes in daily, seasonal, interannual, and decadal variability. There is projected to be a decrease in diurnal temperature range in many areas, with nighttime lows increasing more than daytime highs. Current projections show little change or a small increase in amplitude for El Niño events over the next 100 years. Many models show a more El Niño-like mean response in the tropical Pacific, with the central and eastern equatorial Pacific sea surface temperatures projected to warm more than the western equatorial Pacific and with a corresponding mean eastward shift of precipitation. Even with little or no change in El Niño strength, global warming is likely to lead to greater extremes of drying and heavy rainfall and to increase the risk of the droughts and floods that accompany El Niño events in many different regions. There is no clear agreement between models concerning changes in the frequency or structure of other naturally occurring atmosphere-ocean circulation patterns such as the North Atlantic Oscillation (NAO).

Philip and van Oldenborgh (2009) used climate model simulations from the fourth IPCC Assessment Report (AR4) to investigate changes in ENSO events. The models that simulate El Niño most realistically on average do not show changes in the mean state that resemble the ENSO pattern. The projected changes in amplitude are similar to the observed variability over the last 150 years.

Over much of Kenya, Uganda, Rwanda, Burundi and southern Somalia there are indications of an upward trend in rainfall under global warming. Wet extremes (defined as high rainfall events occurring once every 10 years) are projected to increase during both the September to December (SOND) rainy season and the March to May (MAM) rainy season, locally referred to as the short and long rains respectively (Shongwe, van Oldenborgh and van Aalst, 2009). In general, a positive shift in the whole rainfall distribution is simulated by the models over most of East Africa during both rainy seasons. However, less confidence is placed on the MAM simulations, as the signal-to-noise ratios in the model predictions during this season are relatively low.

References

Anyah, R.O. and Semazzi, F.H.M. (2006): Variability of East African rainfall based on multiyear REGCM3 simulations. International Journal of Climatology, 27, 357-371.
Christy, J.R., Norris, W.B. and McNider, R.T. (2009): Surface temperature variations in East Africa and possible causes. J. Climate, in press.
Cubasch, U., Meehl, G.A., Boer, G.J., Stouffer, R.J., Dix, M., Noda, A., Senior, C.A., Raper, S. and Yap, K.S. (2001): Projections of future climate change. In: Climate Change 2001: The Scientific Basis. Cambridge University Press, 525-582.
Downing, C., Preston, F., Parusheva, D., Horrocks, L., Edberg, O., Semazzi, F., Washington, R., Muteti, M., Watkiss, P. and Nyangena, W. (2008): Final Report – Appendices, Kenya: Climate Screening and Information Exchange. AEA Technology plc, UK.
Hewitson, B.C. and Crane, R.G. (2006): Consensus between GCM climate change projections with empirical downscaling. International Journal of Climatology, 26 (10), 1315-1337.
IPCC (2001): Climate Change 2001: Synthesis Report. Cambridge University Press, Cambridge.
IPCC (2001a): Climate Change 2001: Impacts, Adaptation and Vulnerability. Cambridge University Press, Cambridge.
IPCC (2007): Climate Change 2007: The Physical Science Basis. IPCC Secretariat, Geneva, Switzerland.
Ogallo, L.A. (1989): The teleconnections between the global sea surface temperatures and seasonal rainfall over East Africa. J. Japan Met. Soc., 66 (6), 807-822.
Ogallo, L.A. (1993): Dynamics of climate change over Eastern Africa. Proc. Indian Academy of Sciences, 102 (1), 203-217.
Philip, S.Y. and van Oldenborgh, G.J. (2009): Shifts in ENSO coupling processes under global warming. J. Climate, 22 (14), 4014-4028.
Schreck, C.J. and Semazzi, F.H.M. (2004): Variability of the recent climate of eastern Africa. International Journal of Climatology, 24 (6), 681-701.
Shongwe, M.E., van Oldenborgh, G.J. and van Aalst, M. (2009): Projected changes in mean and extreme precipitation in Africa under global warming, Part II: East Africa. Nairobi, Kenya, 56 pp.
WWF – World Wide Fund For Nature (2006): Climate Change Impacts on East Africa. A Review of the Scientific Literature.
History of Korea

The Korean Peninsula was inhabited from the Lower Paleolithic, about 400,000-700,000 years ago. The earliest known Korean pottery dates to around 8000 BC, and the Neolithic period began after 6000 BC, followed by the Bronze Age by 800 BC and the Iron Age around 400 BC. According to the mythic origin story recounted in the Samguk Yusa, the kingdom of Gojoseon (Old Joseon) was founded in northern Korea and Manchuria in 2333 BC. Gija Joseon was said to have been founded in the 12th century BC, and its existence and role have been controversial in the modern era. The Jin state was formed in southern Korea in the 3rd century BC. In the 2nd century BC, Gija Joseon was replaced by Wiman Joseon, which fell to the Han dynasty of China near the end of the century. This resulted in the fall of Gojoseon and led to a succession of warring states, the Proto–Three Kingdoms period that spanned the later Iron Age. From the 1st century BC, Goguryeo, Baekje, and Silla grew to control the peninsula and Manchuria as the Three Kingdoms (57 BC – 668 AD), until the unification by Silla in 676. In 698, Dae Jo-yeong established Balhae in the old territories of Goguryeo, which led to the North–South States Period (698–926). In the late 9th century, Silla was divided into the Later Three Kingdoms (892–936), which ended with the unification under Wang Geon's Goryeo Dynasty. Meanwhile, Balhae fell after an invasion by the Khitan Liao Dynasty, and its refugees, including the last crown prince, emigrated to Goryeo. During the Goryeo period, laws were codified, a civil service system was introduced, and a culture influenced by Buddhism flourished. In 1392, Yi Seong-gye established the Joseon Dynasty (1392–1910) after a coup in 1388. King Sejong the Great (1418–1450) implemented numerous administrative, social, and economic reforms, established royal authority in the early years of the dynasty, and promulgated Hangul, the Korean alphabet.

From the late 16th century, the Joseon dynasty faced foreign invasions and internal power struggles and rebellions, and it declined rapidly in the late 19th century. In 1897, the Korean Empire (1897–1910) succeeded the Joseon Dynasty. However, Imperial Japan forced it to sign a protectorate treaty and annexed the Korean Empire in 1910, though all the treaties involved were later confirmed to be null and void. Korean resistance was manifested in the widespread nonviolent March 1st Movement of 1919. Thereafter the resistance movements, coordinated by the Provisional Government of the Republic of Korea in exile, were largely active in neighboring Manchuria, China and Siberia. After liberation in 1945, the partition of Korea created the modern two states of North and South Korea. In 1948, new governments were established: the democratic South Korea (Republic of Korea) and the communist North Korea (Democratic People's Republic of Korea), divided at the 38th parallel. The unresolved tensions of the division surfaced in the Korean War of 1950. Although there was a cease-fire in 1953, the two nations officially remain at war because a peace treaty was never signed. Both states were accepted into the United Nations in 1991.
Prehistory, Gojoseon, and the Jin State

No fossil proven to be Homo erectus has been found in the Korean Peninsula, though a candidate has been reported. Tool-making artifacts from the Palaeolithic period have been found in present-day North Hamgyong, South P'yongan, Gyeonggi, and North and South Chungcheong Provinces of Korea, which dates the Paleolithic Age to half a million years ago, though it may have begun as late as 400,000 years ago or as early as 600,000-700,000 years ago. The predominant view is that the Korean people of today are not the ethnic descendants of these Paleolithic inhabitants.

Jeulmun Pottery Period

The earliest known Korean pottery dates back to around 8000 BC, and evidence of the Mesolithic Pit-Comb Ware culture, or Yungimun pottery, is found throughout the peninsula. An example of a Yungimun-era site is in Jeju-do. Jeulmun, or comb-pattern, pottery is found after 7000 BC, and pottery with comb patterns over the whole vessel is found concentrated at sites in west-central Korea, where a number of settlements such as Amsa-dong existed. Jeulmun pottery bears basic design and form similarities to the pottery of Mongolia, the Amur and Sungari river basins of Manchuria, and the Jōmon culture in Japan.

Mumun Pottery Period

People in southern Korea adopted intensive dry-field and paddy-field agriculture with a multitude of crops in the Early Mumun Period (1500–850 BC). The first societies led by big-men or chiefs emerged in the Middle Mumun (850–550 BC), and the first ostentatious elite burials can be traced to the Late Mumun (c. 550–300 BC). Bronze production began in the Middle Mumun and became increasingly important in ceremonial and political society after 700 BC. Archaeological evidence from Songguk-ri, Daepyeong, Igeum-dong, and elsewhere indicates that the Mumun era was the first in which chiefdoms rose, expanded, and collapsed. The increasing presence of long-distance trade, an increase in local conflicts, and the introduction of bronze and iron metallurgy are trends marking the end of the Mumun around 300 BC.

Gojoseon and Jin State

The founding legend of Gojoseon, recorded in the Samguk Yusa (1281) and other medieval Korean books, states that the country was established in 2333 BC by Dangun, said to be descended from heaven. While no evidence has been found to support whatever facts may lie beneath this legend, the account has played an important role in developing Korean national identity. In the 12th century BC Gija, a prince from the Shang Dynasty of China, purportedly founded Gija Joseon. However, due to contradicting historical and archaeological evidence, its existence was challenged in the 20th century, and it no longer forms part of the mainstream understanding of this period. In 194 BC, King Jun fled to the Jin state after a coup by Wiman, who founded Wiman Joseon. Later, the Han Dynasty defeated Wiman Joseon and set up the Four Commanderies of Han in 108 BC. There was a significant Chinese presence in the northern parts of the Korean peninsula during the next century, and the Lelang Commandery persisted for about 400 years until it was conquered by Goguryeo. Around 300 BC, a state called Jin arose in the southern part of the Korean peninsula.
Very little is known about Jin, but it established relations with Han China and exported artifacts to the Yayoi of Japan. Around 100 BC, Jin evolved into the Samhan confederacies. Many smaller states sprang from the former territory of Gojoseon, such as Buyeo, Okjeo, Dongye, Goguryeo and Baekje. The Three Kingdoms refer to Goguryeo, Baekje, and Silla, although Buyeo and the Gaya confederacy existed into the 5th and 6th centuries respectively. The Jin state, which existed at the same time, was located in the south of the peninsula, but historical records are scarce and it is not easy to determine how much of a state it was.

The Bronze Age is often held to have begun around 900-800 BC in Korea, though the transition to the Bronze Age may have begun as far back as 2300 BC. Bronze daggers, mirrors, and weaponry have been found, as well as evidence of walled-town polities. Rice, red beans, soybeans and millet were cultivated, and rectangular pit-houses and increasingly larger dolmen burial sites are found throughout the peninsula. Contemporaneous records suggest that Gojoseon transitioned from a feudal federation of walled cities into a centralised kingdom at least before the 4th century BC. It is believed that by the 4th century BC, iron culture was developing in Korea as the warring states of China pushed refugees east and south.

Proto–Three Kingdoms

The Proto–Three Kingdoms period, sometimes called the Several States Period (열국시대), is the time before the rise of the Three Kingdoms of Korea, which included Goguryeo, Silla, and Baekje, and occurred after the fall of Gojoseon. This period consisted of numerous states that sprang up from the former territories of Gojoseon. Among these states, the largest and most influential were Dongbuyeo and Bukbuyeo.

Buyeo and other northern states

After the fall of Gojoseon, Buyeo arose in today's North Korea and southern Manchuria, lasting from about the 2nd century BC to 494. Its remnants were absorbed by Goguryeo in 494, and both Goguryeo and Baekje, two of the Three Kingdoms of Korea, considered themselves its successor. Although records are sparse and contradictory, it is thought that in 86 BC Dongbuyeo (East Buyeo) branched out, after which the original Buyeo is sometimes referred to as Bukbuyeo (North Buyeo). Jolbon Buyeo was the predecessor to Goguryeo, and in 538 Baekje renamed itself Nambuyeo (South Buyeo).

Okjeo was a tribal state located in the northern Korean Peninsula, established after the fall of Gojoseon, of which it had been a part. It never became a fully developed kingdom due to the intervention of its neighboring kingdoms. Okjeo became a tributary of Goguryeo and was eventually annexed by Goguryeo under Gwanggaeto Taewang in the 5th century. Dongye was another small kingdom situated in the northern Korean Peninsula. Dongye bordered Okjeo, and the two kingdoms faced the same fate of becoming tributaries of the growing empire of Goguryeo. Dongye was also a former part of Gojoseon before its fall.

Samhan (삼한, 三韓) refers to the three confederacies of Mahan, Jinhan, and Byeonhan, located in the southern region of the Korean Peninsula. The Samhan countries were strictly governed by law, with religion playing an important role. Mahan was the largest, consisting of 54 states, and assumed political, economic, and cultural dominance. Byeonhan and Jinhan each consisted of 12 states, bringing the total to 78 states within the Samhan.
The Samhan were eventually conquered by Baekje, Silla, and Gaya in the 4th century.

Three Kingdoms Era

Goguryeo was founded in 37 BC by Jumong (posthumously titled Dongmyeongseong). Later, King Taejo centralized the government. Goguryeo was the first Korean kingdom to adopt Buddhism as the state religion, in 372, during King Sosurim's reign. Goguryeo reached its zenith in the 5th century, when King Gwanggaeto the Great and his son, King Jangsu, expanded the country into almost all of Manchuria and part of Inner Mongolia, and took present-day Seoul from Baekje. Gwanggaeto and Jangsu subdued Baekje and Silla during their times. Goguryeo later fought off massive Chinese invasions in the Goguryeo–Sui War of 598–614, which contributed to the fall of the Sui, and continued to repel the Tang dynasty under several generals, including Yeon Gaesomun and Yang Manchun (see Goguryeo–Tang War). However, the numerous wars with China exhausted Goguryeo and left it in a weakened state. After internal power struggles, it was conquered by allied Silla–Tang forces in 668.

The Sanguo Zhi mentions Baekje as a member of the Mahan confederacy in the Han River basin (near present-day Seoul). It expanded into the southwest (the Chungcheong and Jeolla provinces) of the peninsula and became a significant political and military power. In the process, Baekje came into fierce confrontation with Goguryeo and the Chinese commanderies in the vicinity of its territorial ambitions. At its peak in the 4th century, in the reign of King Geunchogo, it had absorbed all of the Mahan states and subjugated most of the western Korean peninsula (including the modern provinces of Gyeonggi, Chungcheong, and Jeolla, as well as parts of Hwanghae and Gangwon) to a centralized government. Baekje acquired Chinese culture and technology through contacts with the Southern Dynasties during the expansion of its territory. Baekje played a fundamental role in transmitting cultural developments, such as Chinese characters, Buddhism, iron-making, advanced pottery, and ceremonial burial, into ancient Japan. Other aspects of culture were also transmitted when the Baekje court retreated to Japan after Baekje was conquered. Baekje was defeated by a coalition of Silla and Tang Dynasty forces in 660.

According to legend, the kingdom of Silla began with the unification of six chiefdoms of the Jinhan confederacy by Bak Hyeokgeose in 57 BC, in the southeastern area of Korea. Its territory included the present-day port city of Busan, and Silla later emerged as a sea power responsible for destroying Japanese pirates, especially during the Unified Silla period. Silla artifacts, including unique gold metalwork, show influences from the northern nomadic steppes, with less Chinese influence than is shown by Goguryeo and Baekje. Silla expanded rapidly by occupying the Nakdong River basin and uniting the city-states. By the 2nd century, Silla was a large state, occupying and influencing nearby city-states. Silla gained further power when it annexed the Gaya confederacy in 562. Silla often faced pressure from Goguryeo, Baekje and Japan, and at various times allied and warred with Baekje and Goguryeo. In 660, King Muyeol of Silla ordered his armies to attack Baekje. General Kim Yu-shin, aided by Tang forces, conquered Baekje. In 661, Silla and Tang moved on Goguryeo but were repelled. King Munmu, son of Muyeol and nephew of Kim, launched another campaign in 667, and Goguryeo fell in the following year.
Gaya was a confederacy of small kingdoms in the Nakdong River valley of southern Korea, growing out of the Byeonhan confederacy of the Samhan period. Gaya's plains were rich in iron, so the export of iron tools was possible and agriculture flourished. In its early centuries the confederacy was led by Geumgwan Gaya in the Gimhae region, but its leading power shifted to Daegaya in the Goryeong region after the 5th century.

North and South States

The term North–South States refers to Unified Silla and Balhae, during the time when Silla controlled the majority of the Korean peninsula while Balhae expanded into Manchuria. During this time, culture and technology advanced significantly, especially in Unified Silla.

Unified Silla (Later Silla)

After the unification wars, the Tang Dynasty established outposts in the former Goguryeo and began to establish and administer communities in Baekje. Silla attacked Tang forces in Baekje and northern Korea in 671. Tang then invaded Silla in 674, but Silla drove the Tang forces out of the peninsula by 676 to achieve unification of most of the Korean peninsula. Unified Silla was a time when Korean arts flourished dramatically and Buddhism became a large part of the culture. Buddhist monasteries such as the World Heritage Sites Bulguksa temple and Seokguram Grotto are examples of advanced Korean architecture and Buddhist influence. Other state-sponsored art and architecture from this period include Hwangnyongsa Temple and Bunhwangsa Temple. Silla began to experience political troubles in the late 8th century. This severely weakened Silla, and soon thereafter descendants of the former Baekje established Hubaekje. In the north, rebels revived Goguryeo, beginning the Later Three Kingdoms period.

Balhae was founded only thirty years after Goguryeo had fallen. It was founded in the northern part of the former lands of Goguryeo by Dae Joyeong, a former Goguryeo general. Balhae controlled the northern areas of the Korean Peninsula and much of Manchuria (though it did not occupy the Liaodong peninsula for much of its history), and expanded into present-day Russian Primorsky Krai. Balhae styled itself as Goguryeo's successor state. It also adopted the culture of the Tang Dynasty, such as the government structure and geopolitical system. In a time of relative peace and stability in the region, Balhae flourished, especially during the reigns of the third king, Mun (r. 737–793), and King Seon. However, Balhae was severely weakened by the 10th century, and the Khitan Liao Dynasty conquered it in 926. Tens of thousands of refugees, including Dae Gwang-hyeon, the last crown prince, emigrated to Goryeo.

No historical records from Balhae have survived, and the Liao left no histories of Balhae. While Goryeo absorbed some Balhae territory and received Balhae refugees, it compiled no known histories of Balhae either. The Samguk Sagi ("History of the Three Kingdoms"), for instance, includes passages on Balhae, but does not include a dynastic history of Balhae. The 18th-century Joseon dynasty historian Yu Deukgong advocated the proper study of Balhae as part of Korean history, and coined the term "North and South States Period" to refer to this era.

Later Three Kingdoms

The Later Three Kingdoms (892–936 CE) consisted of Silla, Hubaekje ("Later Baekje"), and Taebong (also known as Hugoguryeo, "Later Goguryeo"). The latter two, established as Unified Silla declined in power, claimed to be heirs to Baekje and Goguryeo.
Taebong (Later Goguryeo) was originally led by Gung Ye, the Buddhist monk who founded Later Goguryeo. Gung Ye was in fact a son of King Gyeongmun of Silla. When Gung Ye was born, there was an omen that he would be a cause of Silla's downfall, and Gyeongmun therefore ordered the newborn killed. Gung Ye's nurse, however, ran away with him and raised him. The unpopular Gung Ye was deposed by Wang Geon in 918. Wang Geon was popular with his people, and he decided to unite the entire peninsula under one government. He attacked Later Baekje in 934 and received the surrender of Silla in the following year. In 936, Goryeo conquered Hubaekje.

Goryeo

Goryeo was founded in 918 AD and became the ruling dynasty of Korea by 936. It was named "Goryeo" because Wang Geon deemed the nation a successor of Goguryeo. The dynasty lasted until 1392, and it is the source of the English name "Korea". During this period laws were codified and a civil service system was introduced. Buddhism flourished and spread throughout the peninsula. The development of celadon pottery flourished in the 12th and 13th centuries. The publication of the Tripitaka Koreana onto 81,258 wooden blocks and the invention of movable metal type printing attest to Goryeo's cultural achievements.

In 1231 the Mongols began their campaigns against Korea, and after 25 years of struggle Goryeo relented by signing a treaty with the Mongols. For the following 80 years Goryeo survived as a tributary ally of the Mongol-ruled Yuan Dynasty in China. In the 1350s, the Yuan Dynasty declined rapidly due to internal struggles, enabling King Gongmin to reform the Goryeo government. Gongmin had various problems that needed to be dealt with, including the removal of pro-Mongol aristocrats and military officials, the question of land holding, and quelling the growing animosity between the Buddhists and Confucian scholars. The Goryeo dynasty would last until 1392.

Joseon

In 1392, the general Yi Seong-gye, who had taken power in a coup in 1388 and served as the power behind the throne for two monarchs, established the Joseon Dynasty (1392–1897). Later known as Taejo, he named the dynasty in honor of the ancient kingdom of Gojoseon and grounded it in Confucian ideology. Taejo moved the capital to Hanyang (modern-day Seoul) and built Gyeongbokgung palace. In 1394 he adopted Neo-Confucianism as the country's official religion and pursued the creation of a strong bureaucratic state. His son and grandson, King Taejong and King Sejong the Great, implemented numerous administrative, social, and economic reforms and established royal authority in the early years of the dynasty.

Internal conflicts within the royal court, civil unrest and other political struggles plagued the nation in the years that followed, worsened by the Japanese invasions of Korea between 1592 and 1598. Toyotomi Hideyoshi marshalled his forces and tried to invade the Asian continent through Korea, but was eventually repelled by righteous armies, the Korean army and navy, and assistance from Ming China. This war also saw the rise of the career of Admiral Yi Sun-sin and his "turtle ship". As Joseon strove to rebuild itself after the war, it suffered invasions by the Manchus in 1627 and 1636. Differing views on foreign policy divided the royal court, and accessions to the throne during that period were decided only after much political conflict and struggle.
A period of peace followed in the 18th century, during the years of King Yeongjo and King Jeongjo, who led a new renaissance of the Joseon dynasty with fundamental reforms to ease the political tensions among the Confucian scholars who held high positions. However, corruption in government and social unrest prevailed in the years thereafter, causing numerous civil uprisings and revolts. The government made sweeping reforms in the late 19th century, but adhered to a strict isolationist policy, earning Joseon the nickname "Hermit Kingdom". The policy had been established primarily for protection against Western imperialism, but before long Joseon was forced to open trade, beginning an era leading into Japanese colonial rule.

Culture and society

Joseon's culture was based on the philosophy of Neo-Confucianism, which emphasizes morality, righteousness, and practical ethics. Wide interest in scholarly study resulted in the establishment of private academies and educational institutions. Many documents were written about history, geography, medicine, and Confucian principles. The arts flourished in painting, calligraphy, music, dance, and ceramics. The most notable cultural event of this era was the promulgation of the Korean alphabet, Hangul, by King Sejong the Great in 1446. This period also saw various other cultural, scientific and technological advances.

During Joseon, a social hierarchy existed that greatly affected Korea's social development. The king and the royal family were atop the hereditary system, with the next tier being a class of civil or military officials and landowners known as yangban, who worked for the government and lived off the efforts of tenant farmers and slaves. A middle class, the jungin, were technical specialists such as scribes, medical officers, technicians in science-related fields, artists and musicians. Commoners, i.e. peasants, constituted the largest class in Joseon. They had obligations to pay taxes, provide labor, and serve in the military. By paying land taxes to the state, they were allowed to cultivate land and farm. The lowest class included tenant farmers, slaves, entertainers, craftsmen, prostitutes, laborers, shamans, vagabonds, outcasts, and criminals. Although slave status was hereditary, slaves could be sold or freed at officially set prices, and the mistreatment of slaves was forbidden. This yangban-focused system started to change in the late 17th century as political, economic and social changes took hold. By the 19th century, new commercial groups had emerged, and the active social mobility caused the yangban class to expand, resulting in the weakening of the old class system. The Joseon government ordered the freeing of government slaves in 1801, and the class system of Joseon was completely abolished in 1894.

Joseon dealt with a pair of Japanese invasions from 1592 to 1598 (the Imjin War, or Seven Years' War). Prior to the war, Korea sent two ambassadors to scout for signs of Japan's intentions of invading Korea. However, they came back with two different reports, and while the politicians split into sides, few proactive measures were taken. This conflict brought prominence to Admiral Yi Sun-sin, as he contributed to eventually repelling the Japanese forces with the innovative use of his invention, the turtle ship, a massive yet swift ramming/cannon ship fitted with iron spikes and, according to some sources, an iron-plated deck. The use of the hwacha was also highly effective in repelling the Japanese invaders from the land.
Subsequently, Korea was invaded by the Manchus in 1627 and again in 1636, after which the Joseon dynasty recognized the suzerainty of the Qing Empire. Though the Koreans respected their traditional subservient position to China, there was persistent loyalty to the Ming and disdain for the Manchus.

During the 19th century, Joseon tried to control foreign influence by closing its borders to all nations but China. In 1853 the USS South America, an American gunboat, visited Busan for 10 days and had amiable contact with local officials. Several Americans shipwrecked on Korea in 1855 and 1865 were also treated well and sent to China for repatriation. The Joseon court was aware of the foreign invasions and treaties involving Qing China, as well as the First and Second Opium Wars, and followed a cautious policy of slow exchange with the West.

In 1866, reacting to growing numbers of Korean converts to Catholicism despite several waves of persecution, the Joseon court clamped down on them, massacring French Catholic missionaries and Korean converts alike. Later in the year France invaded and occupied portions of Ganghwa Island. The Korean army lost heavily, but the French abandoned the island. The General Sherman, an American-owned armed merchant marine sidewheel schooner, attempted to open Korea to trade in 1866. After an initial miscommunication, the ship sailed upriver and became stranded near Pyongyang. After being ordered to leave by the Korean officials, the American crewmen killed four Korean inhabitants, kidnapped a military officer and engaged in sporadic fighting that continued for four days. After two efforts to destroy the ship failed, she was finally set aflame by Korean fireships laden with explosives. This incident is celebrated by the DPRK as a precursor to the later USS Pueblo incident.

In response, the United States confronted Korea militarily in 1871, killing 243 Koreans on Ganghwa Island before withdrawing; this incident is known in Korea as the Sinmiyangyo. Five years later, the reclusive Korea signed a trade treaty with Japan, and in 1882 signed a treaty with the United States, ending centuries of isolationism.

Conflict between the conservative court and a reforming faction led to the Gapsin Coup in 1884. The reformers sought to reform Korea's institutionalized social inequality by proclaiming social equality and the elimination of the privileges of the yangban class. The reformers were backed by Japan, and were thwarted by the arrival of Qing troops, invited by the conservative Queen Min. The Chinese troops departed, but the leading general, Yuan Shikai, remained in Korea from 1885 to 1894 as Resident, directing Korean affairs. Korea became linked by telegraph to China in 1888, with Chinese-controlled telegraph lines. Korea established diplomatic missions with Russia (1884), Italy (1885), France (1886), the United States and Japan, although China attempted to block the exchange of embassies with Western countries, but not with Tokyo. The Qing government provided loans. China promoted its trade in an attempt to block Japanese merchants, which favoured the Chinese position in Korean trade. Anti-Chinese riots broke out in 1888 and 1889, and Chinese shops were torched. Japan remained the largest foreign community and largest trading partner. After a rapidly modernizing Japan forced Korea to open its ports in 1876, it successfully challenged the Qing Empire in the Sino-Japanese War (1894–1895).
In 1895, the Japanese were involved in the murder of Empress Myeongseong, who had sought Russian help, and the Russians were forced to retreat from Korea for the time being. As a result of the Sino-Japanese War (1894–1895), the 1895 Treaty of Shimonoseki was concluded between China and Japan. It stipulated the abolition of Korea's traditional relationships with China; China recognised the complete independence of Joseon and repudiated its own political influence over it.

Korean Empire

In 1897, Joseon was renamed the Korean Empire, and King Gojong became Emperor Gojong. The imperial government aimed to become a strong and independent nation by implementing domestic reforms, strengthening military forces, developing commerce and industry, and surveying land ownership. Organizations like the Independence Club also rallied to assert the rights of the Joseon people, but clashed with the government, which proclaimed absolute monarchy and power. Russian influence was strong in the Empire until Russia was defeated by Japan in the Russo-Japanese War (1904–1905). Korea effectively became a protectorate of Japan on 17 November 1905, the 1905 Protectorate Treaty having been promulgated without Emperor Gojong's required seal or commission.

Following the signing of the treaty, many intellectuals and scholars set up various organizations and associations, embarking on movements for independence. In 1907, Gojong was forced to abdicate after Japan learned that he had sent secret envoys to the Second Hague Convention to protest against the protectorate treaty, leading to the accession of Gojong's son, Emperor Sunjong. In 1909, the independence activist An Jung-geun assassinated Itō Hirobumi, the Resident-General of Korea, for Itō's intrusions into Korean politics. This prompted the Japanese to ban all political organisations and proceed with plans for annexation.

Japanese rule

In 1910 Japan effectively annexed Korea by the Japan–Korea Annexation Treaty, which, along with all other prior treaties between Korea and Japan, was confirmed to be null and void in 1965. While Japan asserts that the treaty was concluded legally, this argument is generally not accepted in Korea because it was not signed by the Emperor of Korea as required and violated international conventions on external pressure regarding treaties. Korea was controlled by Japan under a Governor-General of Korea until Japan's unconditional surrender to the Allied Forces on 15 August 1945, with de jure sovereignty deemed to have passed from the Joseon Dynasty to the Provisional Government of the Republic of Korea.

After the annexation, Japan set out to repress Korean traditions and culture and to develop and implement policies primarily for Japanese benefit. European-style transport and communication networks were established across the nation in order to extract resources and labor; these networks were mostly destroyed later during the Korean War. The banking system was consolidated and the Korean currency abolished. The Japanese removed the Joseon hierarchy, destroyed much of Gyeongbokgung palace and replaced it with the Government-General Building.

After Emperor Gojong died in January 1919, amid rumors of poisoning, independence rallies against Japanese rule took place nationwide on 1 March 1919 (the March 1st Movement). This movement was suppressed by force, and about 7,000 people were killed by Japanese soldiers and police.
An estimated 2 million people took part in peaceful pro-liberation rallies, although Japanese records claim participation of fewer than half a million. This movement was partly inspired by United States President Woodrow Wilson's speech of 1919, declaring support for the right of self-determination and an end to colonial rule for Europeans. Wilson made no comment on Korean independence, perhaps because a pro-Japan faction in the United States sought trade inroads into China through the Korean peninsula.

The Provisional Government of the Republic of Korea was established in Shanghai, China, in the aftermath of the March 1st Movement, and coordinated the liberation effort and resistance against Japanese control. Some of the achievements of the Provisional Government include the Battle of Chingshanli of 1920 and the ambush of the Japanese military leadership in China in 1932. The Provisional Government is considered to be the de jure government of the Korean people between 1919 and 1948, and its legitimacy is enshrined in the preamble to the constitution of the Republic of Korea.

Continued anti-Japanese uprisings, such as the nationwide uprising of students in November 1929, led to the strengthening of military rule in 1931. After the outbreaks of the Sino-Japanese War in 1937 and of World War II, Japan attempted to exterminate Korea as a nation. The practice of Korean culture itself became illegal. Worship at Japanese Shinto shrines was made compulsory. The school curriculum was radically modified to eliminate teaching in the Korean language and history. The Korean language was banned, Koreans were forced to adopt Japanese names, and newspapers were prohibited from publishing in Korean. Numerous Korean cultural artifacts were destroyed or taken to Japan. According to an investigation by the South Korean government, 75,311 cultural assets were taken from Korea.

Some Koreans left the Korean peninsula for Manchuria and Primorsky Krai. Koreans in Manchuria formed resistance groups known as Dongnipgun (Liberation Army); they would travel back and forth across the Sino-Korean border, waging guerrilla warfare against Japanese forces. Some of them came together in the 1940s as the Korean Liberation Army, which took part in allied action in China and parts of Southeast Asia. Tens of thousands of Koreans also joined the People's Liberation Army and the National Revolutionary Army.

During World War II, Koreans at home were forced to support the Japanese war effort. Tens of thousands of men were conscripted into Japan's military. Around 200,000 girls and women, some from Korea, were pressed into sexual service under the euphemism "comfort women". Former Korean "comfort women" are still protesting against the Japanese government for compensation for their suffering.

Protestant missionary efforts in Asia were nowhere more successful than in Korea. American Presbyterians and Methodists arrived in the 1880s and were well received. While Korea was under Japanese control, Christianity became in part an expression of nationalism in opposition to Japan's efforts to promote the Japanese language and the Shinto religion. In 1914, out of 16 million people, there were 86,000 Protestants and 79,000 Catholics; by 1934 the numbers were 168,000 and 147,000. Presbyterian missionaries were especially successful. Harmonizing with traditional practices became an issue: the Protestants developed a substitute for Confucian ancestral rites by merging Confucian-based and Christian death and funerary rituals.
The division of Korea

At the Cairo Conference on November 22, 1943, it was agreed that "in due course Korea shall become free and independent"; at a later meeting in Yalta in February 1945, it was agreed to establish a four-power trusteeship over Korea. On August 9, 1945, Soviet tanks entered northern Korea from Siberia, meeting little resistance. Japan surrendered to the Allied Forces on August 15, 1945. The unconditional surrender of Japan, combined with fundamental shifts in global politics and ideology, led to the division of Korea into two occupation zones, effectively starting on September 8, 1945, with the United States administering the southern half of the peninsula and the Soviet Union taking over the area north of the 38th parallel. The Provisional Government was ignored, mainly due to the American perception that it was too communist-aligned. This division was meant to be temporary and was intended to return a unified Korea to its people once the United States, United Kingdom, Soviet Union, and Republic of China could arrange a single government.

In December 1945, a conference convened in Moscow to discuss the future of Korea. A five-year trusteeship was discussed, and a joint Soviet-American commission was established. The commission met intermittently in Seoul but deadlocked over the issue of establishing a national government. In September 1947, with no solution in sight, the United States submitted the Korean question to the United Nations General Assembly. Initial hopes for a unified, independent Korea quickly evaporated as the politics of the Cold War and opposition to the trusteeship plan from anti-communists resulted in the 1948 establishment of two separate nations with diametrically opposed political, economic, and social systems. On December 12, 1948, the General Assembly of the United Nations recognised the Republic of Korea as the sole legal government of Korea. On June 25, 1950, the Korean War broke out when North Korea breached the 38th parallel line to invade the South, ending any hope of a peaceful reunification for the time being. After the war, the 1954 Geneva Conference failed to adopt a solution for a unified Korea.

Beginning with Syngman Rhee, a series of oppressive autocratic governments took power in South Korea with American support and influence. The country eventually transitioned to a market-oriented democracy in 1987, largely due to popular demand for reform, and its economy grew rapidly, becoming a developed economy by the 2000s. Due to Soviet influence, North Korea established a communist government with a hereditary succession of leadership, with ties to China and the Soviet Union. Kim Il-sung was the supreme leader until his death in 1994, after which his son, Kim Jong-il, took power. Kim Jong-il's son, Kim Jong-un, is the current leader, having taken power after his father's death in 2011. After the Soviet Union's dissolution in 1991, the North Korean economy went into steep decline, and it is currently heavily reliant on international food aid and trade with China.

See also
- List of Korea-related topics
- List of monarchs of Korea
- Military history of Korea
- National Treasure of South Korea
- Prehistory of Korea
- Timeline of Korean history
- Korean nationalist historiography
Bale, "Current Perspectives on Settlement, Subsistence, and Cultivation in Prehistoric Korea", (2002), Arctic Anthropology, 39: 1-2, pp. 95-121. - Eckert & Lee 1990, p. 9 - Connor 2002, p. 9 - Jong Chan Kim, Christopher J Bae, “Radiocarbon Dates Documenting The Neolithic-Bronze Age Transition in Korea”, (2010), Radiocarbon, 52: 2, pp. 483-492. - Sin 2005, p. 19. - Lee Ki-baik 1984, p. 14, 167 - Seth 2010, p. 17. - Hwang 2010, p. 4 - Pratt 2007, p. 63-64. - Peterson & Margulies 2009, p. 35-36. - Kim Jongseo, Jeong Inji, et al. "Goryeosa (The History of Goryeo)", 1451, Article for July 934, 17th year in the Reign of Taejo - Forced Annexation - Early Human Evolution: Homo ergaster and erectus. Anthro.palomar.edu. Retrieved on 2013-07-12. - Lee Hyun-hee 2005, pp. 8–12. - Stark 2005, p. 137. - Lee Hyun-hee 2005, pp. 23–26. - Nelson 1993, p. 110–116 - See also Jewang Ungi (1287) and Dongguk Tonggam (1485). - Hwang 2010, p. 2. - Connor 2002, p. 10. - Eckert & Lee 1990, p. 11. - Lee Ki-baik 1984, p. 14. - (Korean) Gojoseon territory at Encyclopedia of Korean Culture - Timeline of Art and History, Korea, 1000 BC-1 AD, Metropolitan Museum of Art - Yayoi Period History Summary, BookRags.com - Japanese Roots, Jared Diamond, Discover 19:6 (June 1998) - The Genetic Origins of the Japanese, Thayer Watkins - Lee Hyun-hee 2005, pp. 92–95. - Gochang, Hwasun and Ganghwa Dolmen Sites, UNESCO - Lee Hyun-hee 2005, pp. 82–85. - (Korean) Proto-Three Kingdoms period at Doosan Encyclopedia - Lee Hyun-hee 2005, pp. 109–116. - (Korean) Buyeo at Encyclopedia of Korean Culture - Lee Hyun-hee 2005, pp. 128–130. - Lee Hyun-hee 2005, pp. 130–131. - (Korean) Samhan at Doosan Encyclopedia - Lee Hyun-hee 2005, pp. 135–141. - (Korean) Goguryeo at Doosan Encyclopedia - (Korean) Buddhism in Goguryeo at Doosan Encyclopedia - Lee Hyun-hee 2005, pp. 199–202 - Lee Hyun-hee 2005, pp. 214–222. - Three Kingdoms Asian Info Organization - Lee Hyun-hee 2005, pp. 224–225. - Marshall Cavendish Corporation (2007, pp. 886–889) - Lee Hyun-hee 2005, pp. 202–206. - Korean Buddhism Basis of Japanese Buddhism, Seoul Times, 2006-06-18 - Buddhist Art of Korea & Japan, Asia Society Museum - Kanji, JapanGuide.com - "Pottery - MSN Encarta". Archived from the original on 2009-10-31. "The pottery of the Yayoi culture (300? BC-AD 250?), made by a Mongol people who came from Korea to Kyūshū, has been found throughout Japan. " - History of Japan, JapanVisitor.com - Archived 2009-10-31. - Baekje history, Baekje History & Culture Hall - Kenneth B. Lee (1997, pp. 48–49) - Sarah M. Nelson, (1993, pp. 243–258) - Lee Hyun-hee 2005, pp. 222–225. - Lee Hyun-hee 2005, pp. 159–162. - Lee Hyun-hee 2005, pp. 241–242. - Seokguram Grotto and Bulguksa Temple, UNESCO - Lee Hyun-hee 2005, pp. 266–269. - (Korean) Dae Joyeong at Doosan Encyclopedia - Lee Hyun-hee 2005, pp. 244–248 - (Korean) Later Three Kingdoms at Encyclopedia of Korean Culture - Koreans in History/People/Program/KBS World Radio. World.kbs.co.kr. Retrieved on 2013-07-12. - Goryeo Dynasty, Korean History information! - Lee Hyun-hee 2005, p. 266. - Association of Korean History Teachers 2005a, pp. 120–121. - (Korean) Korea at Doosan Encyclopedia - Lee Hyun-hee 2005, pp. 360–361. - Association of Korean History Teachers 2005a, pp. 122–123. - Lee Hyun-hee 2005, pp. 309–312. - Lee Hyun-hee 2005, pp. 343–350. - Association of Korean History Teachers 2005a, pp. 142–145. - Lee Hyun-hee 2005, pp. 351–353. - Association of Korean History Teachers 2005a, pp. 152–155. - Lee Hyun-hee 2005, pp. 369–370. 
- Literally "old Joseon", the term was first coined in the 13th century AD to differentiate the ancient kingdom from Wiman Joseon and is now used to differentiate it from the Joseon Dynasty. - Association of Korean History Teachers 2005a, pp. 160–163. - Lee Hyun-hee 2005, pp. 371–375. - Association of Korean History Teachers 2005a, pp. 190–195. - Lee Hyun-hee 2005, pp. 413–416. - Association of Korean History Teachers 2005a, pp. 421–424. - Lee Hyun-hee 2005, pp. 421–424. - Lee Hyun-hee 2005, pp. 469–470. - Lee Hyun-hee 2005, pp. 391–401. - Hangul, The National Institute of the Korean Language - Association of Korean History Teachers 2005a, pp. 168–173. - Lee Hyun-hee 2005, pp. 387–389. - Lee Hyun-hee 2005, pp. 435–437. - Hawley 2005, p. 195f. - Turnbull 2002, p. 244. - Roh, Young-koo: "Yi Sun-shin, an Admiral Who Became a Myth", The Review of Korean Studies, Vol. 7, No. 3 (2004), p.13 - Seth 2010, p. 225. - Schmid 2002, p. 72. - Association of Korean History Teachers 2005b, p. 43. - Association of Korean History Teachers 2005b, pp. 51-55. - Association of Korean History Teachers 2005b, pp. 58-61. - Lee Ki-baik 1984, pp. 309–317. - Hoare & Pares 1988, pp. 50–67 - An Jung-geun, Korea.net - Kawasaki, Yutaka (July 1996). "Was the 1910 Annexation Treaty Between Korea and Japan Concluded Legally?". Murdoch University Journal of Law 3 (2). Retrieved 2007-06-08. - Japan's Annexation of Korea 'Unjust and Invalid', Chosun Ilbo, 2010-05-11. Retrieved 2010-07-05. - (Korean) After the reconstruction Gyeongbok Palace of 1865–1867 at Doosan Encyclopedia - March 1st Movement - Lee Ki-baik, pp. 340–344 - Constitution of the Republic of Korea: Preamble, The National Assembly of the Republic of Korea. (In English) - Miyata 1992. - Kay Itoi; B. J. Lee (2007-10-17). "Korea: A Tussle over Treasures — Who rightfully owns Korean artifacts looted by Japan?". Newsweek. Retrieved 2008-06-06. - Lost treasures make trip home, Korea Times, 2008-12-28. - Yamawaki 1994. - Japan court rules against 'comfort women', CNN, 2001-03-29. - Congress backs off of wartime Japan rebuke, The Boston Globe, 2006-10-15. - Danielle Kane, and Jung Mee Park, "The Puzzle of Korean Christianity: Geopolitical Networks and Religious Conversion in Early Twentieth-Century East Asia," American Journal of Sociology (2009) 115#2 pp 365-404 - Kenneth Scott Latourette, A history of the expansion of Christianity: Volume VII: Advance through Storm: A.D. 1914 and after, with concluding generalizations (1945) 7:401-7 - Lee Hyun-hee 2005, p. 581. - Cairo Conference is held, Timelines; Cairo Conference, BBC - Yalta Conference - Robinson 2007, pp. 107–108. - Moscow conference - Resolution 195, UN Third General Assembly Books are sorted by author FAMILY name, called last-name in English... and the first coming part of a Korean name. - Association of Korean History Teachers (2005a). Korea through the Ages, Vol 1 Ancient. Editor's foreword by Lee Gil-sang. Seoul: Academy of Korean Studies. ISBN 9788-9710-5545-8. - Association of Korean History Teachers (2005b). Korea through the Ages, Vol. 2 Modern. Editor's foreword by Lee Gil-sang. Seoul: Academy of Korean Studies. ISBN 9788-9710-5546-5. - Connor, Mary E. (2002). The Koreas, A global studies handbook. ABC-CLIO. p. 307. ISBN 9781-5760-7277-6. - Eckert, Carter J.; Lee, Ki-Baik (1990). Korea, old and new: a history. Korea Institute Series. Published for the Korea Institute, Harvard University by Ilchokak. p. 454. ISBN 9780-9627-7130-9. - Hoare, James; Pares, Susan (1988). Korea: an introduction. 
New York: Routledge. ISBN 9780-7103-0299-1.
- Hwang, Kyung-moon (2010). A History of Korea, An Episodic Narrative. Palgrave Macmillan. p. 328. ISBN 9780230364530.
- Lee Ki-baik (1984). A new history of Korea. Cambridge: Harvard University Press. ISBN 9780-6746-1576-2.
- Lee, Kenneth B. (1997). Korea and East Asia: the story of a Phoenix. Santa Barbara: Greenwood Publishing Group. ISBN 9780-2759-5823-7.
- Lee, Hyun-hee; Park, Sung-soo; Yoon, Nae-hyun (2005). New History of Korea. Translated by Academy of Korean Studies. Paju: Jimoondang. ISBN 9788-9880-9585-0.
- Lee, Hong-yung; Ha, Yong-Chool; Sorensen, Clark W., eds. (2013). Colonial Rule and Social Change in Korea, 1910–1945. University of Washington Press. p. 379. ISBN 9780-2959-9216-7.
- Nahm, Andrew C.; Hoare, James (2004). Historical dictionary of the Republic of Korea. Lanham: Scarecrow Press. ISBN 9780-8108-4949-5.
- Nelson, Sarah M. (1993). The archaeology of Korea. Cambridge: Cambridge University Press. p. 1013. ISBN 9780-5214-0783-0.
- Pratt, Keith (2007). Everlasting Flower: A History of Korea. Reaktion Books. p. 320. ISBN 9781861893352.
- Robinson, Michael Edson (2007). Korea's twentieth-century odyssey. Honolulu: University of Hawaii Press. ISBN 9780-8248-3174-5.
- Schmid, Andre (2002). Korea Between Empires, 1895–1919. New York: Columbia University Press. ISBN 9780-2311-2538-3.
- Seth, Michael J. (2006). A Concise History of Korea. Lanham: Rowman & Littlefield. ISBN 9780-7425-4005-7.
- Seth, Michael J. (2010). A History of Korea: From Antiquity to the Present. Lanham: Rowman & Littlefield. p. 520. ISBN 9780-7425-6716-0.
- Sin, Hyong-sik (2005). A Brief History of Korea. The Spirit of Korean Cultural Roots 1 (2nd ed.). Seoul: Ewha Woman's University Press. ISBN 9788-9730-0619-9.
- Em, Henry H. (2013). The Great Enterprise: Sovereignty and Historiography in Modern Korea. Duke University Press. p. 272. ISBN 9780-8223-5372-0. Examines how Korean national ambitions have shaped the work of the country's historians.
- Yuh, Leighanne (2010). "The Historiography of Korea in the United States". International Journal of Korean History 15#2: 127–144.
Other books used on this page
- Cwiertka, Katarzyna J. (2012). Cuisine, Colonialism, and Cold War: Food in Twentieth-Century Korea. Reaktion Books and University of Chicago Press. p. 237. ISBN 9781-7802-3025-2. Scholarly study of how food reflects Korea's history.
- Hawley, Samuel (2005). The Imjin War. Japan's Sixteenth-Century Invasion of Korea and Attempt to Conquer China. The Royal Asiatic Society, Korea Branch, Seoul. ISBN 89-954424-2-5.
- Kim, Byung-Kook; Vogel, Ezra F. (2011). The Park Chung Hee Era: The Transformation of South Korea. Harvard University Press. p. 744. ISBN 9780-6740-5820-0. Studies of modernization under Park, 1961–1979.
- Peterson, Mark; Margulies, Phillip (2009). A Brief History of Korea. Infobase Publishing. p. 328. ISBN 9781-4381-2738-5.
- Byeon Tae-seop (변태섭) (1999). 韓國史通論 (Hanguksa tongnon) (Outline of Korean history), 4th ed. Seoul: Samyeongsa. ISBN 89-445-9101-6. (Korean)
- Yamawaki, Keizo (1994). Japan and Foreign Laborers: Chinese and Korean Laborers in the late 1890s and early 1920s (近代日本と外国人労働者―1890年代後半と1920年代前半における中国人・朝鮮人労働者問題). Tokyo: Akashi-shoten (明石書店).
ISBN 4-7503-0568-5. (Japanese)
- Korean History online, Korean History Information Center
- Timeline of Korean Dynasties
- Kyujanggak Archive, PDF files of Korean classics in their original written classical Chinese
- Korean History: Bibliography, Center for Korean Studies, University of Hawaii at Manoa
- History of Korea, KBS World
- History of Corea, Ancient and Modern; with Description of Manners and Customs, Language and Geography by John Ross, 1891
Fall foliage watchers can rejoice in the news that climate change may make the incredible autumn colors found in the Northeast and Midwest last a bit longer – but it spells bad news for the planet. New research from Princeton University shows that global warming will cause tree leaves to respond in wildly unpredictable ways come autumn. According to Modern Farmer, leaves will start changing color later in the year and will keep their bright colors longer. The Princeton team developed their conclusions by looking at 20 varieties of trees, ranging from those that need full sun to those that tolerate partial or full shade, using data from the USA National Phenology Network and their own direct observations in the Harvard Forest of Petersham, MA. Their findings show that by the end of the 21st century, the Massachusetts foliage season will most likely happen in November, instead of October as it does currently. A study of trees in Alaska, meanwhile, indicated that foliage there wouldn't be as greatly affected as that of the New England forests. Trees change colors in the fall because the shorter daylight hours and cooler temperatures cause them to shed their leaves and go into their winter dormancy period. That they're changing later and lasting longer spells bad news for the planet because it's a sign that the seasons are shifting, and many other species will likely follow suit: crops may go to seed later, and animals will wait longer before starting their winter food harvest. Via Modern Farmer
Presentation on theme: "Trans-Atlantic Slave Trade US CIVIL WAR OF ALL THE CONTRADICTIONS IN AMERICA'S HISTORY, NONE SURPASSES ITS TOLERATION FIRST OF SLAVERY AND THEN OF SEGREGATION." — Presentation transcript:
Trans-Atlantic Slave Trade US CIVIL WAR OF ALL THE CONTRADICTIONS IN AMERICA'S HISTORY, NONE SURPASSES ITS TOLERATION FIRST OF SLAVERY AND THEN OF SEGREGATION. – STEPHEN AMBROSE
Slave Regions in Africa The slave trade moved people along 3,000 miles of Africa's west coast to the New World. Many slaves were brought from inland areas of Africa.
Slave Coffle Definition: a group of animals, prisoners, or slaves chained together in a line.
Middle Passage 1600s–1850s. Approx. 60 forts built along the west coast of Africa. Slaves walked in caravans to the forts, some 1,000 miles away. Selected by the Europeans and branded. One half survived the death march. Placed in underground dungeons until they were boarded on ships. Cape Coast Castle, Ghana.
Middle Passage Statistics 10–16 million Africans forcibly transported across the Atlantic from 1500–1900. 2 million died during the Middle Passage (10–15%). Another 15–30% died during the march to the coast. For every 100 slaves that reached the New World, another 40 died in Africa or during the Middle Passage.
Middle Passage Conditions on Board the Ship Slaves chained together and crammed into spaces sometimes less than five feet high. Slavers packed three or four hundred Africans into the ship cargo holds. Little ventilation, human waste, horrific odors. Unclean.
Conditions on Board the Ship Tight packing – belly to back, chained in twos, wrist to ankle (660+), naked. Loose packing – shoulder to shoulder, chained wrist to wrist or ankle to ankle. Men and women separated (men placed toward the bow, women toward the stern). Fed once or twice a day and brought on deck for limited times.
Middle Passage The journey lasted 6–8 weeks. Due to the high mortality rate, cargo was insured (reimbursed for drowning accidents but not for deaths from disease or sickness). It was common to dump cargo overboard during sickness or food shortages. Slave mutinies on board ships were common (1 out of every 10 voyages across the Atlantic experienced a revolt). Covert resistance (attempted suicide, jumping overboard, refusal to eat).
Growth of African American Population
1820: 1.77 million (13% free)
1830: 2.33 million (14% free)
1840: 2.87 million (13% free)
1850: 3.69 million (12% free)
1860: 4.44 million (11% free)
Growth of African American Population Early 18th century: 36,000 per year. During the 1780s: 80,000 per year. Between 1740 and 1810: 60,000 captives per year on average. 17th century: slaves sold in the Americas for about $150 each. The slave trade was made illegal in Britain in 1807, the US in 1808, France in 1831, and Spain in 1834. Once declared illegal, prices went much higher. In the 1850s a prime field hand cost $1,200–$1,500 (about $18,000 in 1997 dollars).
Silas R. Beane, Zohreh Davoudi and Martin J. Savage found the notion of the universe as a computer simulation to be fascinating. They began to think about how it might be possible to determine if our own universe is a numerical simulation. It all begins with lattice gauge theory and quantum chromodynamics (QCD). We know of four fundamental forces in our universe: the strong nuclear force, electromagnetism, the weak nuclear force and gravity. Lattice gauge theory and QCD focus on the strong nuclear force, which is the force that holds subatomic particles together. It's the strongest of the four fundamental forces but also has the shortest range. Quantum chromodynamics is a theory that explains the fundamental nature of the strong force in four space-time dimensions. Using high-performance computing (HPC), it's possible for researchers to simulate an incredibly small universe in an effort to study QCD. It's on the femto scale, which is even smaller than the nano scale. A nanometer is one-billionth of a meter; a femtometer is one-quadrillionth, or 10⁻¹⁵ meters. Within this simulation, researchers use a lattice structure to represent the space-time continuum. If we were to somehow shrink down small enough to be inside this universe, we might be able to detect that it's a construct by observing how certain energies interact with the lattice. In our universe, that energy could be cosmic rays. If scientists could observe cosmic rays behaving as if there is a lattice around our own universe, it would suggest that we are actually inside a computer simulation that uses the same techniques as lattice gauge theory. We would have to develop technology sufficiently sophisticated and powerful enough to detect these cosmic rays and measure their behaviors to notice a lattice structure. This approach also assumes a few other constraints:
- The entities that designed the simulation followed a practice similar to what researchers are doing with QCD experiments.
- The entities had limited resources with which to work, meaning our universe would also be finite.
- The universe's designers are not actively preventing us from discovering that we're in a simulation.
If your mind isn't spinning already, let's move on to think about what living within a computer simulation would actually mean.
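Before moving on, the energy scale at which such a lattice would betray itself can be sketched in a few lines. On a lattice of spacing b, no particle can carry momentum beyond roughly π/b, so the most energetic cosmic rays we detect put an upper limit on how coarse any underlying lattice could be. The following is a back-of-the-envelope illustration of that reasoning, not code from the study; it uses the standard value of ħc and the approximate GZK cutoff energy for cosmic rays:

```python
import math

HBAR_C_MEV_FM = 197.327  # hbar * c in MeV * fm (standard value)

def max_energy_mev(lattice_spacing_fm: float) -> float:
    """Naive maximum energy representable on a lattice of spacing b:
    E_max ~ pi * hbar * c / b (hard momentum cutoff at the lattice scale)."""
    return math.pi * HBAR_C_MEV_FM / lattice_spacing_fm

def max_lattice_spacing_fm(cutoff_energy_mev: float) -> float:
    """Invert the bound: the coarsest spacing consistent with observing
    particles of the given energy."""
    return math.pi * HBAR_C_MEV_FM / cutoff_energy_mev

# The highest-energy cosmic rays are observed near the GZK cutoff, ~5e19 eV.
gzk_mev = 5e19 / 1e6  # convert eV to MeV
b = max_lattice_spacing_fm(gzk_mev)
print(f"Any lattice spacing must be below ~{b:.2e} fm")  # ~1.24e-11 fm
# For comparison, today's QCD simulations use spacings of roughly 0.1 fm.
```

In other words, a simulated universe could still hide a lattice more than ten orders of magnitude finer than anything our own QCD simulations use, which is why the signature would have to be hunted in the very highest-energy cosmic rays.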
From its founding in 1886 through the mid-20th century, the American Federation of Labor (AFL) supported restrictions on immigration from Southern and Eastern Europe, Asia, and Mexico. This policy was fueled by nativism, a desire to limit competition for jobs, and a fear of communist influence from overseas. Despite legal restrictions, millions of immigrants obtained jobs in the United States and formed new industrial unions. One such union was the International Ladies Garment Workers Union, founded in 1900 by Eastern European Jewish immigrants; it was a major force within the labor movement throughout the 20th century. When the Congress of Industrial Organizations (CIO) was founded in the 1930s, it welcomed new members without regard to race, color, creed, or nationality, recognizing that many of its members were immigrants or came from immigrant families. After the merger of the AFL and CIO in 1955, a compromise was made to support legal immigration while tightening restrictions on undocumented workers. Immigrants from Mexico and the Philippines pushed back against this decision and demonstrated the power of organizing all workers, regardless of legal status, by founding the United Farm Workers union. By the end of the 20th century, the labor movement supported a path to citizenship and opposed the deportation of undocumented workers. Today, all immigrants are welcome in the ranks of labor, validating their long struggle for equality and unity. This Italian immigrant mother and child represented a new generation of Southern and Eastern European immigrants who played a critical role in building the 20th-century labor movement. Ellis Island, New York, 1905, Lewis Hine. AFL-CIO Still Images, Photographic Prints Collection. In 1965, farm workers of Filipino and Mexican heritage united to organize a strike for higher wages and better working conditions against the grape growers in the Delano region of California. The strike led to the founding of the United Farm Workers (UFW), which successfully appealed to students, religious leaders, civil rights activists, and urban union members. The UFW backed the grape boycott, turning support for immigrant workers into a national cause. After a five-year strike, the UFW signed contracts with the growers that substantially improved the lives of farm workers. In the succeeding years, the UFW has organized farm workers around the country and championed the rights of immigrants, including undocumented workers. Farm worker leader Cesar Chavez addressing a rally of supporters of the grape boycott during a protest march from Columbia, Maryland, to Washington, DC, 1970. AFL-CIO Still Images, Photographic Prints Collection.
Melt inclusions are small droplets of silicate melt that are trapped in minerals during their growth in a magma. Once formed, they commonly retain much of their initial composition (with some exceptions) unless they are re-opened at some later stage. Melt inclusions thus offer several key advantages over whole-rock samples: (i) they record pristine concentrations of volatiles and metals that are usually lost during magma solidification and degassing; (ii) they are snapshots in time, whereas whole rocks are time-integrated end products, thus allowing a more detailed, time-resolved view into magmatic processes; and (iii) they are largely unaffected by subsolidus alteration. Due to these characteristics, melt inclusions are an ideal tool to study the evolution of mineralized magma systems. This chapter first discusses general aspects of melt inclusion formation and methods for their investigation, before reviewing studies performed on mineralized magma systems.
Grade Range: 9-12 Resource Type(s): Lessons & Activities Duration: 90 Minutes Date Posted: 9/21/2010 In this lesson, students will examine primary sources to understand John Brown’s actions in Harpers Ferry and will develop a creative project on his legacy. This resource was produced to accompany the exhibition The Price of Freedom: Americans at War, by the Smithsonian’s National Museum of American History. Historical Thinking Standards (Grades 5-12) United States History Standards (Grades 5-12)
A bird that lives in the forests of northeastern India and sings a very melodic tune is a whole new species, scientists report. The discovery process began in 2009, when researchers realized that what was considered a single species, the plain-backed thrush Zoothera mollissima, was in fact two different species in northeastern India, says Pamela Rasmussen of the integrative biology department at Michigan State University. The new bird, described in the current issue of the journal Avian Research, is named the Himalayan forest thrush, Zoothera salimalii. The scientific name honors the Indian ornithologist Sálim Ali. What first caught scientists' attention was that the plain-backed thrush in the coniferous and mixed forest had a rather musical song, while birds found in the same area, but on bare rocky ground above the treeline, had a much harsher, scratchier, unmusical song. "It was an exciting moment when the penny dropped, and we realized that the two different song types from plain-backed thrushes that we first heard in northeast India in 2009, and which were associated with different habitats at different elevations, were given by two different species," says lead author Per Alström of Uppsala University in Sweden. To make the discovery, scientists had to do field observations and a bit of sleuthing with museum specimens. Investigations involving collections in several countries revealed consistent differences in plumage and structure between birds that could be assigned to either of these two species. It was confirmed that the species breeding in the forests of the eastern Himalayas had no name. "At first we had no idea how or whether they differed morphologically. We were stunned to find that specimens in museums for over 150 years from the same parts of the Himalayas could readily be divided into two groups based on measurements and plumage," Rasmussen says. Further analyses of plumage, structure, song, DNA, and ecology from throughout the range of the plain-backed thrush revealed that a third species was present in central China. This species was already known but had been treated as a subspecies of the plain-backed thrush; the scientists called it the Sichuan forest thrush. The song of the Sichuan forest thrush was found to be even more musical than the song of the Himalayan forest thrush. DNA analyses suggested that these three species have been genetically separated for several million years. Genetic data also yielded an additional exciting find: three museum specimens indicated the presence of yet another unnamed species in China, the Yunnan thrush, but future studies are required to confirm this. New bird species are rarely discovered nowadays. In the last 15 years, on average approximately five new species have been discovered annually, mainly in South America. The Himalayan forest thrush is only the fourth new bird species discovered in India since 1949. Source: Michigan State University
Systemic lupus erythematosus (lupus) is a chronic disease that causes inflammation in the connective tissues, which provide strength and flexibility to structures throughout the body. The signs and symptoms of lupus vary among affected individuals and can involve many organs and systems, including the skin, joints, kidneys, lungs, and central nervous system. Lupus is one of a large group of conditions called autoimmune disorders, which occur when the immune system attacks the body's own tissues and organs. It predominantly affects women of child-bearing age, and there are population differences in both disease prevalence and severity. Association and family studies have shown that genetic factors play key roles in the disease. Most of the genes associated with lupus are involved in immune system function. Sex hormones and a variety of environmental factors, including viral infections, diet, stress, chemical exposures, and sunlight, are also thought to play a role in triggering this complex disorder. About 10 percent of lupus cases are thought to be triggered by drug exposure, and more than 80 drugs that may be involved have been identified.
The behaviors of all dynamic systems are dependent upon their initial conditions. In classical mechanics, the initial conditions of systems are usually known. A very simple example, a small ball dropped onto the edge of a razor blade as shown in the illustration below, demonstrates how important initial conditions can be to a dynamic system:

[Figure: Ball striking razor blade]

The ball can strike the blade in such a way that it goes off to the left (center of figure) or to the right (right of figure). The initial condition that determines whether the ball goes to the left or right is minute. If the ball were initially held centered over the blade (left of figure), a prediction of which direction the ball will bounce would be impossible to make with certainty. Dynamic systems that are highly dependent on their initial conditions are the main subjects of investigation in modern chaos theory. Kellert (1993) points out that "a dynamical system that exhibits sensitive dependence on initial conditions will produce markedly different solutions for two specifications of initial states that are initially very close together." The ball falling on a razor blade is a good example of such a dynamic system because a very slight change in the initial conditions of the ball can result in its falling to the right or left of the blade. According to Rosen (1991), the natural evolution of quasi-isolated systems should be analyzed by considering the evolution process as a sequence of states in time. A state is the condition of the system at any time, and this can be either discrete or continuous. At any time, we can consider the system's state as the initial conditions for whatever processes follow. The initial conditions of a complex system can therefore be found by making observations, at selected times, of the system's state space. Theoretically, this can even be done for the universe at large. Ruelle (1991) says: "Newtonian mechanics gives a completely deterministic picture of the world: if we know the state of the universe at some initial time, we should be able to determine its state at any other time." This property is called determinism, and it holds true for all dynamic systems. However, the initial conditions of many complex systems cannot be accurately determined. When systems exhibit sensitive dependence on initial conditions, they are no longer predictable, and determinism no longer holds. One complex system that is often used as a typical example is the weather. Ruelle (1991) says: "It is conceivable that the presence of Venus, or any other planet, modifies the evolution of the weather, with consequences that we cannot disregard. The evidence is that whether we have rain or not this afternoon depends upon, among many other things, the gravitational influence of Venus a few weeks ago!" The state or condition of a complex system, over time, depends on its initial conditions. This phenomenon has been labeled the Butterfly Effect because it suggests that a butterfly that beats its wings in Peking today can transform a storm system next month in New York. This is now known to have some validity, especially with weather prediction. In 1961, Edward Lorenz discovered that his computer gave him a different answer when he started at the beginning of his calculations than when he took a "short-cut" and started near the midpoint. Intuitively it should not have mattered, because the differences were so very small they should have been negligible. But the final result, he discovered, was highly dependent on the starting conditions. In one computer run, he started with the number .506127. The short-cut run began with the number .506, a rounded-off number. The rounding off made all the difference. The calculations had to do with the weather, and the rounding-off error should not have made the difference of a small puff of wind, yet the results of the two calculations were totally different. One of the practical conclusions from his discovery is that long-range weather forecasting is doomed to failure. This is not because we can't measure well enough; rather, like the uncertainty principle of quantum mechanics, there are distinct limits to how far we can predict future events with certainty, even in our everyday macroscopic world. For every event that occurs, small uncertainties multiply over time, cascading upward into unpredictability (Briggs & Peat, 1989; Cohen & Stewart, 1994; Gleick, 1987).
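Lorenz's rounding experiment is easy to re-create in miniature. The sketch below is an illustration, not Lorenz's original program: it integrates his well-known 1963 convection equations (with the standard parameters) from the two starting values he describes, .506127 and the rounded .506, and prints how quickly the runs part company:

```python
# Toy re-creation of Lorenz's rounding experiment using the Lorenz-63 system.

def lorenz_step(x, y, z, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz-63 equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def trajectory(x0, steps=40000, dt=0.001):
    """Integrate from the given x, with fixed y and z, recording x over time."""
    x, y, z = x0, 1.0, 1.05
    out = [x]
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z, dt)
        out.append(x)
    return out

a = trajectory(0.506127)  # "full precision" run
b = trajectory(0.506)     # rounded run: differs by about 1 part in 4000

for t in range(0, 40001, 8000):
    print(f"t = {t * 0.001:5.1f}  run A: {a[t]:+8.3f}  run B: {b[t]:+8.3f}  "
          f"difference: {abs(a[t] - b[t]):.5f}")
# The first rows agree to several decimal places; by t around 15-20 the two
# runs bear no resemblance to each other -- exactly Lorenz's observation.
```

The tiny initial gap grows roughly exponentially, which is why a rounding error the size of "a small puff of wind" eventually dominates the forecast.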
Every human being is a complex system, both physically and mentally. Our birth, and our early development as children, largely determine how we find ourselves as adults. This is because we do not enter life as a "blank slate"; we enter life with pre-established desires and traits (Darley, Glucksberg & Kinchla, 1981). The early part of our lives can affect us in later life. Psychoanalysis argues that we must remember our early childhood if we are to find maturity in our adult life. Jung (1989) notes "the enormous influence which childhood has on the later development of character" (p. 136). He also points out that "most neuroses are misdevelopments that have been built up over many years" (1985, p. 24). We must come to grips with our childhood. The Butterfly Effect strongly suggests the importance of remembering our past and assimilating all of our childhood experiences in order to see clearly why we behave as we do today. Jung (1978) taught that the ego rises up from the psyche shortly after birth from friction between the body and the external environment. Jung (1954) wrote that "the child's psyche, prior to the stage of ego-consciousness, is very far from being empty and devoid of content" (p. 44). How humans develop and learn depends upon the interplay between genetic (nature) and environmental (nurture) factors (Rubiner, 1997). Our brain is not a tabula rasa on which anything can be imprinted. The central nervous system has tendencies that are reflected in a gravitation toward particular behaviors, partly expressed in our rituals, mythologies, religions, and social structures. Superimposed on this biological backdrop is an equally inherited ability to reason. Reason appears to be possible because built-in feedback loops create a hierarchical progression with the capacity to always look back at previous levels of integration (Showbris, 1994, p. 386). Once the ego has established itself as an individual identity, it goes on developing by virtue of continuous friction with the outer world as well as internal friction due to the need for assimilation of experiences. Furthermore, the ego's "stability is relative, because far-reaching changes of personality can sometimes occur" (Jung, 1978, p. 6). Mental instability is as much the cause of growth as it is of illness or pathological behavior.
Jung (1989) recognized three primary phases of life: (1) the first few years of life, called the presexual stage; (2) the later years of childhood up to puberty, called the prepubertal stage; and (3) the adult period from puberty on, called the period of maturity. He also taught that the ego develops from the Self (the central archetype of the psyche) within the psyche during the first half of life, and then returns to the Self by assimilating it during the second half of life in what he calls the individuation process (Edinger, 1974). Jacobi (1973) says that: "Unless it is inhibited, obstructed, or distorted by some specific disturbance, [the individuation process] is a process of maturation or unfolding, the psychic parallel to the physical process of growth and aging" (p. 107). How far the Self of any person matures by means of the individuation process during the second half of life largely depends on how well the ego develops during the first half of life. Thus, according to Jung, the state of the psyche of any individual is highly dependent on its initial conditions.
THE ABOVE FROM: THE CHAOS OF JUNG'S PSYCHE, an on-line book by Gerald J. Schueler, Ph.D., and Betty J. Schueler, Ph.D., which for whatever reason can no longer be found in an internet search.
THE INFORMANT AND CARLOS CASTANEDA
Information on substances
Linear alkyl benzene sulfonates (2003)
Physical properties are difficult to retrieve. Generally, water solubility decreases with increasing alkyl chain length, and it is also dependent on the positive ion of the salt. The most common LAS is sodium dodecylbenzene sulfonate, often called sodium lauryl benzenesulfonate. At room temperature this is a white to light yellow solid substance. LAS is an anionic surfactant, i.e. the surface-active part is a negatively charged ion in water. The linear alkyl chain makes the molecule more biodegradable than alkyl benzenesulfonates with branched carbon chains. The length of the alkyl chain is not always closely specified for the substances used as surfactants, as it depends on the alkyl chain raw material. When the alkyl chain contains hydrocarbons with a chain length of 10 to 13 carbon atoms, the alkyl is often called "dodecyl" after the most common chain length of 12 carbons. If the chain is 12 to 15 carbon atoms long it is called "tridecyl". LAS is manufactured by addition of α-olefins (the alkyl chain-to-be) of the required length to benzene. These, in turn, are produced by joining four or five ethene and/or propene units, by extracting normal paraffins from kerosene with a molecular sieve, or by cracking petroleum wax. After the alkyl benzenes have been formed, they are sulfonated with sulphuric acid to alkylbenzene sulfonic acids. The acid is neutralized with e.g. sodium hydroxide for use in water-based systems, or with calcium hydroxide for oil-based products. Neutralizing with an ammonium ion, e.g. triethanolamine, generates surfactants with emulsifying properties in both water- and oil-based systems. LAS is easy to spray-dry to powder, which can be a problem during surfactant production and then necessitates handling in solution. Sulfonation is an inexpensive process because of the abundant supply of reactive sulphur oxide groups, e.g. sulphuric acid, which results in sulfonates being the most produced surfactants. About 80% of all LAS is dodecylbenzene sulfonic acid and its salts. In 1992, production in the US was about 346,000 tons of different linear alkylbenzene sulfonates. Global demand for linear alkylbenzene was 2.5 million tons in 2001. LAS has a good ability to remove particles and keep them in dispersion. This is utilized in textile detergents to remove inorganic dirt such as earth. The alkyl chain of the molecule adheres to the solid surface, which usually has a faint negative charge, while the likewise negatively charged sulfonic group reaches as far as possible into the water phase and thereby keeps the particle in dispersion. LAS is stable against oxidation, making it suitable for use in mixtures containing oxidants such as bleaching agents. It does not work well in hard water, where water-insoluble calcium soaps are precipitated. This is compensated for by using it in mixtures together with softening substances and other surfactants. In oil-based systems, the calcium salts of LAS keep wear and soot particles in dispersion, and they are therefore used in motor oil to prevent deposits in the motor. Use of LAS in detergents in Sweden has decreased from about 10,000–15,000 tons during the eighties to less than 100 tons in 2001, because of substitution by anionic surfactants with less environmental impact, which do not contain an aromatic part in the molecule. Other countries still use large amounts of LAS in detergents. In Sweden the main use of LAS is in all-round cleaners and wash-up preparations, there in combination with other surfactants.
Another area of use is in oils for motors, metalworking, and transmissions. Industry also uses LAS as a surface-active process aid.
Almost all musical compositions can be structurally defined as a progression of harmonic chords. A chord is the combined sound of two or more musical notes. Several centuries of musical theorists have developed a good understanding of why and how chords change from one to the next. In a given composition, chord substitution is the technique of not playing the expected next chord, and instead playing a different one that still adheres to the principles of harmony. A good substitution is always, in some regard, derived from the original chord meant to be played. The course of music is established by its "key," and starts from the harmonic backbone of a chord based on the first tone of the key. This is called the tonic chord. In the key of C major, the tonic chord consists of the three notes C, E and G. Although it is a generalization, music's path is from this tonic chord to its dominant chord, based on the fifth tone. In the key of C, the dominant chord is G, B and D. After attaining the musical climax of the dominant chord, music returns to the tonic chord. The creative, round-about harmonic steps music takes to go from its tonic chord to its dominant, and to a lesser degree back to the tonic, are the composition's chord progression. The traditional notation music theorists use to express these chords is Roman numerals — I for the tonic, V for the dominant, and everything in between through VII. A 12-bar blues song might be transcribed: I-I-I-I / IV-IV-I-I / V-V-I-I. Any of these chords can be substituted with another. If this is done while retaining the harmonic connection between the preceding and succeeding chords, the song's essential structure will remain. In the blues example, chord substitution in the first tonic bars with the harmonic subdominant chord based on the key's fourth tone — I-IV-I-IV — will not markedly change the song, but will give it a more complex sound. Categorically, a chord substitution falls into several different types. Another note can be added. The addition of the seventh tone, for example — C, E, G and B for the I7 or C-major-seventh chord — gives the original chord a tense, anticipatory sound. Notes can also be subtracted from the original. The simplest chord substitution might be a default change to the tonic chord. Chord substitutions are practiced by amateur and proficient musicians alike. Beginner students of an instrument may be given familiar music whose original score of chords has been substituted with simpler ones more appropriate to the student's skill level. At a high level of instrumental skill, however, such as that of an improvisational jazz pianist, the technique of chord substitution is an extremely difficult one. The basic principle underlying the technique is the harmonic mapping of each note in a new chord within the established progression. One of the more common substitutions, called a secondary dominant, is to treat any given chord as if it were the tonic and then to play its equivalent harmonic dominant instead. Another substitution is to play the chord in its relative minor key, usually with the addition of the key's sixth tone. The I chord in C major can instead be played as C-E-G-A for the melancholy sound of vi7, or A-minor-seventh. There are other, even more difficult options for chord substitution. A new chord, usually slightly discordant to the ear, can be inserted as an intermediate step or bridge between two perfectly good harmonic chords in a progression.
Similarly, discord can be introduced by adding a second tone to the chord. Popularly called a "mu chord," its difficulty comes from the necessity of resolving the dissonant sound with the next chord in the progression. Very skilled musicians, such as the improvisational jazz saxophonist John Coltrane, can substitute not just one chord but several successive chords.
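For readers who like to see the bookkeeping, here is a small sketch that builds diatonic triads as pitch classes and performs the relative-minor substitution described above. The representation (semitones, C = 0) and the function names are an illustration of the idea rather than any standard music library:

```python
# Build diatonic triads in a major key as pitch classes (C=0 ... B=11),
# then substitute the tonic with its relative minor seventh (vi7).

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone pattern of a major scale

def triad(key_root: int, degree: int) -> list[int]:
    """Stack two diatonic thirds on the given degree (1..7) of a major key."""
    idx = degree - 1
    return [(key_root + MAJOR_SCALE[(idx + step) % 7]) % 12 for step in (0, 2, 4)]

def name(chord: list[int]) -> str:
    return "-".join(NOTE_NAMES[p] for p in chord)

C = 0
tonic = triad(C, 1)           # I  in C major: C-E-G
relative_minor = triad(C, 6)  # vi in C major: A-C-E
vi7 = relative_minor + [(relative_minor[0] + 10) % 12]  # add the minor seventh

print("I  :", name(tonic))           # C-E-G
print("vi :", name(relative_minor))  # A-C-E
print("vi7:", name(vi7))             # A-C-E-G
# vi7 contains the same pitches as the article's C-E-G-A, which is why the
# substitution preserves the harmonic connection while adding melancholy color.
```

The same `triad` helper also produces the dominant of any degree, so a secondary dominant substitution is just a matter of treating that degree as a temporary tonic.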
Ancient Egyptians might have been just as vain as humans today. They seem to have styled their hair with fat-based products to enhance their appearance and accentuate their individuality, new research suggests. "Personal appearance was important to the ancient Egyptians so much so that in cases where the hair was styled, the embalming process was adapted to preserve the hairstyle," the researchers, based at the University of Manchester in the United Kingdom, write Aug. 16 in the Journal of Archaeological Science. "This further ensured that the deceased's individuality was retained in death, as it had been in life, and emphasizes the importance of the hair in ancient Egyptian society." The researchers studied hair from 18 mummies (15 mummified in a desert cemetery called the Dakhleh Oasis and three from museum samples of unknown origin) who lived around 300 B.C. in ancient Egypt. By taking a close look at the hairs under a microscope, the researchers noticed that nine of these mummies had an unknown substance coating their hair. Chemical analyses of the coating revealed it was made up of fatty acids from both plant and animal origins. The researchers believe that this fat-based hair gel was used by the Egyptians to mold and hold the hair in position to enhance appearance; since some of the deceased who had been mummified naturally in the desert also had fats in their hair, the product was evidently worn in life and not merely applied during embalming. When mummifying with embalming chemicals, the undertakers seem to have taken special care to retain the deceased's hairdos, as they used different chemicals on different parts of the body. "It is evident that different materials were used for different areas of the body," the researchers write. "The hair samples from the Dakhleh Oasis were not coated with resin/bitumen-based embalming materials, but were coated with a fat-based substance." The mummies had all different kinds of hairstyles depending on age, sex and presumed social status. Researchers have previously discovered objects in Egyptian tombs that seem to be curling tongs, so these might have been used in conjunction with the hair product to curl the hair into place, the researchers speculate.
by Staff Writers
Washington DC (SPX) Nov 16, 2012
Scientists from the American Meteorological Society (AMS) and the University of California, Berkeley have demonstrated that plants and soils could release large amounts of carbon dioxide as the global climate warms. This finding contrasts with the expectation that plants and soils will absorb carbon dioxide, and it is important because that additional carbon release from the land surface could be a potent positive feedback that exacerbates climate warming. "We have been counting on plants and soils to soak up and store much of the carbon we're releasing when we burn fossil fuels," Paul Higgins, a study co-author and associate director of the AMS Policy Program, said. "However, our results suggest the opposite possibility. Plants and soils could react to warming by releasing additional carbon dioxide, increasing greenhouse gas concentrations and leading to even more climate warming." The research team used a computer model of Earth's land surface to examine how carbon storage might react to a warmer planet with higher atmospheric carbon dioxide concentrations. The experimental design helps quantify the possible range in future terrestrial carbon storage. Results indicated that the potential range of outcomes is vast and includes the possibility that plant and soil responses to human-caused warming could trigger a large additional release of carbon. If that outcome is realized, a given level of human emissions could result in much larger climate changes than scientists currently anticipate. It would also mean that greater reductions in greenhouse gas emissions could be required to ensure carbon dioxide concentrations remain at what might be considered safe levels for the climate system. These findings could pose additional challenges for climate change risk management; recognizing such challenges will give decision makers a greater chance of managing the risks of climate change effectively. Dr. Higgins works on climate change and its causes, consequences, and potential solutions. His scientific research examines the two-way interaction between the atmosphere and the land surface, which helps quantify responses and feedbacks to climate change. Dr. Higgins's policy analysis helps characterize climate risks and identify potential solutions. He also works to inform policy makers, members of the media, and the general public about climate science and climate policy. Dr. John Harte, the study's other co-author, is a professor of energy and resources and of environmental science, policy and management at the University of California, Berkeley. His research interests include climate-ecosystem interactions, theoretical ecology and environmental policy. The study was published in a Journal of Climate paper titled "Carbon cycle uncertainty increases climate change risks and mitigation challenges."
Source: American Meteorological Society
The name of 16th century Polish astronomer Nicolaus Copernicus became a household word because he proposed that the Earth revolves around the sun. But the man who finally gathered scientific proof of that theory was English astronomer James Bradley, born during this month in 1693. Called the best astronomer in Europe by Isaac Newton, Bradley methodically observed the star Gamma Draconis and noticed slight seasonal shifts in its position, which he then observed in other stars as well. He called this effect "the aberration of light" and estimated its angle at 20 to 20.5 seconds; the modern value is 20.47 seconds. Eventually, Bradley realized that the displacement stemmed from viewing a stationary object from a moving one, the Earth--thus confirming Copernicus's concept. Bradley later discovered a second process that causes stars to wobble in their places in the firmament. In an effect he called "nutation"--known to be true today--subtle changes in the angle of Earth's rotation, caused by the moon's pull on the equator, alter apparent stellar positions.
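Bradley's figure is easy to verify: for a star near the pole of Earth's orbit, the aberration angle satisfies tan(kappa) = v/c, the ratio of Earth's orbital speed to the speed of light. A quick check with standard textbook values:

```python
import math

v_earth = 29.78e3   # mean orbital speed of Earth, m/s
c = 2.99792458e8    # speed of light, m/s

# Aberration of light: tan(kappa) = v / c
kappa_rad = math.atan(v_earth / c)
kappa_arcsec = math.degrees(kappa_rad) * 3600

print(f"aberration constant ~ {kappa_arcsec:.2f} arcseconds")  # ~20.49
```

The result lands squarely inside Bradley's estimated range of 20 to 20.5 arcseconds and within a few hundredths of the modern value of 20.47.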
Make science amazing for your grade-schooler. Science is such an important subject to learn. It helps make sense of everyday life. When children are young, they are curious about lots of things, and science explains them. The concepts are easy for you to understand and explain to your child. But as they grow, science becomes more difficult to learn and more difficult to teach. The concepts grow more confusing. You might feel like you need to go out and buy expensive equipment if you are going to do any experiments. It doesn't need to be difficult. With Amazing Science Discovery, you will learn right along with your child. You get five ebooks covering grades one to five. Each book explains specific topics in plain English and includes easy experiments to help reinforce the topic. The lessons are planned in a sequential manner so that you will both be learning what you need to know for the next lesson. You will even learn how to throw a science party! It's never too late to develop a love for science in your child. Act now! Watch a great video to find out the answer, plus what an atom is made up of, at How Small is an Atom. Ever wonder what happens to the food you eat after lunch? Why do you feel better and have more energy? You could explain the digestive system to your 4th grader, but think how cool it would be to offer an interactive demonstration. At Biology in Motion, you get this demonstration as well as others, like how the cardiovascular system works, cell division, or how the thyroid works. Sometimes science can be boring for your 4th grader. Time to add a little fun, like this lava lamp activity from Family Fun online. Remember the 70's? Flower power and lava lamps. You will learn about density and fluids that don't mix, and have a good time too. A very large and heavy boat floats on the water, but a tiny penny does not. Why is that? Check out this article with information and activities: Why Does a Boat Float, but Not a Penny? Do you have a 4th grader who is fascinated with bugs? At What's That Bug, you will find 2,834 species of beetles plus hundreds of posts about bugs of all kinds. What makes this blog different is that people send in pictures of bugs and ask the "bugman" about them. The Bugman then answers the questions with lots of information. Get your 'Bug On'! All that constant rubbing and washing; will your skin ever wear out? If you're teaching life science, your 4th grader might come up with this question. Discover the answer to this question, plus a lot more about your skin, in the article Will My Skin Wear Out? Did you know that your skin is the largest organ in your body?
In the early 19th century, most of the land that is now Alaska was claimed by the Russian empire, and its most significant community was Novo-Arkhangel’sk, which today is called Sitka. From 1808 until the sale of Alaska to the United States in 1867, Sitka was the administrative center of Russian possessions in America. The town was carved out of the forested lowlands of SE Alaska and housed a small but diverse population, with Russians, U.S. citizens, Europeans, and Native Alaskans co-existing. In this lesson, students use illustrations from a book commissioned by the Russian emperor to explore what Sitka might have been like during this period of transition, and how it might have looked if presented from a different point of view. - Analyze images; - Form hypotheses about audience and creator purpose; - Depict a place from a point of view other than that of the original image. Recommended Grade Level - World History and Cultures - National Expansion and Reform, 1815-1860 Adapted from a lesson by Roger Pearson
Southern white rhinoceros At the Detroit Zoo Male white rhinoceroses Jasiri ("courageous" in Swahili) and Tamba ("strut proudly" in Swahili) arrived in 2005 as the first of their species to live at the Detroit Zoo. Jasiri often shows his playful side by ganging up on his toys while running around the habitat. Tamba is the more dominant of the two and struts around with confidence and intelligence. The rhinos can be seen outdoors and indoors in their habitat near the Japanese macaques. The southern white rhinoceros is the second largest land mammal, smaller only than the elephant. It has a barrel-shaped body that is sparsely covered in hair. It has tufts of hair within its ears and on the end of its tail. Contrary to its name, the white rhino is actually light grey. Its name is thought to originate from the Dutch word "weit," meaning wide, referring to its wide square muzzle which is used for grazing on grasses. Scientific name: Ceratotherium simum simum Habitat: Open grasslands Size: 11-14 feet long; up to 6 feet tall at the shoulder Weight: 6,000 pounds Diet: The southern white rhinoceros is an herbivore and mainly eats grasses. Reproduction: Gestation 16 months; single calf Lifespan: 34 years Conservation Status: Near Threatened
Figure 2-29. Simple crystal frequency synthesizer.
Q25. What is the primary function of an afc circuit?
Q26. What is frequency synthesis?
AUDIO REPRODUCTION DEVICES
The purpose of audio reproduction devices, such as loudspeakers and headphones, is to convert electrical audio signals to sound power. Figure 2-30 shows you a diagram of a loudspeaker called the PERMANENT MAGNET SPEAKER. This speaker consists of a permanent magnet mounted on soft iron pole pieces, a voice coil that acts as an electromagnet, and a loudspeaker cone connected to the voice coil. The audio signal has been previously amplified (in terms of both voltage and power) and is applied to the voice coil. The voice coil is mounted on the center portion of the soft iron pole pieces in an air gap so that it is mechanically free to move. It is also connected to the loudspeaker cone; as it moves, the cone will also move. When audio currents flow through the voice coil, the coil moves back and forth in proportion to the applied ac current. Since the cone (diaphragm) is attached to the voice coil, it also moves in accordance with the signal currents; in so doing, it periodically compresses and rarefies the air, which produces sound waves.
Figure 2-30. Permanent magnet speaker.
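The proportionality between signal current and cone motion follows the motor equation F = BLi, where B is the air-gap flux density, L the length of voice-coil wire in the gap, and i the instantaneous audio current. The short sketch below illustrates this relationship for a 1-kHz tone; the flux density, wire length, and suspension stiffness are assumed, typical-looking values rather than figures from the manual, and the low-frequency approximation x = F/k is used for displacement:

```python
import math

# Assumed voice-coil parameters (illustrative, typical of a small speaker)
B = 1.0          # air-gap flux density, tesla
L = 5.0          # length of voice-coil wire in the gap, meters
STIFFNESS = 800  # suspension stiffness, newtons per meter

def cone_force(i_amps: float) -> float:
    """Motor force on the voice coil: F = B * L * i."""
    return B * L * i_amps

# A 1 kHz tone with 0.5 A peak current: force (and hence displacement)
# tracks the signal current, so the cone reproduces the audio waveform.
for n in range(5):
    t = n / 8000.0                                 # sample times in one cycle
    i = 0.5 * math.sin(2 * math.pi * 1000 * t)     # instantaneous current
    F = cone_force(i)
    x_mm = F / STIFFNESS * 1000                    # x = F / k, in millimeters
    print(f"t={t*1000:5.3f} ms  i={i:+.3f} A  F={F:+.3f} N  x~{x_mm:+.2f} mm")
```

Because force follows current sample by sample, the air pressure variations the cone produces carry the same waveform as the amplified audio signal.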
Class 6 NCERT Geography Notes – Chapter 3: Motions of the Earth
- Hello readers! As you know, the earth has two types of motions, namely rotation and revolution. Rotation is the movement of the earth on its axis.
- The movement of the earth around the sun in a fixed path or orbit is called revolution.
- The axis of the earth, which is an imaginary line, makes an angle of 66½° with its orbital plane.
- The plane formed by the orbit is known as the orbital plane.
- The earth receives light from the sun. Due to the spherical shape of the earth, only half of it gets light from the sun at a time.
- The portion facing the sun experiences day, while the other half, away from the sun, experiences night.
- The circle that divides the day from night on the globe is called the circle of illumination.
- The earth takes about 24 hours to complete one rotation around its axis. The period of rotation is known as the Earth day. This is the daily motion of the earth.
- If the earth did not rotate, the portion facing the sun would always experience day, bringing continuous warmth to the region, while the other half would remain in darkness and be freezing cold all the time.
- The second motion of the earth, around the sun in its orbit, is called revolution. It takes 365¼ days (one year) to revolve around the sun.
- We consider a year as consisting of 365 days only and ignore six hours for the sake of convenience.
- The six hours saved every year are added up to make one day (24 hours) over a span of four years.
- This surplus day is added to the month of February. Thus every fourth year, February has 29 days instead of 28. Such a year, with 366 days, is called a leap year (a short code sketch of this rule follows these notes).
- It is clear that the earth goes around the sun in an elliptical orbit. Notice that throughout its orbit, the earth is inclined in the same direction.
- A year is usually divided into summer, winter, spring and autumn.
- On 21st June, the Northern Hemisphere is tilted towards the sun. The rays of the sun fall directly on the Tropic of Cancer. As a result, these areas receive more heat.
- The areas near the poles receive less heat as the rays of the sun are slanting.
- The North Pole is inclined towards the sun, and the places beyond the Arctic Circle experience continuous daylight for about six months.
- Since a large portion of the Northern Hemisphere is getting light from the sun, it is summer in the regions north of the equator. The longest day and the shortest night at these places occur on 21st June.
- At this time in the Southern Hemisphere all these conditions are reversed. It is the winter season there. The nights are longer than the days.
- This position of the earth is called the Summer Solstice.
- On 22nd December, the Tropic of Capricorn receives the direct rays of the sun as the South Pole tilts towards it. As the sun's rays fall vertically at the Tropic of Capricorn (23½° S), a larger portion of the Southern Hemisphere gets light.
- Therefore, it is summer in the Southern Hemisphere, with longer days and shorter nights. The reverse happens in the Northern Hemisphere.
- This position of the earth is called the Winter Solstice.
- Do you know that Christmas is celebrated in Australia in the summer season?
- On 21st March and 23rd September, direct rays of the sun fall on the equator. At this position, neither of the poles is tilted towards the sun; so, the whole earth experiences equal days and equal nights. This is called an equinox.
- On 23rd September, it is autumn season in the Northern Hemisphere and spring season in the Southern Hemisphere. - The opposite is the case on 21st March when it is spring in the Northern Hemisphere and autumn in the Southern Hemisphere.
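The leap-year bookkeeping from these notes is easy to express in code. Here is a small sketch: the four-year rule is exactly as described above, while the century exception (century years are leap years only when divisible by 400) is the extra refinement the full Gregorian calendar adds:

```python
def is_leap_year(year: int) -> bool:
    """Four-year rule from the notes, plus the Gregorian century exception."""
    if year % 400 == 0:
        return True   # e.g. 2000: a century year divisible by 400
    if year % 100 == 0:
        return False  # e.g. 1900: a century year not divisible by 400
    return year % 4 == 0  # the basic every-fourth-year rule

for y in (2016, 2018, 1900, 2000):
    feb_days = 29 if is_leap_year(y) else 28
    label = "leap year" if is_leap_year(y) else "not a leap year"
    print(f"{y}: {label}, February has {feb_days} days")
```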
Citation: Huitt, W., & Hummel, J. (2003). Piaget's theory of cognitive development. Educational Psychology Interactive. Valdosta, GA: Valdosta State University. Retrieved [date] from http://www.edpsycinteractive.org/topics/cognition/piaget.html
Jean Piaget (1896-1980) was one of the most influential researchers in the area of developmental psychology during the 20th century. Piaget originally trained in the areas of biology and philosophy and considered himself a "genetic epistemologist." He was mainly interested in the biological influences on "how we come to know." He believed that what distinguishes human beings from other animals is our ability to do "abstract symbolic reasoning." Piaget's views are often compared with those of Lev Vygotsky (1896-1934), who looked more to social interaction as the primary source of cognition and behavior. This is somewhat similar to the distinctions made between Freud and Erikson in terms of the development of personality. The writings of Piaget (e.g., 1972, 1990; see Piaget, Gruber, & Voneche) and Vygotsky (e.g., Vygotsky, 1986; Vygotsky & Vygotsky, 1980), along with the work of John Dewey (e.g., Dewey, 1997a, 1997b), Jerome Bruner (e.g., 1966, 1974) and Ulric Neisser (1967), form the basis of the constructivist theory of learning and instruction. While working in Binet's IQ test lab in Paris, Piaget became interested in how children think. He noticed that young children's answers were qualitatively different from those of older children. This suggested to him that the younger ones were not dumber (a quantitative position: as they got older and had more experiences, they would get smarter); instead, they answered the questions differently from their older peers because they thought differently. There are two major aspects to his theory: the process of coming to know, and the stages we move through as we gradually acquire this ability.
Process of Cognitive Development. As a biologist, Piaget was interested in how an organism adapts to its environment (an ability Piaget described as intelligence). Behavior (adaptation to the environment) is controlled through mental organizations called schemata (sometimes called schemas or schemes) that the individual uses to represent the world and designate action. This adaptation is driven by a biological drive to obtain balance between schemata and the environment (equilibration). Piaget hypothesized that infants are born with schemata, operating at birth, that he called "reflexes." In other animals, these reflexes control behavior throughout life. However, in human beings, as the infant uses these reflexes to adapt to the environment, the reflexes are quickly replaced with constructed schemata. Piaget described two processes used by the individual in its attempt to adapt: assimilation and accommodation. Both of these processes are used throughout life as the person increasingly adapts to the environment in a more complex manner. Assimilation is the process of using or transforming the environment so that it can be placed in preexisting cognitive structures. Accommodation is the process of changing cognitive structures in order to accept something from the environment. Both processes are used simultaneously and alternately throughout life. An example of assimilation would be when an infant uses a sucking schema that was developed by sucking on a small bottle when attempting to suck on a larger bottle.
An example of accommodation would be when the child needs to modify a sucking schema developed by sucking on a pacifier to one that would be successful for sucking on a bottle. As schemata become increasingly more complex (i.e., responsible for more complex behaviors), they are termed structures. As one's structures become more complex, they are organized in a hierarchical manner (i.e., from general to specific).

Stages of Cognitive Development. Piaget identified four stages in cognitive development: sensorimotor, preoperational, concrete operations, and formal operations. Many pre-school and primary programs are modeled on Piaget's theory, which, as stated previously, provided part of the foundation for constructivist learning. Discovery learning and supporting the developing interests of the child are two primary instructional techniques. It is recommended that parents and teachers challenge the child's abilities, but NOT present material or information that is too far beyond the child's level. It is also recommended that teachers use a wide variety of concrete experiences to help the child learn (e.g., use of manipulatives, working in groups to get experience seeing from another's perspective, field trips, etc.).

Piaget's research methods were based primarily on case studies (i.e., they were descriptive). While some of his ideas have been supported through more correlational and experimental methodologies, others have not. For example, Piaget believed that biological development drives the movement from one cognitive stage to the next. Data from cross-sectional studies of children in a variety of western cultures seem to support this assertion for the stages of sensorimotor, preoperational, and concrete operations (Renner, Stafford, Lawson, McKinnon, Friot & Kellogg, 1976). However, data from similar cross-sectional studies of adolescents do not support the assertion that all individuals will automatically move to the next cognitive stage as they biologically mature simply through normal interaction with the environment (Jordan & Brownlee, 1981). Data from adolescent populations indicate that only 30 to 35% of high school seniors attained the cognitive development stage of formal operations (Kuhn, Langer, Kohlberg & Haan, 1977). For formal operations, it appears that maturation establishes the basis, but a special environment is required for most adolescents and adults to attain this stage. There are a number of specific examples of how to use Piagetian theory in the teaching/learning process.
While many patients recently diagnosed with scoliosis feel their condition developed suddenly, the reality is that as a progressive condition, it can be difficult to detect when mild; in most cases, by the time it's diagnosed, the condition has progressed enough to cause noticeable symptoms, meaning its initial development occurred some time before it was diagnosed. Scoliosis can develop at any age, but is most commonly diagnosed during adolescence. In addition, different types of scoliosis have different causes and reasons for developing when they do. From idiopathic to neuromuscular, degenerative, and congenital, scoliosis is a highly prevalent condition. There are many spinal conditions a person can develop, so let's start our discussion of scoliosis development with how the condition is first diagnosed.

If a healthy spine were viewed from the side, it would have a soft 'S' shape, and if viewed from the front and/or back, it would appear straight; this is due to natural and healthy spinal curves. There are a number of spinal conditions that involve a loss of the spine's healthy curves, so in order to be considered a true scoliosis, certain guidelines have to be met. Being diagnosed with scoliosis means an unnatural lateral (side-to-side) spinal curve has developed, but a scoliotic curve doesn't just bend unnaturally to the side; it also twists, and it's the condition's rotational component that makes scoliosis a complex 3-dimensional condition. In addition, the unnatural spinal curve has to be of a minimum size to be diagnosed as scoliosis: a minimum Cobb angle measurement of 10 degrees. Cobb angle is a key piece of information because it tells me how far out of alignment a scoliotic spine is. The measurement is taken during X-ray by drawing lines from the tops and bottoms of the curve's most tilted vertebrae; the intersecting lines form an angle that's expressed in degrees.

A patient's severity level not only shapes the design of effective treatment plans, it indicates likely symptoms and progressive rates, and as a progressive condition, scoliosis has it in its nature to worsen over time, especially if left untreated. So when it comes to first detecting the presence of scoliosis, severity is a key variable, because when mild, it can be difficult for anyone other than a specialist trained in recognizing early condition indicators to notice; in other words, when scoliosis is diagnosed isn't generally indicative of the condition's initial onset. Scoliosis being progressive means the unnatural spinal curve is virtually guaranteed to increase in size over time, and as scoliosis introduces a lot of uneven forces to the body, those forces will also increase over time, as will their effects. As a progressive condition, where a scoliosis is at the time of diagnosis is not indicative of where it will stay, and while we don't always understand what causes scoliosis to develop, we most certainly understand what causes it to progress: growth and development.

Now, as mentioned, scoliosis can affect any age, and there are also different condition types with unique etiologies and development scenarios, so for our current purposes, we'll focus on the condition's most prevalent types: idiopathic, degenerative, neuromuscular, and congenital. Idiopathic scoliosis is the most common, and the most mysterious, type of scoliosis; this is because idiopathic means not clearly associated with a single known cause.
Idiopathic scoliosis is generally regarded as multifactorial, meaning caused by multiple variables that can vary from patient to patient. Idiopathic scoliosis is the most common type to affect children and adults, with adolescent idiopathic scoliosis (AIS) being the most prevalent form overall. When it comes to who is most likely to develop scoliosis, the answer is adolescents between the ages of 10 and 18. As growth and development is the condition's progressive trigger, adolescents in, or entering into, the stage of puberty are at risk for rapid-phase progression, and this is also why scoliosis can seem to develop suddenly if a large growth spurt triggers significant progression. The reality, however, is that the condition could have developed long before but was undetected until it progressed and caused more noticeable symptoms; this is very common and is why the majority of my adolescent patients are at the moderate level: it's often not until a condition progresses from mild to moderate that it starts to become noticeable.

Adolescent Idiopathic Scoliosis Symptoms

The earliest and telltale signs of AIS are often uneven shoulders and hips, and this is part of the condition's main symptom of postural deviation, caused by the condition's uneven forces disrupting the body's overall symmetry. In addition, scoliosis development and progression in adolescents can cause other forms of postural deviation. These types of postural deviation can also cause changes to gait, balance, and coordination, as well as clothing suddenly seeming ill-fitting. AIS also isn't commonly painful, because scoliosis doesn't become a compressive condition until adulthood, when compression of the spine and its surrounding muscles and nerves can cause varying levels of pain; this is another reason scoliosis in adolescents can develop long before it's first diagnosed.

Idiopathic Scoliosis Development in Adults

A further testament to the challenge of scoliosis detection is the fact that idiopathic scoliosis is also the main type to affect adults, and these cases are, in fact, cases of AIS that went undiagnosed and untreated all through adolescence, not being diagnosed until the condition became painful in adulthood. Adults being diagnosed with idiopathic scoliosis in this way is a very common scenario, and the unfortunate reality is that had these patients been diagnosed and treated during adolescence, their spines would be in far better shape than they are by the time I see them. So in cases of idiopathic scoliosis in adults, the condition actually developed years before it was finally detected and diagnosed; pain is the main symptom of adult scoliosis, and this is what commonly brings adults in for a diagnosis and treatment. The second most common type to affect adults is degenerative scoliosis, and this condition type develops slowly over time.

Degenerative Scoliosis Development in Adults

When it comes to degenerative scoliosis, we're talking about scoliosis that affects aging adults and is caused by natural age-related spinal degeneration that occurs over time. In addition to natural age-related spinal degeneration, certain lifestyle factors can also contribute, such as carrying excess weight, low activity levels, chronic poor posture, and repeatedly lifting heavy objects incorrectly.
As the spine starts to degenerate, and it's often the intervertebral discs that are the first spinal structures to start deteriorating, its ability to maintain its natural curves and alignment is disrupted, causing the development of a scoliotic curve. Just as the spine degenerates slowly over the years, degenerative scoliosis can also be slow to develop, and once it's diagnosed, degeneration that's already occurred can't necessarily be reversed, but treatment can focus on preserving spinal function and slowing further deterioration.

In cases of neuromuscular scoliosis, the scoliosis develops as a secondary complication of a larger neuromuscular condition such as cerebral palsy, spina bifida, or muscular dystrophy. These patients are complex to treat because the underlying neuromuscular condition has to be the focus of treatment, as it's the underlying cause of the scoliosis. Scoliosis development in these scenarios is difficult to pinpoint because, again, its development was caused by the presence of another condition. Neuromuscular scoliosis can affect children, adolescents, and adults.

Cases of congenital scoliosis develop quickly, as they are caused by a malformation within the spine itself that develops in utero; infants are born with congenital scoliosis. Spinal malformations can mean one or more vertebrae (bones of the spine) are misshapen, and/or vertebral bodies can fail to separate into separate bones, instead becoming fused together. Congenital scoliosis is a rare form, affecting approximately 1 in 10,000 births.

When it comes to questions about scoliosis, there are rarely clear-cut answers, and this is because it's such a complex condition that ranges widely in severity and type. So when does scoliosis develop? The answer will be case-specific and depend on the patient and condition type in question. The reality is that as a progressive condition, it's extremely common for scoliosis to be diagnosed only after significant progression has already occurred, because in many cases, symptoms of mild scoliosis are difficult to detect. In the main condition type, adolescent idiopathic scoliosis, the condition isn't generally painful, nor does it often cause noticeable functional deficits, and when mild, its main symptom of postural deviation can be very subtle. So in many cases, when scoliosis is first diagnosed isn't indicative of its initial onset/development. When diagnosing a patient, a very common question I hear is, "Why do I suddenly have scoliosis?" That answer will vary based on key patient/condition variables, but oftentimes, it didn't develop suddenly; it just became noticeable suddenly due to progression, most often triggered by a growth spurt. Here at the Scoliosis Reduction Center, I'm proud to work towards increasing scoliosis awareness, particularly regarding its prevalence and early indicators, which are key to early detection and treatment success.
According to experts, 80% of learning is visual, which means that if your child is having difficulty seeing clearly, his or her learning can be affected. This also goes for infants, who develop and learn about the world around them through their sense of sight. To ensure that your children have the visual resources they need to grow and develop normally, their eyes and vision should be checked by an eye doctor at certain stages of their development.

According to the American Optometric Association (AOA), children should have their eyes examined by an eye doctor at 6 months, at 3 years, at the start of school, and then at least every 2 years thereafter. If there are any signs that there may be a vision problem, or if the child has certain risk factors (such as developmental delays, premature birth, crossed or lazy eyes, family history, or previous injuries), more frequent exams are recommended. A child who wears eyeglasses or contact lenses should have his or her eyes examined yearly, since children's eyes can change rapidly as they grow.

Eye Exams in Infants: Birth - 24 Months

A baby's visual system develops gradually over the first few months of life. Babies have to learn to focus and move their eyes, and to use them together as a team. The brain also needs to learn how to process the visual information from the eyes in order to understand and interact with the world. With the development of eyesight comes the foundation for motor development such as crawling, walking, and hand-eye coordination. You can ensure that your baby is reaching milestones by keeping an eye on what is happening with your infant's development and by scheduling a comprehensive infant eye exam at 6 months. At this exam, the eye doctor will check that the child is seeing properly and developing on track, and will look for conditions that could impair eye health or vision, such as strabismus (misalignment or crossing of the eyes), farsightedness, nearsightedness, or astigmatism. Since there is a higher risk of eye and vision problems if your infant was born premature or is showing signs of developmental delay, your eye doctor may require more frequent visits to keep watch on his or her progress.

Eye Exams in Preschool Children: 2-5

The toddler and preschool age is a period when children experience drastic growth in intellectual and motor skills. During this time they will develop the fine motor skills, hand-eye coordination, and perceptual abilities that will prepare them to read and write, play sports, and participate in creative activities such as drawing, sculpting, or building. This is all dependent upon good vision and visual processes. This is the age when parents should be on the lookout for signs of lazy eye (amblyopia), in which one eye doesn't see clearly, or crossed eyes (strabismus), in which one or both eyes turn inward or outward. The earlier these conditions are treated, the higher the success rate. Parents should also be aware of any developmental delays having to do with object, number, or letter recognition, color recognition, or coordination, as the root of such problems can often be visual. If you notice your child squinting, rubbing his eyes frequently, sitting very close to the TV or reading material, or generally avoiding activities such as puzzles or coloring, it is worth a trip to the eye doctor.

Eye Exams in School-Aged Children: Ages 6-18

Undetected or uncorrected vision problems can cause children and teens to suffer academically, socially, athletically, and personally.
If your child is having trouble in school or after-school activities, there could be an underlying vision problem. Proper learning, motor development, reading, and many other skills are dependent upon not only good vision, but also the ability of the eyes to work together. Children who have problems with focusing, reading, teaming their eyes, or hand-eye coordination will often experience frustration, and may exhibit behavioral problems as well. Often they don't know that the vision they are experiencing is abnormal, so they aren't able to express that they need help. In addition to the symptoms noted above, signs of vision problems in older children include:
- Short attention span
- Frequent blinking
- Avoiding reading
- Tilting the head to one side
- Losing their place often while reading
- Double vision
- Poor reading comprehension

The Eye Exam

In addition to basic visual acuity (distance and near vision), an eye exam may assess the following visual skills that are required for learning and mobility:
- Binocular vision: how the eyes work together as a team
- Peripheral vision
- Color vision
- Hand-eye coordination

The doctor will also examine the area around the eye and inside the eye to check for any eye diseases or health conditions. You should tell the doctor about any relevant personal history of your child, such as a premature birth, developmental delays, family history of eye problems, eye injuries, or medications the child is taking. This would also be the time to address any concerns or issues your child has that might indicate a vision problem. If the eye doctor does determine that your child has a vision problem, they may discuss a number of therapeutic options such as eyeglasses or contact lenses, an eye patch, vision therapy, or Ortho-k, depending on the condition and the doctor's specialty. Since some conditions are much easier to treat when they are caught early while the eyes are still developing, it is important to diagnose any eye and vision issues as early as possible. Following the guidelines for children's eye exams and staying alert to any signs of vision problems can help your child reach his or her potential.
A data structure is a way of storing data using a mathematical object. The data is arranged and linked in a specific way. Through the special structure of a data structure, one tries to implement desired functions particularly efficiently, whereby one usually optimises either for low memory requirements or for high speed. The data is divided into elements, and the individual elements are placed in regular relations to one another, e.g. a predecessor and successor relation. In computer science, the term Abstract Data Type (ADT) has become established. This term is closely related to the concept of data abstraction and especially data encapsulation. An abstract data type basically describes nothing more than a data encapsulation. When an abstract data type is defined, you don't have to worry about the structure or implementation of the data; you just specify which operations this abstract data type should be able to perform.
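As a minimal sketch of this idea (an illustration added here, not part of the original text), the following Python snippet specifies a stack as an abstract data type: the abstract class fixes only the operations, while the concrete class is free to choose any internal representation.

```python
from abc import ABC, abstractmethod

class Stack(ABC):
    """Abstract data type: only the operations are specified,
    not how the data is stored."""

    @abstractmethod
    def push(self, item): ...

    @abstractmethod
    def pop(self): ...

    @abstractmethod
    def is_empty(self) -> bool: ...

class ListStack(Stack):
    """One possible implementation; code using Stack never sees the list."""

    def __init__(self):
        self._items = []  # encapsulated: not part of the ADT's contract

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if self.is_empty():
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def is_empty(self) -> bool:
        return len(self._items) == 0

s = ListStack()
s.push(1)
s.push(2)
print(s.pop())  # 2 -- last in, first out
```

Because callers depend only on the operations, the internal list could be swapped for a linked structure without changing any code that uses the stack.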
Remember last September when NASA crashed a spaceship into an asteroid to see what would happen? Well, an investigation team led by the Johns Hopkins Applied Physics Lab (APL) released a paper confirming that the successful Double Asteroid Redirection Test (DART) mission wasn't just for fun; it proves that humanity can deflect asteroids and actually save the planet. NASA outlined the conclusion in a new blog post on Wednesday, explaining that the "kinetic impactor" technique, which APL writer Ajai Raj jokingly defines as "smashing a thing into another thing," could indeed be used as an effective means of planetary defense. "These findings add to our fundamental understanding of asteroids and build a foundation for how humanity can defend Earth from a potentially hazardous asteroid by altering its course," Nicola Fox, NASA's associate administrator for the Science Mission Directorate, stated in the agency's blog post. The findings are part of a series of four papers published in Nature describing the results and takeaways of the DART mission. The September 26th mission last year shortened the orbital period of the asteroid moonlet Dimorphos by 33 minutes, as calculated in one of the papers. The impact launched debris, known as ejecta, from the asteroid at the impact point, and the recoil effect of this debris was found to have contributed more to the asteroid's momentum change than the impact itself. Authors at APL reported in another paper that asteroids like Dimorphos, with a diameter of around half a mile, can be successfully deflected by this method without the need for an advance reconnaissance mission. But the writers warn that earthlings will need sufficient warning time, ideally decades in advance or several years at a minimum, to mitigate such a threat. All in all, there's a lot of optimism about humanity's ability to protect itself from giant space rock bullies. And we'll have to hand off the recipe for how to do this whole kinetic impactor thing to the next generation, because according to the APL, "no known asteroid poses a threat to Earth for at least the next century."
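To make the ejecta-recoil point concrete, here is a back-of-the-envelope sketch of the standard kinetic-impactor momentum balance. This is an illustration only: the masses, speed, and beta values below are assumed round numbers, not figures from the papers. The momentum-enhancement factor beta captures how much extra push the escaping ejecta adds on top of the spacecraft's own momentum.

```python
# Toy kinetic-impactor calculation (illustrative values, not mission data).
# Velocity change of the target: delta_v = beta * m * v / M,
# where beta = 1 would mean no ejecta contribution at all.

m_spacecraft = 580.0   # kg   (assumed impactor mass)
v_impact     = 6.1e3   # m/s  (assumed impact speed)
M_asteroid   = 4.3e9   # kg   (assumed target mass)

for beta in (1.0, 2.0, 3.6):
    delta_v = beta * m_spacecraft * v_impact / M_asteroid
    print(f"beta = {beta}: delta_v = {delta_v * 1000:.2f} mm/s")
```

Even a few millimeters per second, applied years in advance, shifts an orbit enough to turn a hit into a miss, which is why the papers stress warning time.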
The images of the Moon were captured by researchers at the Green Bank Telescope in the United States. Researchers have taken the highest-resolution images to date of the Moon from Earth thanks to the Green Bank Telescope located in West Virginia, United States. Project scientists have been working on developing a prototype system to create images of objects in the Solar System and have targeted our satellite for these impressive photographs.

The best shots of the Moon from Earth

To image the Moon, a low-power radar transmitter with up to 700 watts of output power at 13.9 gigahertz is used. Radio waves bounce off the Moon's surface, and the reflected signals are then collected by the ten 25-meter antennas of the Very Long Baseline Array (VLBA). One of the first shots was of the Tycho crater, with resolution detail of up to 5 meters. This feature is located in the southern highlands of the Moon. It is the youngest crater among the large impact craters on the Moon: its approximate age is 108 million years. However, the project aims even higher. The full version of the system is expected to use 500 kilowatts of power: with this it will be possible to detect, track, and characterize dangerous asteroids. "In our tests, we were able to zero in on an asteroid 2.1 million kilometers from us, more than 5 times the distance between Earth and the Moon. The asteroid is about a kilometer in size, which is large enough to cause global devastation in the event of an impact," says Patrick Taylor, head of the radar division. "With the high-powered system, we could study more objects much further away. When it comes to strategizing for potential impacts, having more lead time is everything."
Species group

A species group is an informal taxonomic rank into which an assemblage of closely related species within a genus is grouped because of their morphological similarities and their identity as a biological unit with a single monophyletic origin. The use of the term reduces the need to use a higher taxonomic category in cases with taxa that exhibit sufficient differentiation to be recognized as separate species but possess inadequate variation to be recognized as subgenera. Defining species groups is a convenient way of subdividing well-defined genera with a large number of recognized species. The use of species groups has enabled systematists to consolidate polytypic species into nominal species, which in turn can be grouped into the larger array of the species group.

As to whether or not members of a species group share a range, sources differ. A source from the Iowa State University Department of Agronomy says that members of a species group usually have partially overlapping ranges but do not interbreed with each other. A Dictionary of Zoology (Oxford University Press, 1999) describes a species group as a complex of related species that exist allopatrically, and explains that this "grouping can often be supported by experimental crosses in which only certain pairs of species will produce hybrids." The examples given below may support both uses of the term "species group."

- The fruit fly subgenus Sophophora contains the Drosophila melanogaster species group, which itself contains 12 subgroups. The Drosophila obscura species group belongs to the same subgenus and contains 6 subgroups.
- In Vespula, a genus of wasps, only a few species have a scavenging habit (as opposed to a strictly predatory habit) and thus are considered major pests. The most abundant and bothersome of these are the three species belonging to the "Vespula vulgaris species group," which includes the "common wasp" or "yellowjacket" (Vespula vulgaris), the "German wasp" or "European wasp" (Vespula germanica), and the "western wasp" or "western yellowjacket" (Vespula pensylvanica).
- The Central American bark scorpions Centruroides limbatus and Centruroides bicolor belong to the "Gracilis species group." All of the species in this group are characterized by their long, narrow pedipalps and overall relatively large size.
- The arachnids to which the common name "black widow spider" is given are in a species group that includes the "southern black widow" (Latrodectus mactans), the "northern black widow" (Latrodectus variolus), and the "western black widow" (Latrodectus hesperus).
- The Neotropical butterfly Morpho adonis is in a species group with Morpho eugenia and Morpho uraneis. Morpho marcus is also included in the group but might actually be the same species as Morpho adonis.
- Brachygobius, a small genus of gobies which are popular as aquarium fish, is informally divided by taxonomists into two species groups. The dwarf "Brachygobius nunus species group" contains Brachygobius nunus, Brachygobius aggregatus, and Brachygobius mekongensis, while the larger "Brachygobius doriae species group" contains the bigger species Brachygobius doriae, Brachygobius sabanus, and Brachygobius xanthomelas.
- The chameleon Brookesia minima has been characterized as belonging to a species group with other "Madagascan dwarf chameleons" such as Brookesia dentata, Brookesia tuberculata, and other new or unidentified species, such as a recently described chameleon from Tsingy de Bemaraha Strict Nature Reserve.
- Peromyscus, a genus of deer mice, has been divided into the subgenera Peromyscus and Haplomylomys, and these subgenera are subdivided further into thirteen species groups.
- Recent cytogenetic studies have shown that the Middle East blind mole rat (Spalax ehrenbergi) may actually be a species group containing several cryptic species that can be distinguished by chromosome numbers.

The term "species group" is also used in a different way, to describe the manner in which individual organisms group together. In this non-taxonomic context one can refer to "same-species groups" and "mixed-species groups." While same-species groups are the norm, examples of mixed-species groups abound. For example, zebra (Equus burchelli) and wildebeest (Connochaetes taurinus) can remain in association during periods of long-distance migration across the Serengeti as a strategy for thwarting predators. Cercopithecus mitis and Cercopithecus ascanius, species of monkey in the Kakamega Forest of Kenya, can stay in close proximity and travel along exactly the same routes through the forest for periods of up to 12 hours. These mixed-species groups cannot be explained by the coincidence of sharing the same habitat. Rather, they are created by the active behavioural choice of at least one of the species in question.

See also:
- Cryptic species
- Parapatric speciation
- Ring species

References:
- Iowa State University Department of Agronomy
- Michael Allaby. "Species group." A Dictionary of Zoology (Oxford University Press, 1999)
- Molecular systematics of the Peromyscus boylii species group
- Ranz JM, Maurin D, Chan YS, et al. (June 2007). "Principles of genome evolution in the Drosophila melanogaster species group". PLoS Biol. 5 (6): e152. doi:10.1371/journal.pbio.0050152. PMC 1885836. PMID 17550304.
- World-Wide Distribution of Pestiferous Social Wasps (Vespidae)
- Walter Reed Biosystematics Unit "Scorpion of the Day": Centruroides limbatus
- Kaston, B. J. (1970). "Comparative biology of American black widow spiders". Transactions of the San Diego Society of Natural History 16 (3): 33–82.
- Anderson Tully Worldwide
- Le Moult (E.) & Réal (P.), 1962-1963. Les Morpho d'Amérique du Sud et Centrale. Editions du cabinet entomologique E. Le Moult, Paris.
- Schäfer F (2005). Brackish Water Fishes. Aqualog. pp. 49–51. ISBN 3-936027-82-X.
- AdCham.com: Brookesia minima by E. Pollak
- Sözen M, Matur F, Çolak E, Özkurt Ş, Karataş A (2006). "Some karyological records and a new chromosomal form for Spalax (Mammalia: Rodentia) in Turkey" (PDF). Folia Zool. 55 (3): 247–256.
- Tosh CR, Jackson AL, Ruxton GD (March 2007). "Individuals from different-looking animal species may group together to confuse shared predators: simulations with artificial neural networks". Proc. Biol. Sci. 274 (1611): 827–32. doi:10.1098/rspb.2006.3760. PMC 2093981. PMID 17251090.
The history of electromagnetic theory begins with ancient measures to deal with atmospheric electricity, in particular lightning. People then had little understanding of electricity and were unable to scientifically explain the phenomena. In the 19th century, electric theory and magnetic theory were unified: it became clear that electricity should be treated jointly with magnetism, because wherever charges are in motion an electric current results, and magnetism is due to electric current. The source term for the electric field is electric charge, whereas that for the magnetic field is electric current (charges in motion); this statement is made precise by the equations sketched at the end of this section. Magnetism was not fully explained until the idea of magnetic induction was developed, and electricity was not fully explained until the idea of electric charge was developed.

Ancient and classical history

The knowledge of static electricity dates back to the earliest civilizations, but for millennia it remained merely an interesting and mystifying phenomenon, without a theory to explain its behavior, and it was often confused with magnetism. The ancients were acquainted with the rather curious properties possessed by two minerals, amber (Greek: ἤλεκτρον, electron) and magnetic iron ore (Greek: Μάγνης λίθος, Magnes lithos, "the Magnesian stone, lodestone"). Amber, when rubbed, attracts light bodies; magnetic iron ore has the power of attracting iron.
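As a compact statement of the source-term remark above (standard results added here for reference, in SI units), Gauss's law ties the electric field to charge density, and Ampère's law, in its magnetostatic form without the displacement-current term, ties the magnetic field to current density:

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}
\qquad\qquad
\nabla \times \mathbf{B} = \mu_0 \, \mathbf{J}
```

Here ρ is the electric charge density and J is the electric current density: charge sources the electric field, and moving charge sources the magnetic field.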
In week three, we were looking at rights ethics with regards to Locke. As a reminder, Locke said we have inalienable rights to life, liberty, and property, and that it is immoral to violate them. Many think we have more rights than those listed by Locke. Some even think we have a right to health care; that would mean it is the duty of the state to provide each citizen with their medical needs. Rights theory says to respect the entitlements we have. If a right is inalienable, it cannot ethically be violated, even with our consent. We have basic needs; rights are something beyond needs. They are what we should be authorized to have. We are due what we have a right to. That is not always the case with need: for example, we need food, but people often go hungry. A need refers to something we require physically to exist. A right is a moral entitlement to something. Asking if we have a right to food is a moral question. Needs are determined by the requirements of the body and of material existence. Rights are determined by moral reflection, inquiry, and argument. We have a right to own property, but we do not need it to live; we could imaginably be allowed to use another's. We have a right to own a home; we can rent.

Initial Post Instructions

For the initial post, respond to one of the following options, and label the beginning of your post indicating either Option 1 or Option 2:

Option 1: Assess the moral solutions arrived at through "care" (care-based ethics) and "rights" ethics to social issues of ethical import such as poverty, drug use, and/or lack of health care. That is, note any ethical problems that arise related to those particular issues. Then, say how both care-based and rights theories of ethics would solve those problems. Are those solutions correct? Why or why not? What is your own approach there?

Option 2: What moral guidelines should we use when it comes to recently introduced healthcare technologies of any kind (you will note and engage with your own examples) and social technologies of any kind (you will note and engage with your own examples)? Involve care-based ethics in your answer.

Follow-Up Post Instructions

Respond to at least two peers or one peer and the instructor. If possible, respond to one peer who chose an option different than the one you chose. Further the dialogue by providing more information and clarification. Make sure that you add additional information and do not repeat information already posted on the discussion board as you further the dialogue.
Hurricane Sandy is a stark reminder of the rising risks of climate change. A number of warming-related factors may well have intensified the storm’s impact. Higher ocean temperatures contributed to heavier rainfall. Higher sea levels produced stronger storm surges. New research suggests that Arctic melting may be increasing the risk of the kind of atmospheric traffic jam that drove Sandy inland. While no single weather event can be said to have been directly caused by climate change, our weather now is the product of our changing climate, as increased warming raises the probability of extreme weather events. In highlighting our vulnerabilities to extreme weather, Hurricane Sandy underscores two imperatives: We need to reduce the risks of climate change by reducing our carbon emissions, and we must strengthen our defenses against future impacts that it may be too late to avoid.
Disruptive Behavior Disorders

Disruptive behavior disorders are often diagnosed during early childhood and associated with Attention Deficit Hyperactivity Disorder. This article defines disruptive behavior disorders such as ADHD, Oppositional Defiant Disorder (ODD), and conduct disorders, and provides information on the treatment of these disorders.

What Is a Disruptive Behavior Disorder?

Disruptive Behavior Disorder is a technical term that refers to a specific portion of the Diagnostic and Statistical Manual of Mental Disorders, 4th Edition, Text Revision, abbreviated as DSM-IV-TR. The section is a subset of the Disorders Usually First Diagnosed in Infancy, Childhood, or Adolescence, with the subcategory name of Attention-Deficit and Disruptive Behavior Disorders. The diagnoses that are left in this category when Attention Deficit Hyperactivity Disorder is excluded are Oppositional Defiant Disorder and Conduct Disorder.

It is reported that the disruptive behavior disorders occur in between 4% and 9% of children, making them the most frequently found psychiatric disorders of children. They are found 3 to 4 times more often in children with an IQ that is below average than in those children who have a normal IQ. It is not unusual to find them coexisting with other disorders, including ADHD, mood disorders (such as depression), and substance abuse. In fact, estimates suggest that about 2/3 of children with ADHD will have a comorbid Disruptive Behavior Disorder diagnosis.

The DSM-IV-TR characterizes the child with ODD as negative, hostile, and defiant more than is expected for his or her age and over a period of at least 6 months. At least four behaviors from a list of 8 must be present. These include being annoying or being easily annoyed, losing his/her temper, arguing with adults and resisting their requests or rules, blaming others, and showing anger, resentfulness, spite, or vindictiveness. Oppositional Defiant Disorder is characterized as involving less aggression or violence than is found in Conduct Disorder. The DSM-IV-TR characterizes the child with Conduct Disorder as violating norms, rules, and the rights of others in a persistent and repetitive way, with at least 3 criteria evident over 12 months and at least 1 in the past 6 months; the criteria fall into four categories: aggression towards people and animals, property destruction, theft or deception, and serious rule-breaking.

The ICD-10 does not have a category called "Disruptive Behavior Disorder," but it does have subcategories of Conduct Disorder that refer to problems that are restricted entirely or almost entirely to the family situation (i.e., the child's behavior outside the home, at school for instance, does not show the characteristics of Conduct Disorder) and to a socialized form in which the characteristic behaviors occur in a group of peers who act out together. Both these disruptive behavior disorders cause impaired functioning in the educational, occupational, and/or social realm, and for a diagnosis, the behaviors must not be limited to times during which a mood disorder or psychotic disorder is present.

Treatment of Disruptive Behavior Disorders

Treatment will depend on the age of the child and whether there are comorbid issues. Treatments may include medication and therapy and may often involve family members as well.
Sheila Heit, Programs Coordinator of the Vermilion Public Library, presented a Fossil Rub Activity with two sessions on July 25. Fossils were on display for the children to feel, examine, and pass around. Six fossil casts came from the University of Alberta, Department of Earth and Atmospheric Sciences, and included fossil ferns, trilobites, and a fossil fish. Heit explained and asked the children questions about their knowledge of the various fossils that are found. She described how fossils are created: in one example, when a fish dies and sinks to the bottom of the water, what is left is bones; these are gradually covered by sand and mud and become part of the rock in the sediment, and eventually (millions of years later!), as the ground shifts and the earth changes, they may be exposed for us to find. Other fossils Heit had to show the children were various pieces of petrified wood. She went on to explain how pine tree resin has trapped whole insects, which are now found in what is called amber. Coal and oil also formed from ancient living things, which is why we call oil a fossil fuel. All the children had fun participating in this hands-on program of learning and doing.
- Students will learn about the fiercest predator in the ocean: sharks. From the biggest to the smallest, the book discusses the adaptations sharks have made to remain on Earth for so long, the scientific research and medicinal contributions the shark has made, and their unique ability to hunt and survive.
- This book teaches students about what a marsupial is, their gestation period, how they raise their young, and whether they are carnivores, omnivores, or herbivores.
- With interesting facts and photographs, this book discusses how to classify organisms.
- Photographs and text introduce animals that utilize camouflage, mimicry, and other curious techniques as defense mechanisms. These animals include skunks, porcupines, walking sticks, kingsnakes, and octopuses.
- Engaging text describes animals that don't always do what the rest of their species does.
- Describes the process of minting money.
- The planets are highlighted in this book, which answers questions such as the distance between the planets, how many planets are really in our solar system, current and future travel plans, and who studies the planets.
- This introduction to stars discusses the different kinds of stars, the Milky Way, and what stars are made of.
- Clear definitions supported by everyday examples and easy hands-on activities introduce young scientists to inclined planes.
- Outer space and its characteristics are discussed and identified throughout this book. The most up-to-date information from current research is provided.
- Show reverence (honor) for God and His name. - Show respect for the United States of America. - Show respect for others and for yourself. - Treat materials, equipment, and property in a responsible manner. - Use appropriate voice level and language at all times. - Bring only permitted items to school. - Follow dress code as printed in the Parent-Student Handbook. Pre-K & Kindergarten Rules - Use walking feet - Use listening ears - Use inside voices - Keep hands and feet to yourself - Be nice and share with everybody Behavior Policy for Pre-K and Kindergarten Our behavior policy uses green, yellow, and red faces. Each morning, every child receives a green smiley face beside his/her name which means that the child is being good. If a child begins to misbehave, then he/she will receive a verbal warning from the teacher. The student gets 3 verbal warnings. If the student has to be told a 4th time to stop the misbehavior, then he/she will receive a yellow face and lose 15 minutes of free play time. This is usually all that is needed for the child to behave. However, if a 5th warning is given, then the child receives a red face with a frown, loses all play time, and the parents are contacted. We find that this system works very well and is very easy for the children to understand. We have been very successful in using this method.
In literature, allegory is used as a symbolic device to represent abstract ideas or principles beyond the surface meaning. Allegorical subjects, items, or characters have a literal meaning as well as a figurative one. Recognizing allegory is an important part of literary analysis. Follow these steps to spot allegory in literature.

Look for a didactic theme or moral tone in the work. Allegory is often used as an embodiment of moral qualities and messages, as in Aesop's Fables. The story itself is constructed in such a way as to convey a central theme or lesson.

Take note of other literary devices, such as satire, that are often used in conjunction with allegory. George Orwell's political satire "Animal Farm," for example, uses animals and a farm setting as a representation of human society and a critique of politics.

Search for characters that are personifications of ideas like greed, envy, or hate. Often their names help to decipher their literary purpose. John Bunyan's allegorical masterpiece, "Pilgrim's Progress," is a prime example, with characters such as Christian, Old Honest, and Lord Carnal Delight.

Identify fantasy, science fiction, or supernatural elements in a work. These forms, both as genres and as devices, are often used metaphorically to reflect an idea or belief about the real world. C.S. Lewis in "The Chronicles of Narnia" uses fantasy as a genre and allegory as a literary device to do just that.
Stem cells are truly wondrous. They are found throughout our bodies, from the intestines to the heart, and even in bones. However, the truth about these amazing, multi-potential cells is often mingled with political debate, which often irritates rather than educates. Because of this, there is much confusion about what stem cells really are and where they really come from. The first thing to understand is that stem cells are undifferentiated cells with special capabilities, which include producing more stem cells through cell division. Under certain conditions they can become tissue- or organ-specific cells that can serve in various capacities or communicate with other cells for specific outcomes. Due to their regenerative abilities, scientists study stem cells to find treatments for diabetes, heart disease, and myriad other illnesses. Ongoing research into stem cells allows scientists to expand their understanding of how organisms develop and how a handful of tiny cells can become a complex being.

One of the common misconceptions about stem cells is that they all come from fetuses. While this is partially true, there are several other sources of stem cells, including the umbilical cord, bone marrow, and fat tissue. There are two categories of stem cells: embryonic, derived from an embryo, and non-embryonic, which are taken from grown individuals. Another misconception is that the cells used in research could have matured and become fully developed babies. The embryonic cells used for research come from embryos left over from in vitro fertilization and are not fertilized inside the womb. They come from blastocysts, fleeting clusters of cells that would otherwise disappear. Research continues with both embryonic and adult stem cells, but successful stem cell therapies for common diseases and common people have mainly been carried out using adult stem cells. In fact, about 70 diseases, from cancer to heart disease, have been successfully treated with adult stem cells. Another myth worth debunking is that scientists are using stem cells to clone humans. Again, this is not true and should be left to Hollywood.
Scientific name: Hydrilla verticillata

Alternative common names: Esthwaite waterweed (English)

Hydrilla is a submerged perennial aquatic plant with branching stems up to 2m long, though in deep water it can reach 7m. It forms dense submerged masses which impact recreational activities such as boating and fishing. It also threatens indigenous aquatic plants. It looks similar to the indigenous Lagarosiphon and would need to be examined microscopically to tell the two apart.

Where does this species come from? Suspected to originate from the warmer parts of Asia.

What is its invasive status in South Africa? NEMBA Category 1a.

Where in South Africa is it a problem? Pongolapoort (Jozini) Dam in northern KwaZulu-Natal.

How does it spread? It reproduces from seeds and two types of specialised buds. It can also propagate from fragments of the stem. Boats and fishing gear could spread this weed from Pongolapoort to other dams and rivers.

Why is it a problem? It crowds out other aquatic plants. It hinders recreational boating and fishing, and even makes swimming dangerous.

What does it look like? Leaves: Leaves grow in whorls of 3-8 and have distinctly serrated margins. They are usually 12mm long and 2mm wide and grow on long submerged stems up to 2m long. Flowers: Inconspicuous, about 3mm across, at the tips of long thin stalks; they float on the surface. Fruit/seeds: Rounded capsule.

Does the plant have any uses? Sometimes used for ornamental purposes in aquariums and ponds.
d = 2 or d = 6

Work Step by Step

Using divisibility rules, we know that a number is divisible by 4 if its last two digits form a number that is divisible by 4. In 4|963,23d, we look at 3d, with d being the unknown digit. When we substitute each of the digits 0-9 into 3d for d, we find that only 32 and 36 are divisible by 4. Therefore, d = 2 or d = 6.
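A quick brute-force check (a sketch added for illustration) confirms both the rule and the answer by testing all ten digits directly:

```python
# Find the digits d that make 963,23d divisible by 4.
for d in range(10):
    number = 963230 + d   # the number 963,23d
    last_two = 30 + d     # the number formed by the last two digits, "3d"
    # The divisibility rule agrees with the direct test:
    assert (number % 4 == 0) == (last_two % 4 == 0)
    if number % 4 == 0:
        print(d)          # prints 2 and 6
```

The rule works because 963,23d = 963,200 + 3d, and 963,200 is a multiple of 100, hence of 4, so only the last two digits decide divisibility.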
Discover your intellectual strengths

A variety of methods to improve your short-term memory. From spacing out study sessions instead of cramming, to using cues and acronyms, following these tips will produce results.

An acrostic is another mnemonic device that helps in memory retrieval. It is a series of lines or verses in which the letters, taken in a particular order, spell out a word or phrase.

E G B D F (the notes on the lines of the treble clef)

Here is a way for remembering the order of keys with sharps: G D A E B F C.

Here is a useful sentence to memorize the geological ages: Cambrian, Ordovician, Silurian, Devonian, Carboniferous, Permian, Triassic, Jurassic and Cretaceous. Here is another useful phrase in geology to remember which is which.

Here are a number of variations for remembering the correct spelling of necessary: Never Eat Cake Eat Shiny Sandwiches And Remain Young. Necessary - One Coffee, two Sugars.

Here are two rhymes for remembering the compass points North, East, South, West: Never Eat Shredded Wheat.

An acronym is a word formed from the initial letters of a name or by combining initial letters or parts of a series of words. For example, DOS is a well-known acronym for Disk Operating System. You may create your own acronyms in order to remember a series of items; for example, IPMAT is an acronym for the stages of cell division: Interphase, Prophase, Metaphase, Anaphase, Telophase.

Information is memorized more easily if learning periods are spaced out rather than crammed into a few study sessions. More material will be memorized if learning is spaced out and frequent breaks are taken than if the same information is repeated over and over again in one study session.

Cues are useful for behaviors that occur every day. If there is something that you forget to do, you may give yourself a cue that will help you remember what to do. For example, you may say that whenever you go to sleep you must do your stretching exercises.

It is important to learn something correctly during the first attempt, as this will help in remembering how to do it properly when it is revisited. If it is learned the wrong way, then the next time it is attempted the same mistake may be made, as the mind will simply be remembering the same incorrect method.

When learning new material it helps if you test yourself at the end rather than reading everything two or three times. By testing yourself you can check what you have learned, and all that is left will be to read up on anything that you may have forgotten. It may also help to try to teach the information that you have just learned to a friend. This will reinforce the knowledge that you have memorized and will let you know which information has been forgotten.

More efficient memory recall is achieved by taking frequent study breaks. This gives time for new facts and ideas to relate to each other and to previously learned information. After each study break make a small review of what you have learned, as this will help to consolidate the information even more. The high point of your recall occurs approximately 10 minutes after you have read something, and this is when it should be first reviewed, because recall falls dramatically after that. The information is reinforced even more by making a second review after one day, a third review after one week and a fourth review after one month (see the short scheduling sketch at the end of this section). If information is not reviewed at all, most of it will be lost within one or two days.

Before reading an article or a book, go through it quickly, noting the main ideas and reading the summary and the conclusion.
This will help you to link information as you read through the book and understand the details in each chapter. Also ask yourself what you expect to learn before you start reading. As you read through the book, new concepts and ideas will be absorbed and remembered more easily if they are associated with these questions. When trying to remember something that is on the tip of your tongue, try to remember the context in which you learned it, for example where you were at the time and who was with you. Regular exercise increases blood flow to the brain and can greatly improve mental abilities, including memory.
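The review schedule above lends itself to a tiny scheduling sketch, added here for illustration; the function and the example date are hypothetical, and only the intervals come from the text:

```python
from datetime import datetime, timedelta

# Intervals from the text: review ~10 minutes after reading,
# then after one day, one week, and one month (approximated as 30 days).
REVIEW_OFFSETS = [
    timedelta(minutes=10),
    timedelta(days=1),
    timedelta(weeks=1),
    timedelta(days=30),
]

def review_times(studied_at):
    """Return the four suggested review times for one study session."""
    return [studied_at + offset for offset in REVIEW_OFFSETS]

for t in review_times(datetime(2024, 1, 1, 9, 0)):
    print(t)
```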
Naturalism was a late nineteenth-century movement in theater, film, art, and literature that sought to portray the common values of the ordinary individual, as opposed to movements such as Romanticism or Surrealism, in which subjects may receive highly symbolic, idealistic, or even supernatural treatment. Naturalism was an outgrowth of Realism. Realism began after Romanticism, in part as a reaction to it. Unlike the Romantic ideal, which focused on the inner life of the (often great) individual, Realism focused on the description of the details of everyday existence as an expression of the social milieu of the characters. Honore de Balzac begins Old Goriot with a thirty-some-page description of the Maison Vauquer, a run-down but "respectable" boarding house owned by Madame Vauquer. While much of Realist literature moved attention away from the higher classes of society, there were some exceptions, such as Leo Tolstoy. But in naturalist literature and visual arts, the general direction of Realism is taken further. The subjects changed to primarily people of lower birth. In naturalist works, writers concentrate on the filth of society and the travails of the lower classes as the focal point of their writing. Naturalism was heavily influenced by both Marxism and evolutionary theory. Naturalists attempted to apply what they saw as the scientific rigor and insights of those two theories to artistic representation of society, as a means of criticizing late nineteenth-century social organization.

In theater, the naturalism movement developed in the late nineteenth and early twentieth centuries. Naturalism in theater was an attempt to create a perfect illusion of reality through detailed sets, an unpoetic literary style that reflects the way ordinary people speak, and a style of acting that tries to recreate reality (often by seeking complete identification with the role, as advocated by Stanislavski). As founder of the first acting "System," co-founder of the Moscow Art Theater (1897- ), and an eminent practitioner of the naturalist school of theater, Konstantin Stanislavski unequivocally challenged traditional notions of the dramatic process, establishing himself as one of the most pioneering thinkers in modern theater. Stanislavski coined phrases such as "stage direction," laid the foundations of modern opera, and instantly brought fame to the works of such talented writers and playwrights as Maxim Gorky and Anton Chekhov. His process of character development, the "Stanislavski Method," was the catalyst for method acting, arguably the most influential acting system on the modern stage and screen. Such renowned schools of acting and directing as the Group Theater (1931-1941) and The Actors Studio (1947- ) are a legacy of Stanislavski's pioneering vision and naturalist thought.

Naturalism was criticized in the mid-twentieth century by Bertolt Brecht and others who argued instead for breaking the illusion of reality in order to encourage detached consideration of the issues the play raises. Though it retains a sizable following, most Western theater today follows a semi-naturalistic approach, with naturalistic acting but less realistic design elements (especially set pieces). Naturalistic performance is often unsuitable when performing other styles of theater, particularly older styles. For example, Shakespearean verse often requires an artificial acting style and scenography; naturalistic actors try to speak the lines as if they are normal, everyday speech, which often sounds awkward in context.
Film, on the contrary, permits a greater scope of illusion than is possible on stage. Naturalism is the normal style, although there have been many exceptions, including the German Expressionists and modern directors such as Terry Gilliam, who have reveled in artificiality. Even a fantastical genre such as science fiction can have a naturalistic element, as in the gritty, proletarian environment of the commercial space-freighter in Alien.

The term naturalism describes a type of literature that attempts to apply scientific principles of objectivity and detachment to its study of human beings. Unlike realism, which focuses on literary technique, naturalism implies a philosophical position. For naturalistic writers, since human beings are, in Emile Zola's phrase, "human beasts," characters can be studied through their relationships to their surroundings. Naturalistic writers were influenced by Charles Darwin's theory of evolution. They believed that one's heredity and social environment decide one's character. Whereas realism seeks only to describe subjects as they really are, naturalism also attempts to determine "scientifically" the underlying forces (i.e., the environment or heredity) influencing these subjects' actions. Both are opposed to Romanticism, in which subjects may receive highly symbolic, idealistic, or even supernatural treatment. Naturalistic works often include uncouth or sordid subject matter. For example, Émile Zola's works had a sexual frankness along with a pervasive pessimism. Naturalistic works exposed the dark harshness of life, including poverty, racism, prejudice, disease, prostitution, and filth. They were often very pessimistic and frequently criticized for being too blunt.

In the United States, the genre is associated principally with writers such as Abraham Cahan, Ellen Glasgow, David Graham Phillips, Jack London, and, most prominently, Stephen Crane, Frank Norris, and Theodore Dreiser. The term naturalism operates primarily in counterdistinction to realism, particularly the mode of realism codified in the 1870s and 1880s and associated with William Dean Howells and Henry James.

It is important to distinguish American literary naturalism, with which this entry is primarily concerned, from the genre also known as naturalism that flourished in France from the 1850s to the 1880s. French naturalism, as exemplified by Gustave Flaubert, and especially Emile Zola, can be regarded as a programmatic, well-defined, and coherent theory of fiction that self-consciously rejected the notion of free will and dedicated itself to the documentary and "scientific" exposition of human behavior as being determined by, as Zola put it, "nerves and blood." Many of the American naturalists, especially Norris and London, were heavily influenced by Zola. They sought explanations for human behavior in natural science and were skeptical, at least, of organized religion and beliefs in human free will. However, the Americans did not form a coherent literary movement, and their occasional critical and theoretical reflections do not present a uniform philosophy. Although Zola was a touchstone of contemporary debates over genre, Dreiser, perhaps the most important of the naturalist writers, regarded Honore de Balzac, one of the founders of Realism, as a greater influence. Naturalism in American literature is therefore best understood historically in the generational manner outlined above.
In philosophical and generic terms, American naturalism must be defined rather more loosely, as a reaction against the realist fiction of the 1870s and 1880s, whose scope was limited to middle-class or "local color" topics, with taboos on sexuality and violence. Naturalist fiction often concentrated on the non-Anglo, ethnically marked inhabitants of the growing American cities, many of them immigrants and most belonging to a class spectrum ranging from the destitute to the lower middle class. The naturalists were not the first to concentrate on the industrialized American city, but they were significant in that they believed that the realist tools refined in the 1870s and 1880s were inadequate to represent it. Abraham Cahan, for example, sought both to represent and to address the Jewish community of New York's East Side, of which he was a member. The fiction of Theodore Dreiser, the son of first- and second-generation immigrants from Central Europe, features many German and Irish figures. Frank Norris and Stephen Crane, themselves from established middle-class Anglophone families, also registered the ethnic mix of the metropolis, though for the most part via reductive and offensive stereotypes. In somewhat different ways, more marginal to the mainstream of naturalism, Ellen Glasgow's version of realism was specifically directed against the mythologizing of the South, while the series of "problem novels" by David Graham Phillips, epitomized by the prostitution novel Susan Lenox: Her Fall and Rise (1917), can be regarded as naturalistic by virtue of their underclass subject matter.

Allied to this, naturalist writers were skeptical towards, or downright hostile to, the notions of bourgeois individualism that characterized realist novels about middle-class life. Most naturalists demonstrated a concern with the animal or the irrational motivations for human behavior, sometimes manifested in connection with sexuality and violence. Here they differed strikingly from their French counterparts. The naturalist often describes his characters as though they are conditioned and controlled by environment, heredity, instinct, or chance. But he also suggests a compensating humanistic value in his characters or their fates, which affirms the significance of the individual and of his life. The tension here is between the naturalist's desire to represent in fiction the new, discomfiting truths he has found in the ideas and life of his late nineteenth-century world, and his desire to find some meaning in experience that reasserts the validity of the human enterprise.

The works of Stephen Crane played a fundamental role in the development of literary naturalism. While supporting himself by his writings, he lived among the poor in the Bowery slums to research his first novel, Maggie: A Girl of the Streets (1893). Crane's first novel is the tale of a pretty young slum girl driven to brutal excesses by poverty and loneliness. It was considered so sexually frank and realistic that the book had to be privately printed at first. It was eventually hailed as the first genuine expression of naturalism in American letters and established its creator as the American apostle of an artistic revolution that was to alter the shape and destiny of civilization itself. Much of Crane's work is narrated from the point of view of an ordinary person caught in extraordinary circumstances. For example, The Red Badge of Courage depicted the American Civil War from the point of view of an ordinary soldier.
It has been called the first modern war novel. One of Stephen Crane's more famous quotes comes from his naturalistic text The Open Boat: "When it occurs to a man that nature does not regard him as important, and that she feels she would not maim the universe by disposing of him, he at first wishes to throw bricks at the temple, and he hates deeply the fact that there are no bricks and no temples."

Benjamin Franklin Norris (March 5, 1870 – October 25, 1902) was an American novelist during the Progressive Era, writing predominantly in the naturalist genre. His notable works include McTeague (1899), The Octopus: A California Story (1901), and The Pit (1903). Although he did not support socialism as a political system, his work nevertheless evinces a socialist mentality and influenced socialist/progressive writers such as Upton Sinclair. Like many of his contemporaries, he was profoundly influenced by the advent of Darwinism. Through many of his novels, notably McTeague, runs a preoccupation with the notion of the civilized man overcoming the inner "brute," his animalistic tendencies.

Considered by many to be the leader of naturalism in American writing, Theodore Dreiser is also remembered for his stinging criticism of the genteel tradition and of what William Dean Howells described as the "smiling aspects of life" typifying America. In his fiction, Dreiser deals with social problems and with characters who struggle to survive. His sympathetic treatment of a "morally loose" woman in Sister Carrie was called immoral, and he suffered at the hands of publishers. One of Dreiser's favorite fictional devices was the use of contrast between the rich and the poor, the urbane and the unsophisticated, and the power brokers and the helpless. While he wrote about "raw" experiences of life in his earlier works, in his later writing he considered the impact of economic society on the lives of people in the remarkable trilogy The Financier, The Titan, and The Stoic. His best-known work is An American Tragedy, which shows a young man trying to succeed in a materialistic society.

Quite a few other authors participated in the movement of literary naturalism. They include Edith Wharton (The House of Mirth (1905)), Ellen Glasgow (Barren Ground, 1925), John Dos Passos (U.S.A. trilogy (1938): The 42nd Parallel (1930), 1919 (1932), and The Big Money (1936)), James T. Farrell (Studs Lonigan (1934)), John Steinbeck (The Grapes of Wrath, 1939), Richard Wright (Native Son (1940), Black Boy (1945)), Norman Mailer (The Naked and the Dead, 1948), William Styron (Lie Down in Darkness, 1951), Saul Bellow (The Adventures of Augie March, 1953), and Jack London. These authors reshaped the way literature was perceived, and their impact spread all over the world (e.g., to France). The literary naturalism movement had a tremendous effect on twentieth-century literature. Donald Pizer, author of Twentieth-Century American Literary Naturalism: An Interpretation, conducted an analysis to see exactly what attributes tied the different naturalistic texts together and gave them their naturalistic identity. He used the works of John Dos Passos, John Steinbeck, and James T. Farrell in his study. Ultimately, Pizer concluded that the naturalistic tradition that bound these authors and their works together was the struggle between fiercely deterministic forces in the world and the individual's desire to exert freedom in the world.
In other words, Pizer saw such works as reflections on Jean-Jacques Rousseau's famous line, "Man is born free, and everywhere he is in chains." He states, "The naturalistic novelist is willing to concede that there are fundamental limitations to man's freedom, but he is unwilling to concede that man is thereby stripped of all value." Based on this, Pizer identified three recurring themes in naturalistic writing: 1) the tragic waste of human potential due to vile circumstances, 2) order (or the lack of it), and 3) the individual's struggle to understand the forces affecting one's life. The impact that the naturalism movement had on American writers of the twentieth century was colossal. It fed into the evolution of the modernism movement during the dreadfully real times of World War I and World War II, and it made one realize that life was truly a struggle with the forces of nature that toyed with the individual.
Polyps are abnormal growths of tissue that can be found in any organ that has blood vessels. They are most often found in the colon, nose, or uterus. Most polyps are noncancerous (benign). However, because polyps result from abnormal cell growth, they can eventually become cancerous (malignant). Whether or not a polyp is cancerous can be determined with a biopsy. Treatment for polyps depends on their location, size, and whether or not they are cancerous.

When a polyp grows in your ear canal, it is called an aural polyp. Inflammation, a foreign object, a cyst, or a tumor can cause an aural polyp. Symptoms include loss of hearing and bloody drainage from the ear.

Polyps that grow on the part of the uterus that connects to the vagina are called cervical polyps. These polyps are common in women over the age of 20 who have had children. They can cause abnormal bleeding and heavy menstruation, but they often produce no symptoms. Most cervical polyps are noncancerous.

Most colonic polyps are noncancerous, but colorectal cancer generally develops from a benign polyp. Your risk of developing polyps in the colon increases with age. You are more likely to have them if you have a family history of colonic polyps or cancer, or if you have a high-fat, low-fiber diet. Symptoms can include blood in your stool, pain, obstruction, constipation, and diarrhea. Colonoscopy screenings are recommended because polyps of the colon often cause no symptoms. Early-stage polyps can be removed during a colonoscopy.

Nasal polyps can be found near the sinuses. If they become large enough, they can block the sinuses and the nasal airway. You are more likely to develop nasal polyps if you have chronic sinus infections, allergies, asthma, or cystic fibrosis. Symptoms can feel like a common cold that never goes away.

Stomach polyps, also known as gastric polyps, occur on the lining of your stomach. Gastric polyps are rare. Symptoms can include pain or tenderness, nausea, vomiting, and bleeding; in most cases, there are no symptoms. Most stomach polyps are noncancerous, but some types may eventually develop into stomach cancer. A biopsy is generally ordered.

Most uterine polyps are noncancerous. Women of any age can develop uterine polyps, but they are more common after age 40. They can also occur after menopause. Symptoms can include irregular menstrual bleeding, but often there are no symptoms.

When you have a polyp, your doctor may want to perform a biopsy to find out if it is cancerous or noncancerous. In a biopsy, a sample of tissue is removed and analyzed under a microscope. Depending on where the polyp or polyps are located, various procedures are used to obtain a sample. These include:
- colonoscopy for polyps located in the large bowel
- colposcopy for polyps located in the vagina or cervix
- esophagogastroduodenoscopy (endoscopy) for polyps in the stomach or small bowel
If the polyp is located in an area that is easy to reach, a small piece of tissue is simply removed and biopsied.

Treatment for polyps depends on a number of factors, including:
- whether or not the polyps are cancerous
- how many polyps are found
- where they are located
- their size
Some polyps will not require treatment. Others may be removed as a precaution against the future development of cancer.
Creating an emotionally safe school is essential in developing intrapersonal and interpersonal intelligence (Bluestein, 2001). Emotionally safe schools can be established by creating environments where children feel safe, can take risks, are challenged but not overly stressed, and where play, pleasure, and fun are facilitated (Bluestein, 2001). In order for trust to be established, children must feel safe (Bluestein, 2001). If a child goes to school in fear of being bullied, beaten up, or murdered, personal intelligence (along with most other intelligences) is not going to develop appropriately. A safe environment is created by not allowing one child to invade another child's body, space, or material boundaries. A safe environment is one which has clear expectations regarding the safety of all students. Bullying is not tolerated. Conflict resolution skills are taught and modeled by teachers.

An emotionally safe school allows the child to fail without feeling that he is a failure (Bluestein, 2001). Appropriate challenges are facilitated by teachers. Children are not pressured to receive a particular grade or obtain a particular score. Children are expected to debate, discuss, and problem-solve. If they come to an incorrect solution, they are encouraged to try again or to try another method of problem solving. Children are not belittled, punished, or embarrassed when they do not succeed or do not meet their own goals. The child's worth is not determined by his test score or performance. The child is valued because she is a member of the class. In an emotionally safe classroom, teachers make mistakes. They share these mistakes with children and sometimes elicit the children's help in solving their problems.

Contemporary schoolchildren bring many forms of stress with them to the classroom. The stress can take the form of academic pressure, familial pressure to perform, being part of a single-parent family, hurried schedules, and pressure to grow up too fast (Elkind, 1988; Bluestein, 2001). The pressure can come from school, home, or the media. Stress causes wear on bodily systems, and when one is overstressed, the immune system can be directly affected. Stress uses up energy reserves, demands a greater amount of energy, and forces the body to respond physically through aggression, outbursts, or illness (Elkind, 1988). Stress can be reduced by making sure children's basic needs are met, that they feel safe, and that they are able to take risks without fear of failure, and by having appropriate expectations of children at specific ages.

Play, Pleasure, and Fun
Part of developing intrapersonal intelligence is being able to freely engage in pleasurable experiences and recognizing that pleasure, fun, and play are a normal and healthy part of life. Play can encourage the personal intelligences in a variety of ways. A quiet center can be incorporated into the classroom. This is a space where the child can retreat, rest, be alone, work on journals, or calm down. Soft, soothing sensory materials can be available for children to look at or touch. A cardboard box for a child to crawl into, furnished with pillows and blankets, can be created for children who need to get away from the normal routine for a few minutes. (This is not used as a punishment, but as a child-initiated or teacher-suggested experience to help a child who needs to be alone for a few minutes.) Puppetry can offer the child an opportunity to communicate feelings and emotions in a nonthreatening environment.
The author has made some interesting observations as a puppeteer. Children often try opposite roles with puppets. For example, a child who responds very physically or aggressively will often choose a shy or timid puppet. An introverted child will often choose an aggressive, loud, or large puppet. Children with emotional disorders often prefer to share emotions through the use of a puppet.

The dramatic play area can have props available to encourage children to explore different familial and community roles. Children can begin to establish empathy through role-playing and risk-taking. The dramatic play and music areas can also offer culturally appropriate props and instruments. Props that accurately represent the various cultures relevant to the children's lives can be made available for exploration and play.

The personal intelligences can be integrated throughout the rest of the classroom with appropriate facilitation. If a conflict arises between two children, the teacher can help facilitate a resolution. The conflict can be resolved by helping the children verbalize the situation and allowing each child to state her/his side of the conflict. Acknowledge the child's feelings with words such as, "I can see that made you angry," or "I can see that you are frustrated by this." This validates the child's feelings without judging them. After both sides have been stated, encourage the children to discuss and brainstorm possible solutions to the problem. The teacher should then accept the solution (as long as it respects the safety of the children involved), even if the teacher disagrees with it. This type of conflict resolution encourages the child to take responsibility for the situation, encourages negotiation, and values each child's ideas and input.

In addition to stress, risk-taking, safety, and fun, teachers also have a responsibility to bring experiences into the class that are emotionally relevant. Emotional relevance depends upon many factors. Culture, age, developmental level, interest, and experiences influence emotional relevance (Hyson, 1994). Hyson (1994, p. 84) advocates materials that "encourage children to talk about, write about, and play about emotionally important ideas." For example, if a plane crashed nearby and children in the class knew about it, planes, ambulances, police officers, EMTs, and hospital props would be necessary for children to express the events emotionally and cognitively. If a new baby were expected in the home, new baby dolls and care props (diapers, bottles, pacifiers, etc.) would be added to the housekeeping center. When meaningful emotional experiences are provided in this way, a sense of community develops that greatly influences the child's emotional development.
We have all studied chemistry and done our fair share of experiments. Much of our coursework revolves around metals, and this is no surprise: metals are among the most reactive of all elements and are involved in a large share of everyday reactions. Some of these are more reactive than others.

Let us talk about copper and zinc, then. In a displacement reaction involving the two, we see zinc displacing the copper: the zinc acquires a pinkish layer of copper, and the color of the copper-ion solution starts to fade. This happens because zinc is more reactive than copper, but why?

Metals react by losing electrons, and the ease with which they do so defines their reactivity. In this regard, they are reducing agents. A zinc atom is larger than a copper atom, yet in the simple picture used here, both copper and zinc have the same number of valence electrons, i.e., two. On this picture, zinc has a lower electron density and can lose its electrons more easily. Hence, it is more reactive.

Metals can be arranged in a specific order called the "reactivity series." In this simple model, the series is arranged in order of increasing electron density: sodium sits near the top (most reactive, least electron density) and copper near the bottom (least reactive, most electron density).

The experimental way of finding the reactivity of metals is to react them with each other. The more reactive metal performs the function of a reducing agent and gives its electrons to the other metal's ions. Hence, it displaces the other metal. To conclude, zinc is more reactive than copper because of its greater electron-releasing power, and this is also verified experimentally. That, then, is the simple explanation of the relative reactivity of metals. Try it out in your chemistry lab next time!
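For reference, the displacement reaction described above can be written out explicitly. The copper salt is assumed here to be copper(II) sulfate, the usual choice in a school lab; any soluble copper(II) salt behaves the same way:

Zn(s) + CuSO4(aq) -> ZnSO4(aq) + Cu(s)        (overall reaction)
Zn(s) + Cu2+(aq) -> Zn2+(aq) + Cu(s)          (net ionic form)

The zinc is oxidized (it loses two electrons) and the copper ion is reduced, which is exactly the electron transfer described in the paragraphs above.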
Critical and creative thinking skills
Critical and creative thinking skills are used throughout our lives to help us make important decisions and guide us through our most difficult and treasured moments. Preparing creative and critical thinkers is therefore a central goal of education, yet skills instruction in schools often falls short: American young people, in general, do not exhibit an impressive level of skill in critical or creative thinking. These skills can, however, be taught. A thinking skill is any cognitive process that is broken down into steps and explicitly taught (Johnson). All students can be taught to sharpen their critical and creative thinking skills and to become more independent thinkers, and teachers can support children's critical and creative thinking from the early years onward.

Critical and creative thinking are closely related, and may very well be different sides of the same activity. Critical thinking involves analyzing, synthesizing, evaluating, and reflecting; creative thinking enables people to innovate and adapt to change more easily, and useful creative thinking techniques can be learned and practiced (for example, some programs have student teams practice quick-thinking skills for an "instant challenge"). Profound thinking requires both imagination and intellectual ideas; to produce excellence in thinking, we need both. Critical thinking is at the core of most intellectual activity that involves students, and teachers are increasingly expected to equip students with the confidence and skills to use critical and creative thinking purposefully.
The spine is made up of bones called vertebrae separated by small discs to provide a cushion. The discs consist of a tough outer layer (annulus) and a soft inner layer (nucleus). Kyphosis is an exaggerated, forward rounding of the upper back. This condition can result from developmental problems, degenerative conditions, or trauma. It also can occur as the result of compensations for other conditions or cause further complications to the neck, low back, or shoulders. Lumbago describes low back pain or discomfort. The pain can be described as acute, lasting from a few days to a few weeks, or chronic, lasting for months or longer. Sacroiliitis is an inflammation of one or both of the sacroiliac joints, which connect the pelvis to the sacrum. The sacrum and the iliac bones are held together by a collection of strong ligaments allowing relatively little motion at the sacroiliac joints. With sacroiliitis, even slight movements can cause discomfort or pain.
Intrapersonal communication can be described as self-talk, or the conversation that people have with themselves, depending on the circumstances in which they find themselves. Sometimes it happens when people need to make an important decision or learn something about themselves. Some may say that it is a form of thought, and in a way this may be correct, because it is thinking that takes place within the individual as a way of reasoning about what is going on in the mind. In Shakespeare's plays, the device is used quite often: a character engages in a monologue to reason through the events that have occurred. It is thus an effective cognitive process. It is also an effective form of communication, even though it happens within the individual and does not involve a separate sender and receiver.

The relation of intrapersonal communication to communication in general is deeper than is often perceived. Most recent research shows that the first step towards effective communication with others is successful communication with the self. The self is a concept mostly used to describe who and what people think they are; in other words, it relates to their identity. Some researchers divide the self into two dimensions (Steinberg, 2007). The self can be described as an internal thing, made up of a combination of characteristics: the individual's personality, values, beliefs, and habits that distinguish them from any other person (Steinberg, 2007). At the same time, it is a social thing, growing out of the contacts formed with other people, and the main function it serves is to guide communication. The relationship between the self and communication is closer still: the self is shaped by relationships with others, such as family and friends, which are built on communication, and the self in turn guides communication and the relationships formed with others (Steinberg, 2007).

This sets out the parameters of communication and raises the question of whether people must be actively conscious to communicate. Not all cognitive processes are conscious, and in that sense one does not have to try; but communication requires an objective, and in that sense it is a conscious process, so one does have to try to communicate.

Intrapersonal communication, for one, is highly connected to concepts of the self, because its whole basis is the ability of an individual to communicate without aid. Intrapersonal communication relates to self-talk, or self-imaging. The term self-concept relates to how individuals think and feel about their identities (Steinberg, 2007). It involves the way they view their physical appearance as well as their mental abilities, strengths, and weaknesses. It is the picture that one sends to others by the way one behaves in a certain situation. This is the basic looking glass that people use when evaluating themselves, and it operates through self-comparison with others. This is important in intrapersonal communication because, in the monologues used by Shakespeare, some characters used their perceptions of other people to gauge their own behaviors and reactions to a situation, which shaped how they viewed their characters henceforth. The example given, in which a woman is sleeping while her husband walks into the room, is not an example of interpersonal communication.
Body language is a form of communication; however, it is not a form of intrapersonal communication or of verbal communication in general, because those would require the actual use of speech. This does not take away from the importance of body language, which in some instances is still perceived as a more effective form of communication than speech.
The idea of applying a regular computer chip directly to your brain is silly, so scientists at Japan's Yokohama National University have created a new material that can be shaped into complex, conductive microscopic 3D structures. What does that mean? It could potentially lead to custom brain electrodes.

While it might just look like a simple black and white bunny, the thing in the above photo is actually a microscopic 3D-printed object with features that measure just a few micrometers across. The scientists say their research could lead to the development of microelectrodes that interface directly with the brain. These customized microelectrodes would sit in the brain to send and receive electrical signals as a way to treat disorders like epilepsy, depression, and Parkinson's disease.

The whole thing starts by using lasers to fashion a light-sensitive resin, called resorcinol diglycidyl ether (RDGE), into a 3D print. 3D printing will of course allow the scientists to create any shape they want, including chips that could slip into your brain crevices. But that's only half the equation: This new resin is also designed to take more heat, so it's baked at high temperatures until it shrinks and darkens in a process called "carbonizing," or charring. This final curing process increases the conductivity of the resin along with its surface area, making it a better electrode.

The bunny test
To test the effectiveness of their new resin-based creations, the scientists printed the Stanford bunny, a standardized shape commonly used in 3D modeling and computer graphics. "When we got the carbon bunny structure, we were very surprised," said Shoji Maruo, who co-led the research team, in a release. "Even with a very simple experimental structure, we could get this complicated 3-D carbon microstructure."

Now that the researchers have developed a new material that can undergo carbonizing without warping into a glob, they can focus on creating applications for it. If you want to read more about the study, it appears in the journal Optical Materials Express.
The formula for the surface area of a cube is 6s^2, where s = side, and the formula for a sphere's surface is 4πr^2, where r = radius. By equating the two formulas, we can determine a constant factor for the relation between r and s.

1. Open a new workbook and select the Media Browser tool, Shapes option, and create the diagram above.
2. Set 6s^2 = 4πr^2.
3. Gather the numbers on the left and the variables on the right: 6/(4π) = r^2 / s^2.
4. Take the square root of both sides of the equation: sqrt(6/(4π)) = sqrt(r^2 / s^2) = r/s.
5. Evaluate the left side: sqrt(6/(4π)) = sqrt(6/(4*PI())) in Microsoft Excel = 0.690988298942671, or 0.691 approximately.
6. Thus 0.691 = r/s, so r = 0.691 * s and s = r/0.691, and given one, we can always find the other that makes the two surface areas equal. Problem solved.
7. For example, if side s = 1, then 6s^2 = 6.000 and radius r = 1 * 0.691, so 4πr^2 = 6.0002, which should be close enough for most purposes.

Explanatory Charts, Diagrams, Photos
1. Make use of helper articles when proceeding through this tutorial:
- See the article How to Determine a Cube and Sphere of Equal Volume for a list of articles related to Excel, Geometric and/or Trigonometric Art, Charting/Diagramming and Algebraic Formulation.
- For more art charts and graphs, you might also want to click on Category:Microsoft Excel Imagery, Category:Mathematics, Category:Spreadsheets or Category:Graphics to view many Excel worksheets and charts where Trigonometry, Geometry and Calculus have been turned into Art, or simply click on the category as appears in the upper right white portion of this page, or at the bottom left of the page.

Sources and Citations
- The worksheet for this article is "Cube and Sphere wks.xlsx".
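As a quick worksheet check (a minimal sketch; the cell references are illustrative and not part of the original tutorial), you can verify the relation without rounding by letting Excel carry the exact square root:

A1: 1                       <- any side length s
B1: =A1*SQRT(6/(4*PI()))    <- the matching radius r (about 0.691*s)
C1: =6*A1^2                 <- surface area of the cube
D1: =4*PI()*B1^2            <- surface area of the sphere; matches C1 to full precision

Using SQRT directly avoids the small 0.0002 discrepancy that the rounded 0.691 factor introduces in step 7.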
Golden Ratio Exploration
Suppose we start by drawing one small square, and attach another square to it on the right, making a 2x1 rectangle. Now draw another square along the long edge of the 2x1 rectangle, making a 3x2 rectangle. Continue this process, spiraling outward, until you're out of room on the page. The process creates larger and larger rectangles by attaching squares to them, and could continue indefinitely.
- Here is a table showing the side lengths of each rectangle in the process:

Dimensions of rectangle | Ratio of the two sides
1 x 2 | 2/1 = 2
2 x 3 | 3/2 = 1.5
3 x 5 | 5/3 = 1.666666666
5 x 8 | 8/5 = 1.6

- Find the dimensions of the next two rectangles.
- Look at the list of the dimensions of the rectangles. How is the biggest number related to the two numbers in the row above it? Knowing this, continue the following sequence of numbers: 2, 3, 5, 8, 13, ...
- The table shows the ratio of the long side to the short side for the first four big rectangles. Find the ratio for the remaining rectangles. Are the ratios approaching a number?
- Note that the rectangles changed shape less and less as the process continued (the ratios of long to short sides didn't change much). Suppose you want a rectangle that stays exactly the same shape when a square is attached. If the short side is s and the long side is l, attaching a square of side l produces an l x (s + l) rectangle, so the ratio must satisfy (s + l)/l = l/s. Solve this equation for the ratio x = l/s. (Cross multiply and use the quadratic formula!)
- Calculate the ratio of your height to the height of your navel. Do the same for four friends. How does this number compare to the ratios you have found before?
- This triangle is called the Golden Triangle. Compute the ratio of the length of the side to the length of the base.
- The Parthenon in Athens, Greece, may have been built according to the golden ratio. Measure the width and the height of the west face (shown below). How close is it to a golden rectangle?
- How many Golden Rectangles can you find in Mondrian's Broadway Boogie Woogie?
The sequence of numbers in the table is the Fibonacci sequence.
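For reference, here is the algebra the quadratic-formula exercise above points toward (a standard derivation, not part of the original worksheet):

$$\frac{s+l}{l} = \frac{l}{s} \;\Rightarrow\; s(s+l) = l^2 \;\Rightarrow\; x^2 = x + 1 \quad\text{where } x = \frac{l}{s}$$

$$x^2 - x - 1 = 0 \;\Rightarrow\; x = \frac{1 + \sqrt{5}}{2} \approx 1.618\ldots$$

The positive root is the golden ratio, the number the table's ratios (2, 1.5, 1.666..., 1.6, ...) are closing in on.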
The word cancer is derived from the Latin word for crab because cancers are often very irregularly shaped, and because, like a crab, they "grab on and don't let go." The term cancer specifically refers to a new growth which has the ability to invade surrounding tissues, metastasize (spread to other organs), and which may eventually lead to the patient's death if untreated. The terms tumor and cancer are sometimes used interchangeably, which can be misleading. A tumor is not necessarily a cancer. The word tumor simply refers to a mass. For example, a collection of fluid would meet the definition of a tumor. A cancer is a particularly threatening type of tumor. It is helpful to keep these distinctions clear when discussing a possible cancer diagnosis.
|neoplasm-||A neoplasm is an abnormal new growth of cells. The cells in a neoplasm usually grow more rapidly than normal cells and will continue to grow if not treated. As they grow, neoplasms can impinge upon and damage adjacent structures. The term neoplasm can refer to benign (usually curable) or malignant (cancerous) growths.|
|tumor-||A tumor is a commonly used, but non-specific, term for a neoplasm. The word tumor simply refers to a mass. This is a general term that can refer to benign (generally harmless) or malignant (cancerous) growths.|
|benign tumor-||Benign tumors are non-malignant/non-cancerous tumors. A benign tumor is usually localized and does not spread to other parts of the body. Most benign tumors respond well to treatment. However, if left untreated, some benign tumors can grow large and lead to serious disease because of their size. Benign tumors can also mimic malignant tumors, and so for this reason are sometimes treated.|
|malignant tumor-||Malignant tumors are cancerous growths. They are often resistant to treatment, may spread to other parts of the body, and they sometimes recur after they have been removed.|
|cancer-||A cancer is another word for a malignant tumor (a malignant neoplasm).|
Cancer of the pancreas is a malignant neoplasm that arises in the pancreas. It strikes approximately 9 out of every 100,000 people every year in the United States and is one of the deadliest forms of cancer. It is estimated that this year 45,000 Americans will be diagnosed with cancer of the pancreas. An almost equal number of patients (some diagnosed before this year) will die from pancreatic cancer during this year. Cancer of the pancreas is not one disease. In fact, as many as twenty different tumors have been lumped under the umbrella term "cancer of the pancreas." Each of these tumors has a different appearance when examined with a microscope, some require different treatments, and each carries its own unique prognosis (predicted or likely outcome). An understanding of the different types of neoplasms of the pancreas is required for rational treatment.
Cancers of the pancreas can be broadly classified as:
|Primary-||Primary cancers are those that arise in the pancreas itself.|
|Metastatic-||Metastatic cancers are cancers that arise in other organs and only later spread to the pancreas. These are usually not considered pancreatic cancers; instead, they are considered cancers of the organs from which they arose.|
In the vast majority of cases the term "cancer of the pancreas" refers to primary cancers of the pancreas — cancers that arose in the pancreas.
Primary cancers of the pancreas can be broadly subgrouped into those that look like endocrine cells under the microscope (have endocrine differentiation) and those that look like exocrine cells under the microscope (have exocrine differentiation). The distinction between endocrine neoplasms and exocrine neoplasms is very important and will greatly impact treatment and outcome. Pathologists examine histological slides (slides of tissue samples) using a microscope to diagnose and classify pancreatic cancer. To make the cells visible, the slides are stained with various dyes. A change in color from one slide to another does not indicate any disease or abnormality; the different colors simply indicate that a different dye has been used or that a different part of the cells is stained. Pathologists identify abnormalities by changes in the size, shape, or arrangement of cells. The classification of neoplasms of the pancreas given below is based on pathological examination.
We started the class by taking the safety quiz. It is a 10-question multiple-choice quiz. I always use our Senteo Student Response system so the students can learn how to use them (although most students are now familiar with them because more of our teachers have started to use them). Following the safety quiz, we started our first lab. I use the modeling method, so the experiment started with a demonstration and making some observations.

SIDEBAR: Our first unit is called Essential Skills. It is everything we want the students to know how to do. It also allows us to let the students experience modeling-style physics experiments… ones with no handouts. This unit includes reviewing how to write the EOL (equation of the line) for linear graphs, using Logger Pro to create graphs, and linearizing non-linear graphs. Our first experiment is one that will give a linear data set.

The first experiment uses an inertia balance, but I do not call it that right away. At this point it is known as a Physics Wiggler… it wiggles back and forth. This name leads to a definition of period (and frequency). We notice that we can change the period by adding mass to the wiggler. This leads to a discussion and decision about independent and dependent variables. Usually the class ends up with two experiments: Period vs. Amplitude (with constant mass) and Period vs. Mass Added. After trying just a few different amplitudes, the students see there is no effect on the period, so they don't worry about amplitude with the mass added data set.

We write the purpose and make our hypotheses, but really don't talk too much about the procedure. I emphasize that we want to be as accurate as possible and to think about how we can make this happen as we gather data. With the General classes, we guide them a bit more… we demonstrate how to gather data for the period (time ten cycles, then divide by ten) by looking at the amplitude relationship. This makes the data gathering a bit shorter for them. Monday will be all data gathering.
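To make the timing trick concrete (the numbers here are illustrative, not from the class): if ten complete wiggles take 8.2 s on the stopwatch, then the period is T = 8.2 s / 10 = 0.82 s, and the frequency is f = 1/T ≈ 1.2 Hz. Timing ten cycles and dividing by ten spreads the human reaction-time error over ten periods instead of one, which is why it beats timing a single wiggle.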
Excel STDEV Function is a useful tool for anyone working with data in Microsoft Excel. It allows you to calculate the standard deviation of a set of numbers, which is a statistical measure that helps you understand how spread out your data is. In simple words, it tells you how much the values in your data set vary from the average. This can be helpful in various fields, from science and engineering to finance and business.

STDEV(number1, [number2], …)

| Argument | Description |
| number1 | The first number or range of numbers you want to include in the standard deviation calculation. |
| [number2], … | (Optional) Additional numbers or ranges you want to include in the standard deviation calculation. You can add more numbers as needed. |

How to use
The STDEV function is quite simple to use. You provide it with the numbers you want to calculate the standard deviation for. You can input these numbers directly or refer to a range of cells where the numbers are located. Here are a few examples of how to use the STDEV function:

=STDEV(A1, A2, A3, A4, A5)
This formula calculates the standard deviation for the values in cells A1 to A5.

=STDEV(A1:A5)
This formula does the same as the previous example. It calculates the standard deviation for the values in the range A1 to A5.

=STDEV(B1:B10, C1:C10, D1:D10)
You can also calculate the standard deviation for multiple ranges by providing them as separate arguments.

Once you enter the STDEV formula and press Enter, Excel will return the standard deviation of the provided numbers or ranges. It gives you a measure of how much the data varies from the average, helping you make informed decisions and draw insights from your data.

Let's look at a practical example (the sketch below uses an illustrative data set). Such a formula calculates the standard deviation of the data, and the result is approximately 5.3 (rounded to one decimal place). It tells you that the data points are, on average, about 5.3 units away from the mean, which is around 12.

By using the STDEV function, you can gain valuable insights into the variability of your data, helping you make more informed decisions in your work or studies. If you'd like to learn more about statistical concepts like standard deviation, you can explore resources like Wikipedia. Understanding these concepts can be incredibly valuable in various fields where data analysis is essential.
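Here is one data set consistent with the numbers quoted above (mean 12, standard deviation ≈ 5.3); it is an illustration, since the article's original data was not preserved:

=STDEV(5, 10, 11, 15, 19)

The mean of these five values is (5 + 10 + 11 + 15 + 19) / 5 = 12, and STDEV returns ≈ 5.29. Note that STDEV estimates a sample standard deviation (it divides by n − 1); if your data represents an entire population, Excel's STDEVP function (which divides by n) is the matching choice.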
Charleston, South Carolina, is a city steeped in American history, with a rich and complex past that has left an indelible mark on the development of the United States. Here is a brief description of Charleston’s role in U.S. history: - Early Settlement: Charleston was founded in 1670 by English settlers, making it one of the earliest English colonial settlements in North America. It was named in honor of King Charles II of England. The city’s location on the coast, at the confluence of the Ashley and Cooper Rivers, made it a strategic port for trade and a hub for colonial commerce. - Slavery and Plantation Economy: Charleston played a significant role in the development of the Southern plantation economy, primarily based on rice and indigo cultivation. This system relied heavily on the forced labor of enslaved Africans, leading to the city becoming a major slave trading port in North America. The historic district of Charleston still contains many preserved antebellum mansions, reflecting the wealth generated by the plantation system. - American Revolution: Charleston was a hotbed of revolutionary fervor during the American Revolution. The city’s citizens played a pivotal role in resisting British rule. In 1780, Charleston fell to the British, but it was liberated by American forces in 1782. - Fort Sumter: Charleston is most famously associated with the outbreak of the Civil War. The first shots of the Civil War were fired at Fort Sumter, located in Charleston Harbor, in April 1861. The city remained under Confederate control for much of the war and was a key center for the Southern war effort. - Post-Civil War Era: After the Civil War, Charleston went through a period of reconstruction and recovery. The city continued to be a center of African American culture and political activity, and it played a significant role in the Civil Rights Movement. - Architectural Heritage: Charleston is renowned for its historic architecture, which includes well-preserved colonial and antebellum buildings. The city’s historic district, with its cobblestone streets and elegant homes, is a major tourist attraction. The preservation of these historic buildings and neighborhoods has contributed to Charleston’s unique charm. - Cultural Significance: Charleston has a rich cultural heritage, including the Gullah/Geechee culture, which developed among descendants of African slaves in the Lowcountry region. The city is known for its distinctive cuisine, including Lowcountry dishes like shrimp and grits, as well as its vibrant arts scene and annual events like the Spoleto Festival. - Modern Charleston: Today, Charleston is a thriving city with a diverse economy, including tourism, manufacturing, and technology sectors. It continues to be a hub for education and culture in the region. Charleston’s history is a complex tapestry that encompasses the highs and lows of American development, from its colonial roots to its role in the Civil War and the ongoing legacy of its unique culture and architecture. The city’s historical significance and charm make it a compelling destination for history enthusiasts and tourists alike.
To design and engineer an aircraft, it is necessary to know what it is expected to do regarding its flight characteristics, including its maneuver performance. All aircraft types must be able to maneuver to some degree, some more than others. Airliners, for example, are only expected to perform gentle maneuvers such as 20-degree banked turns. Examples of more extreme flight maneuvers are high-rate turns, rapid pull-ups, and various aerobatics, as shown in the photograph below, or any situation where an aircraft may follow a curvilinear path under non-steady, accelerated flight conditions. However, any flight maneuver will have some limits, and the aircraft's actual maneuver performance will be constrained by its aerodynamic capabilities and the structural strength of its airframe. For example, the ability to perform various high-load-factor or high-"g" maneuvers is expected of many military airplanes, especially fighter airplanes. Therefore, not only must the airframe be sufficiently strong to carry the loads in these maneuvers, but the aerodynamic performance of the aircraft must be carefully considered so that aerodynamics is not a prematurely limiting factor. In this regard, the desired maneuver capability should not be limited too early by the onset of wing stall and/or buffeting.

Natural gusts in the atmosphere will also cause loads on the aircraft over and above those produced in steady, level flight in smooth air. In particular, vertical gusts can produce transient loads on an airplane that may be as large as, or even more significant than, those expected during maneuvers and those that are otherwise within the normal flight envelope. For this reason, the aircraft's needed maneuver performance and gust loading capabilities must be precisely defined, and the aerodynamic and structural requirements must be carefully established during the design process. Naturally, these capabilities also need to be verified by structural testing, which will be undertaken using a systematic combination of flight and ground tests.

- Understand the meaning and significance of the airspeed/load factor or "V-n" diagram for maneuvers and atmospheric gusts.
- Be able to interpret the V-n diagram and identify critical airspeeds.
- Appreciate why the airspeed at which wing stall occurs will increase in a maneuver or with the application of a load factor.

Aircraft Equations of Motion
As previously derived, the following general equations can be used to describe the motion of an aircraft, i.e.,

$$T\cos\epsilon - D - W\sin\gamma = \frac{W}{g}\frac{dV}{dt}$$

$$L\cos\phi - W\cos\gamma = \frac{W}{g}\,V\,\frac{d\gamma}{dt}$$

$$L\sin\phi = \frac{W}{g}\frac{V^2}{R}$$

The angle $\gamma$ can be viewed as the climb or flight path angle, and the bank angle is denoted by $\phi$. It should be remembered that during maneuvers, which are accelerated flight conditions, the lift on the wing will not equal the weight of the aircraft because of the need for the wing to create whatever lift value is needed to produce the necessary accelerations to follow the required flight path. This lift force may be greater or less than the airplane's weight, so during flight, the load factor can be positive (i.e., an upward acceleration) or negative (downward acceleration). In many cases, the angle $\epsilon$ between the line of action of the thrust vector and the flight path is small, so it is reasonable to assume that $\cos\epsilon \approx 1$ in the foregoing equations, i.e.,

$$T - D - W\sin\gamma = \frac{W}{g}\frac{dV}{dt}$$

Maneuvers in a Vertical Plane
Consider first the forces on an airplane maneuvering in a vertical plane with a circular flight path of radius $R$ = constant and at a constant airspeed $V$, as shown in the figure below.
Notice that the airplane would perform a complete loop in a vertical plane when continuing this trajectory. Vertical equilibrium in the maneuver requires that

$$L - W = \frac{W}{g}\frac{V^2}{R}$$

Therefore, the lift required on the wing is

$$L = W\left(1 + \frac{V^2}{g R}\right)$$

i.e., the lift must be greater than the weight, where the load factor is

$$n = \frac{L}{W} = 1 + \frac{V^2}{g R}$$

The excess lift is related to the load factor, $n$, such that $\Delta L = L - W = (n - 1)\,W$, and $n$ can be viewed as the number of effective "g's." So, it can be seen that for a given radius of the flight path, the load factor increases with the square of the airspeed. Furthermore, for a given airspeed, the load factor is inversely proportional to the radius, i.e., a faster and/or tighter flight path will produce a higher load factor. The radius of curvature of the flight path, in this case, will be

$$R = \frac{V^2}{g\,(n - 1)}$$

so for a given load factor, the radius of the flight path increases quickly with the square of the airspeed.

Maneuvers in a Horizontal Plane
Now consider the forces on an airplane in a pure horizontal turn with a bank angle $\phi$ and when flying at a constant airspeed $V$, as shown in the figure below. Vertical equilibrium requires that

$$L\cos\phi = W$$

and horizontal equilibrium requires

$$L\sin\phi = \frac{W}{g}\frac{V^2}{R}$$

where $R$ is the radius of curvature of the turn. It is apparent then that to perform a turn, the lift on the wing must again be greater than the weight of the airplane, i.e., $L > W$, to create the necessary aerodynamic force not only to balance the weight of the aircraft but also to produce the inward radial force to create the needed centripetal acceleration to execute the turn. Solving for the lift required gives

$$L = \frac{W}{\cos\phi}$$

and so the load factor is

$$n = \frac{L}{W} = \frac{1}{\cos\phi}$$

The preceding result shows that the load factor must increase with the inverse of the cosine of the bank angle $\phi$. For example, a 60$^\circ$ banked turn will correspond to $n = 1/\cos 60^\circ = 2$, i.e., a load factor of two.

Airspeed-Load Factor Diagram
An airspeed-load factor or V-n diagram is one form of operating envelope for an aircraft. The figure below shows a representative V-n diagram for an airplane as a function of indicated airspeed. However, equivalent airspeed (related to actual dynamic pressure) or Mach number will sometimes be used on the "airspeed" axis. Because the wing must produce a lift $L = n W$ in a maneuver, the stall airspeed becomes

$$V_{\rm stall} = \sqrt{\frac{2\,n\,W}{\rho_\infty\, S\, C_{L_{\rm max}}}} = \sqrt{n}\;V_{s_1}$$

This latter result shows that the aircraft will stall at a higher airspeed when pulling any "g" loading with $n > 1$. Notice from the V-n diagram that as the load factor increases, the stall airspeed follows a curve defined by this relation, i.e., its value increases, and so it traces out one part of the operating envelope on the V-n diagram. The V-n diagram also reflects that an airplane can only structurally withstand a finite amount of loading (e.g., it is capable of only so much wing stress and/or wing bending and/or skin buckling) until it suffers permanent damage or structural failure; this maximum loading is denoted by $n_{\rm max}$. At one airspeed, which is called the corner airspeed or the maximum maneuvering airspeed, the aircraft will be operating at the edge of stall and also pulling the maximum load factor, i.e.,

$$V^* = \sqrt{\frac{2\,n_{\rm max}\,W}{\rho_\infty\, S\, C_{L_{\rm max}}}}$$

The maximum maneuvering airspeed is often called $V_A$ (or sometimes $V^*$). This corner-airspeed relation defines an aerodynamic limitation on overall flight performance because of the attainment of the maximum lift coefficient on the wing and also a structural limitation in terms of a maximum attainable structural load factor. Therefore, for flight in rough air other than light turbulence or "chop," the pilot or aircrew must operate the aircraft at or below its maximum maneuvering equivalent airspeed $V_A$, which is published in the aircraft's operating limitations. This approach is required so that unexpected turbulent gusts will not create load factors that could potentially overload the airframe.
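As a quick numerical illustration (the numbers are illustrative, not from the text), consider a level 60° banked turn flown at $V = 100$ m/s:

$$n = \frac{1}{\cos 60^\circ} = 2, \qquad R = \frac{V^2}{g\tan\phi} = \frac{(100)^2}{9.81 \times \tan 60^\circ} \approx 589\ \text{m}$$

The turn-radius formula follows from dividing the horizontal equilibrium equation by the vertical one, which eliminates the lift and leaves $\tan\phi = V^2/(gR)$.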
The maximum attainable load factor that an aircraft is designed to withstand, i.e., its structural limits, depends on the aircraft type and what it is intended to do. For civil aircraft, the limiting values of the load factor will be defined by the appropriate certification authority, e.g., by the FARs in the U.S. Under limit load conditions, the FARs require that the aircraft components support those loads without any permanent detrimental structural deformations and that the stresses remain below the critical yield point to account for unexpected events such as severe gust loads. Another consideration is something like an emergency landing at weights that are higher than the standard landing weights.

Normal Category Airplanes
For transport category airplanes (e.g., airliners), the limit load factor ranges from -1g to +2.5g, or up to +3.8g, depending on the design takeoff weight. To the limit load factors, a value of 50% is added for structural design purposes (i.e., an extra margin for extra safety), i.e., the ultimate structural strength needs to be at least 150% of the design limit load, which then becomes known as the ultimate load. For normal and commuter category airplanes, limit load factors range from -1.52g to +3.8g. The maximum specified positive load factor is only 2.5g for airplanes with a gross weight of more than 50,000 lb, but it increases with decreasing weight up to a maximum of 3.8g. The relevant equation the FARs define is

$$n_{\rm max} = 2.1 + \frac{24{,}000}{W + 10{,}000}$$

up to a maximum of 3.8g, where the gross weight $W$ is measured in lb. For utility category airplanes (or those configured to operate in this category), limit load factors range from -1.76g to +4.4g, which often allows the aircraft to perform limited aerobatics, at least within a specific center of gravity range. For fully acrobatic or aerobatic category airplanes, the limit load factors range from -3.0g to +6.0g. However, in particular for aerobatic and military fighter airplanes, many aircraft types are designed to tolerate load factors much higher than the minimums required by the regulations. Aerobatic aircraft are generally much stronger than the pilot could sustain in terms of "g" loadings.

The following additional points identify and describe the nature of the V-n diagram and what it means:
- Strictly speaking, the V-n diagram applies to a single flight weight. Usually, a V-n diagram is defined at the maximum gross in-flight weight of the aircraft, i.e., $W = W_{\rm max}$.
- The area inside the "normal flight envelope," as marked out by the V-n diagram, is the combination of airspeeds and load factors where the aircraft can be safely flown without stalling or suffering structural failure.
- At a load factor of unity ($n = 1$), which is level flight, the stall limit can be easily identified on the V-n diagram.
- The corner airspeed, where the aircraft is operating at the edge of the stall and pulling the maximum load factor, can be easily identified. This point is usually marked on the diagram as the maximum maneuvering airspeed $V_A$ (an equivalent airspeed).
- Notice that both positive and negative load limits are identified. The airplane usually cannot withstand as much negative loading as positive loading, and neither can the pilot and passengers. The exception, of course, is an airplane designed specifically for aerobatics. At higher airspeeds, the airplane reaches an aerodynamic limit based on dynamic pressure, i.e., this represents a "redline" or never-exceed airspeed, usually identified as $V_{NE}$ and marked on the pilot's airspeed indicator. Structural failure can be expected if it is exceeded, even by some narrow margin.
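For instance (an illustrative weight, not one from the text), for a design gross weight of $W$ = 20,000 lb the formula gives

$$n_{\rm max} = 2.1 + \frac{24{,}000}{20{,}000 + 10{,}000} = 2.1 + 0.8 = 2.9$$

At $W$ = 50,000 lb the same formula gives 2.1 + 0.4 = 2.5, and the 3.8g cap is reached for gross weights below roughly 4,100 lb.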
Therefore, a "yellow arc" zone will also be marked on the airspeed indicator so the pilot knows that the aircraft is operating near its maximum airloads and structural capability.

A new airplane design's limit and ultimate failure loads (and those of its components) are validated on the ground. To this end, a test article of the entire airplane is mounted in a special rig that can simulate the magnitude and distribution of the airloads encountered during actual flight, as shown in the photograph below. Naturally, this sort of test to failure would not be done in the air! The test airplane is randomly selected "as built" from the regular production line, so no special considerations are given to the aircraft structure. Other tests performed in this type of ground rig include simulations of extreme negative loads on the wing to validate the lower part of the $V$–$n$ diagram, maneuver loads, and emergency landing loads. For a military aircraft, ballistic damage to the wing may also be a consideration, i.e., various types of damage may result in structural stress and loading limitations. Eventually, the wing is tested to destruction to verify that the predicted ultimate loads on the aircraft can indeed be sustained. In most cases, wings fail by compression buckling of the upper wing skins.

There are two primary uses for a $V$–$n$ diagram: one for maneuvering flight, as already considered, and another for the gust loads produced in the atmosphere when flying in straight-and-level flight. The diagrams are basically the same, but the presented information has different interpretations. The atmosphere is never stagnant, and turbulence and gusts occur naturally. A gust can affect the airplane from any direction. However, upward (vertical) gusts, as shown in the figure below, have the most pronounced effects on the airplane in terms of aerodynamic response and induced load factor.

The primary effect of a vertical gust is to increase the angle of attack of the wing and so increase the wing lift, the principle being shown in the figure below. While there will also be an effect on drag, these effects are minor because of the small angles typically involved. The change in the angle of attack of the wing will be

$\Delta\alpha = \dfrac{w_g}{V}$

where $w_g$ denotes the vertical gust velocity. If the lift-curve slope of the wing is $a$, then the change in lift coefficient will be

$\Delta C_L = a\,\Delta\alpha = \dfrac{a\,w_g}{V}$

and the change in the lift is

$\Delta L = \dfrac{1}{2}\rho V^2 S\,\Delta C_L = \dfrac{1}{2}\rho V S\,a\,w_g$

The change in the load factor is then

$\Delta n = \dfrac{\Delta L}{W} = \dfrac{\rho\,a\,w_g\,V}{2\,(W/S)}$ (17)

There are two interesting observations from Eq. 17:

1. The load factor for a given gust intensity decreases with increasing wing loading, $W/S$. This outcome means smaller, lighter aircraft (generally with relatively lower wing loadings) tend to respond much more to gusts than larger airplanes such as jet airliners. In this regard, smaller aircraft must be carefully flown at or below the maneuvering airspeed in turbulent air to prevent structural damage.

2. The load factor for a given gust intensity varies linearly with airspeed. So, lines with different values of $w_g$ can be plotted on the $V$–$n$ diagram, an example being shown in the figure below. These sets of straight lines give the load factor produced on the airplane at a given airspeed from the effects of gusts. When one of these lines intersects the maximum (or minimum) load factor limit, the corresponding airspeed is the maximum airspeed that can be flown without either stalling the wing or exceeding the maximum allowable structural load factor. Each aircraft type will have its own specific envelope regarding gust magnitudes and airspeeds.
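A minimal Python sketch of Eq. 17 follows, assuming representative values for the sea-level air density, wing lift-curve slope, and gust velocity; it also inverts Eq. 17 to find the airspeed at which a given gust line crosses the structural limit, as described in the second observation above.

```python
RHO_SL = 0.002378  # sea-level air density, slug/ft^3

def gust_delta_n(V, wing_loading, a=5.7, wg=30.0, rho=RHO_SL):
    """Load-factor increment from a sharp-edged vertical gust (Eq. 17):
    delta_n = rho * a * wg * V / (2 (W/S))."""
    return rho * a * wg * V / (2.0 * wing_loading)

def gust_limited_speed(n_max, wing_loading, a=5.7, wg=30.0, rho=RHO_SL):
    """Airspeed at which the gust line for wg reaches the structural limit,
    i.e., the solution of 1 + delta_n(V) = n_max for V."""
    return (n_max - 1.0) * 2.0 * wing_loading / (rho * a * wg)

# Assumed values: W/S = 15 lb/ft^2, a = 5.7 per rad, and a 30 ft/s gust
# (the FAR rough-air design gust below 20,000 ft is a more severe 66 ft/s).
V = 200.0  # ft/s
dn = gust_delta_n(V, 15.0)
print(f"At {V:.0f} ft/s: delta n = {dn:.2f}, total n = {1.0 + dn:.2f}")
print(f"Gust line meets n_max = 3.8 at V = {gust_limited_speed(3.8, 15.0):.0f} ft/s")
```

Doubling the wing loading halves the gust-induced increment at a given airspeed, which is the first observation above in numerical form.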
If the gust is sufficiently severe, the airplane's resulting load factor may cross over the allowable limit load. However, it cannot exceed the ultimate load without significant structural damage or failure. Even if only the limit load factor is exceeded in flight, minor damage may still have occurred (e.g., wrinkled skin panels from buckling), and the aircraft must be carefully inspected before further flight. Notice that if the airplane is flying slower than the corner airspeed (remember, this is marked as $V_A$ or $V^*$ on the $V$–$n$ diagram), any large gust will cause the angle of attack to exceed the value for the maximum lift coefficient and so cause the aircraft to stall. Suppose the aircraft is flown at or below its maximum maneuvering speed. In that case, neither an atmospheric gust nor abrupt control movements (such as a rapid pull-up maneuver using full-up elevator) will be sufficient to cause the aircraft to exceed its maximum structural load factor. Consequently, when encountering very turbulent air, the aircraft should be flown at or below the maximum maneuvering speed so that gusts and/or control inputs cannot exceed its structural limit loads.

Regulations and the FARs

The FARs define the values of the vertical gust velocities, as shown in the table below. $V_B$ is the design speed for maximum gust intensity, which assumes that the aircraft is in straight-and-level flight when it encounters the gust and that the effects of the gust are produced instantaneously. The gust value of 66 ft/s is based on statistical information gathered about turbulence in the lower atmosphere and is the most extreme case considered representative of all-weather flying. The other values are also based on atmospheric gust statistics. They are used in airplane design to ensure the airplane is strong enough to withstand all anticipated structural loads in turbulent conditions. $V_C$ is the design cruise speed; for airplanes in the transport category (airliners), $V_C$ must not be less than $V_B$ + 43 kts. $V_D$ is based on allowable gusts at the maximum dive speed.

| Altitude | $V_B$ (rough air gust) | $V_C$ (gust at max. design speed) | $V_D$ (gust at max. dive speed) |
| Below 20,000 ft | 66 ft/s | 50 ft/s | 25 ft/s |
| Above 50,000 ft | 38 ft/s | 25 ft/s | 12.5 ft/s |

Summary & Closure

It is essential to establish an aircraft's maneuver and gust envelope so that it can be suitably designed to carry all expected flight loads, plus a margin of safety. Aircraft cannot be infinitely strong, so the $V$–$n$ diagram must be consistent with the aircraft's intended purpose. The final performance capabilities of the aircraft will be limited by its aerodynamic capabilities and/or the structural strength of its airframe. Gusts in the atmosphere cause loads on the aircraft over and above those produced in smooth air. All aircraft must be designed to be strong enough to carry the normally expected flight loads plus the extra loads induced by encounters with turbulent air. The FARs define these gust conditions depending on the aircraft type. It is reassuring that the structural margins built into certified aircraft designs are significant; even when encountering the most severe turbulence, one can be confident that the aircraft will be strong enough to withstand all of the anticipated flight loads.

- Think about some of the structural issues in designing a wing for a fully aerobatic aircraft. Hint: Include both normal and inverted flight.
- What factors other than aerodynamic or structural ones may limit aerobatic maneuvers?
- Compare the relative load factors in response to the same vertical gust at the same airspeed that would be produced on the following aircraft: a glider, a single-engine general aviation airplane, and a small business jet. (A short numerical sketch of this comparison is given below.)

To understand more about an aircraft's maneuvering flight envelope and the effects of gusts, follow up with some of these online resources:
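As a rough numerical take on the comparison question above, the following minimal Python sketch applies Eq. 17 at one assumed airspeed and gust velocity to three assumed, representative wing loadings; holding the lift-curve slope fixed across all three types is a simplification, since a high-aspect-ratio glider wing would, in practice, have a somewhat higher lift-curve slope.

```python
RHO_SL = 0.002378  # sea-level air density, slug/ft^3

def gust_delta_n(V, wing_loading, a=5.7, wg=30.0, rho=RHO_SL):
    """Sharp-edged vertical gust load-factor increment from Eq. 17:
    delta_n = rho * a * wg * V / (2 (W/S))."""
    return rho * a * wg * V / (2.0 * wing_loading)

# Assumed, representative wing loadings in lb/ft^2; actual values vary by type.
aircraft = {
    "glider": 7.0,
    "single-engine GA airplane": 15.0,
    "small business jet": 60.0,
}

V = 150.0  # same airspeed for all three, ft/s (assumed)
for name, ws in aircraft.items():
    print(f"{name:>26}: delta n = {gust_delta_n(V, ws):.2f}")
```

The lightly loaded glider sees nearly an order of magnitude larger load-factor increment than the business jet, which is the point of the comparison.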
Autism and posttraumatic stress disorder (PTSD) share some symptoms. Sometimes, people may mistake one for the other. It is also possible for people to be autistic and have PTSD, too. The reasons for this are not fully understood, but it could be due to how autism affects perceptions of danger and the prevalence of autism stigma and abuse. Understanding the similarities, differences, and overlap between autism and PTSD can ensure people receive the right diagnosis and support. Keep reading to learn more about autism and PTSD, including a comparison of their symptoms, diagnosis, and treatment.

Autism is a spectrum of neurodevelopmental differences that affect communication, social interaction, interests, and behavior. The signs develop in early childhood and have strong links to genetics, meaning that autism often runs in families. Autism is not a condition to be treated or cured; it is a long-term difference in the way people think and perceive the world. It is also relatively uncommon. Experts estimate autism affects around

In contrast, PTSD is a mental health condition that can develop after a person experiences or witnesses a traumatic event. When a person has PTSD as a result of several traumatic experiences or an ongoing experience, it is known as complex PTSD (C-PTSD). Vicarious trauma, where a person develops PTSD after witnessing or hearing about someone else's traumatic experiences, is also possible.

While autism and PTSD are distinct, they share some of the same symptoms and complications. Both may cause:

- Sensory sensitivities: PTSD and autism can both cause sensitivity to certain noises, smells, or crowded places.
- Difficulty in social situations: Autism can affect how a person communicates and how they interpret the behavior of people who are not autistic. For some, PTSD can also make socializing stressful if they are afraid of other people or strangers.
- Repetitive behavior: Stimming is a repetitive behavior that a person uses to manage stress or anxiety. It is a potential feature of autism but can also affect those with PTSD and anxiety. Similarly, repetitive play can also feature among children in either case.
- Difficulty regulating emotions: People with either autism or PTSD may have more difficulty regulating their feelings than others, which can lead to outbursts of anger, panic, or withdrawal from others.
- Avoidance: Autistic people and those with PTSD often try to avoid stimuli, places, or people that cause distress.
- Lack of speech: Some autistic people are nonspeaking, meaning they do not talk. Sometimes, traumatic experiences or anxiety can also lead to a lack of speech. Doctors call this mutism.

It is important for healthcare professionals to know about this overlap, as it can make it harder to accurately diagnose autism or PTSD. This is especially true for children, who may not be able to explain how they feel or what they have experienced. It is also pertinent information for people with C-PTSD. C-PTSD can cause a broader range of symptoms than PTSD and can be harder to identify because it does not occur after a single, acute event.

Data suggests PTSD is more common among autistic people than nonautistic people. The authors of a small 2020 study with 59 adults estimated that

A larger 2021 survey of 687 autistic adults found that 44% met the PTSD criteria. However, in both cases, the participants were not a random sample, so they are not representative of all autistic people.

Why might PTSD be more common in autistic people?

PTSD may be more common among autistic people because they can experience stigma and are more vulnerable to abuse.
A 2023 study notes that previous research has shown autistic people often experience:

- intimate partner violence

The study also notes that autistic people are more likely to experience interpersonal violence than nonautistic people, while the 2021 survey mentioned above found that 72% of participants had experienced some form of assault. However, autism itself may also:

- affect how a person processes sensory information, making events seem more dangerous or scary
- affect how able a person is to cope with what happened
- make it more difficult to get social support due to difficulty communicating or socializing

However, developing PTSD is never inevitable; not everyone who experiences trauma will develop the condition.

Although there are similarities, autism and PTSD do have key differences in their symptoms. For example, PTSD can cause some symptoms that autism does not, such as:

- flashbacks or intrusive memories of a traumatic event
- nightmares
- hypervigilance, or a constant feeling of being "on guard"

Autism also has some effects that PTSD does not, such as:

- specific and intense interests
- difficulty with imaginative play (in children)
- very logical or literal thinking
- significant distress when daily routines change, even if only slightly

People who are autistic and have PTSD may have a mixture of symptoms, but those symptoms can also interact with each other in unique ways. For example, an autistic person's sensory sensitivities may be even more pronounced as a result of PTSD, as this can cause hyperarousal. Similarly, avoidance may manifest as a retreat into repetitive behaviors or a focus on solitary activities.

For people who have symptoms that could be autism or PTSD, diagnosis requires a comprehensive evaluation from a psychologist. In children, this may be a child psychologist. During an evaluation, the healthcare professional will consider the person's:

- personal history
- communication skills
- repetitive behaviors
- mental health symptoms
- daily functioning

In children, the evaluation may also include play, which allows the healthcare professional to see whether there are repetitive themes or difficulty using imagination. However, distinguishing these conditions can be challenging. For people who already have one diagnosis, it can also be harder to get the second one. This is due to "diagnostic overshadowing," where clinicians attribute a person's symptoms to their existing diagnosis rather than considering a second condition.

For autism, any needed interventions focus on alleviating symptoms that interfere with quality of life. This involves different types of support depending on the person but may include:

- speech and language therapy
- sensory integration therapy
- physical therapy
- social skills training

In cases of co-occurring autism and PTSD, therapists may need to adapt their approach to accommodate the individual's specific needs. For example, autistic people in talk therapy may need:

- a greater number of sessions to establish trust
- a longer or shorter duration for each session
- regular breaks

There is currently a lack of research on how best to adapt PTSD treatments for autistic people.

Anyone who believes they could have PTSD or that they are autistic can speak with a doctor or mental health professional for advice and support. The healthcare professional can explain the next steps and may provide a referral to a specialist.

If you know someone at immediate risk of self-harm, suicide, or hurting another person:

- Ask the tough question: "Are you considering suicide?"
- Listen to the person without judgment.
- Call 911 or the local emergency number, or text TALK to 741741 to communicate with a trained crisis counselor.
- Stay with the person until professional help arrives.
- Try to remove any weapons, medications, or other potentially harmful objects.

If you or someone you know is having thoughts of suicide, a prevention hotline can help.
The 988 Suicide and Crisis Lifeline is available 24 hours a day at 988. During a crisis, people who are hard of hearing can use their preferred relay service or dial 711 then 988. Autism and PTSD have some overlapping symptoms, including sensory sensitivities, avoidant behaviors, and potential difficulty in social situations. However, they are distinct conditions and have very different underlying causes. Understanding the overlap between autism and PTSD is essential for an accurate diagnosis. These conditions can also coexist. By understanding how they interact, mental health professionals can create a tailored approach that meets the needs of each person.
Before European colonization began in the late 1400s, the land of what is now the United States was inhabited by Native Americans, who arrived on the continent by crossing the Bering land bridge some time between 50,000 and 11,000 years ago. In 1607, the first successful English settlement was established at Jamestown, Virginia. Within the next two decades, several Dutch settlements, including New Amsterdam (the predecessor to New York City), were established, followed by extensive British settlement of the east coast.

In 1775, the American Revolution broke out against colonial rule by Great Britain, with George Washington leading the Continental Army, and the thirteen colonies issued the Declaration of Independence in 1776. The former colonies existed as an informal alliance of independent states with their own laws and sovereignty, while the Second Continental Congress was given nominal authority by the colonies to make decisions regarding the formation and founding of the Continental Army, but it did not have the authority to levy taxes or make federal laws. Later, the United States Constitution, drafted by the Constitutional Convention in 1787 and subsequently ratified by the states, established a federal union of sovereign states and the federal government to operate that union.

From 1803 to 1848, the size of the new nation nearly tripled. Even before the Louisiana Purchase, settlers had been pushing westward beyond their national boundaries, many carrying with them a belief that the republic was destined to expand across the continent. This belief was thwarted somewhat by the stalemate of the War of 1812 but was reinforced by victory in the Mexican-American War in 1848. In the process of its expansion, the U.S. displaced most Native American nations residing in the area. As new territories were settled and incorporated into the country, a heated debate developed over whether slavery would be allowed to spread.

In the mid-19th century, the nation was divided over the issues of states' rights, the role of the federal government, and the expansion of slavery, which led to the American Civil War when, following the election of Abraham Lincoln in 1860, South Carolina became the first state to declare its secession from the Union. Six other Southern states followed, forming the Confederate States of America early in 1861. At the time, the Northern states were opposed to the expansion of slavery, while the Southern states saw the opposition as an attack on their way of life, since their economy, especially the cotton industry, was so dependent on slave labor. The Civil War effectively ended slavery, and settled the question of whether a state has the right to secede from the country, with a Union victory in 1865. The war is widely accepted as a major turning point in American history, bringing an increase in power for the federal government.

The technological advances made during the Civil War, combined with an unprecedented wave of immigrants who helped to provide labor for American industry and create diverse communities in previously undeveloped areas, hastened the industrial development of the United States and its rise to international power. The country subsequently made many imperialist ventures abroad, including the annexation of Puerto Rico after victory in the Spanish-American War. With the start of the First World War in 1914, the United States at first decided to maintain its neutrality but eventually entered the war against the Central Powers and helped turn the tide of battle.
American investors and the federal government had invested a large amount of money in Europe, including loans granted to Great Britain and the Allies. Thus, one of the main reasons the United States entered the First World War was to protect its interests in Europe and in the colonies governed by the European powers. There was also widespread public outrage over the German practice of unrestricted submarine warfare, which resulted in the loss of American lives when ships operating in the waters around Europe were sunk. American sympathies, for historical reasons, were also very much in favor of the British and French. However, a sizable number of citizens, mostly of Irish and German descent, were staunchly opposed to U.S. intervention. Nonetheless, American involvement in the war brought the country much wealth and prestige, even though much of Europe lay in ruins.

After the war ended in 1918, the United States Senate did not ratify the Treaty of Versailles imposed by its Allies on the defeated Central Powers, which would have consequently pulled the U.S. into European affairs. Instead, the country chose to pursue unilateralism, if not isolationism. During most of the 1920s, the United States enjoyed a period of unbalanced prosperity as farm prices fell and industrial profits grew. The boom was fueled by a rise in debt and an inflated stock market, which crashed in 1929. The Great Depression followed, which led the government, under Franklin Roosevelt, to abandon its laissez-faire economic policy. The recovery, however, was not complete until the Second World War, when the United States joined the side of the Allies against the Axis after a surprise Japanese attack on Pearl Harbor. The ensuing war became the most costly war in American history, but it helped to pull the economy out of depression, as it provided much-needed jobs both at home and at the front.

The post-war era in the United States was defined internationally by the beginning of the Cold War in the late 1940s, when the United States and the Soviet Union (USSR) attempted to expand their global influence, with the U.S. representing democracy and capitalism and the USSR Communism and a centrally planned economy. The actions of both sides, however, were checked by each side's massive nuclear arsenal. The result was a series of conflicts, including the Korean War, the massively unpopular Vietnam War, and the tense nuclear showdown of the Cuban Missile Crisis. Within the United States, the Cold War prompted concerns about Communist influences, which created the Red Scare of the 1950s. The space race between the two world superpowers resulted in government efforts to encourage greater math and science skills. Meanwhile, at home, urbanization was completed, and American society experienced a period of sustained economic expansion. At the same time, discrimination across the United States, especially in the South, was increasingly challenged by a growing Civil Rights movement led by prominent black American leaders like Martin Luther King Jr., which led to the abolition of the Jim Crow laws that legalized racial segregation between whites and blacks in the South.

After the fall of the Soviet Union in 1991, the United States continued to involve itself in military action overseas, as demonstrated by the Gulf War.
Following his election in 1992, President Bill Clinton oversaw the largest economic expansion in American history, thanks to sound economic policies that helped to stoke the digital revolution and the new business opportunities created by the Internet.

At the beginning of the new millennium, following the September 11, 2001 attacks allegedly orchestrated by Osama bin Laden, United States foreign policy became highly concerned with the threat of terrorist attacks. In response, the administration of President George W. Bush began a long series of military, police, and legal operations it dubbed the War on Terror. With the support of most of the international community, the armed forces invaded Afghanistan and overthrew the Taliban regime, which was considered to have provided a safe haven for terrorist activities. More controversially, President Bush continued the "War on Terror" by invading Iraq and overthrowing Saddam Hussein's regime in 2003. This second invasion proved relatively unpopular with the international community, even among long-time American allies such as France and Germany, and it resulted in a wave of anti-American sentiment. More recently, this discontent with the war has spread to the home front, as a majority of Americans are now dissatisfied with its prosecution. Nonetheless, over 30 nations supported the U.S.-led invasion of Iraq in what became known as "the coalition of the willing."

As of 2006, the political climate remains polarized as debates continue over issues such as the increasing trade deficit, rising health care costs, illegal immigration, the separation of church and state, abortion, free speech, and gay rights, as well as the ongoing war in Iraq.
The Ebola virus has re-emerged, this time in the Democratic Republic of the Congo. According to the WHO, more than 16 people have been confirmed dead in an area of northwestern Democratic Republic of the Congo, and 393 people identified as contacts of Ebola patients are being followed up. To help control this outbreak, the WHO has authorized the use of an experimental vaccine in the region.

Ebola, also known as Ebola haemorrhagic fever, is a highly virulent disease caused by a virus of the family Filoviridae. It has a case fatality rate of about 50% on average; however, in past outbreaks, case fatality rates have varied from 25% to 90% (WHO, 2018). The introduction of Ebola to the human population came through close contact with the secretions, blood, organs, and other bodily fluids of infected animals, such as gorillas, chimpanzees, forest antelope, fruit bats, and monkeys found dead or ill in the forest. The virus then spreads through human-to-human transmission via direct contact with the secretions, blood, or other bodily fluids of infected persons, and with materials and surfaces, such as clothing or bedding, contaminated with these fluids.

The time interval from infection with the virus to the onset of symptoms (the incubation period) is 2 to 21 days. Symptoms include muscle pain, sudden onset of fever, fatigue, and sore throat. This is usually followed by diarrhoea, vomiting, a rash, and, in some cases, internal and external bleeding. A range of potential treatments, including blood products, drug therapies, and immune therapies, is currently being evaluated. However, supportive care, i.e., rehydration with intravenous and oral fluids and treatment of specific symptoms, improves survival (WHO, 2018). There is as yet no proven treatment available for Ebola virus disease.

Prevention and control

Some of the measures that can be employed to prevent and control Ebola are:

- Practice careful hygiene. For example, wash your hands with soap and water or an alcohol-based hand sanitizer, and avoid contact with blood and body fluids (such as urine, faeces, saliva, sweat, vomit, breast milk, semen, and vaginal fluids).
- Do not handle items that may have come in contact with an infected person's blood or body fluids (such as clothes, bedding, needles, and medical equipment).
- Avoid burial rituals or funerals that require handling the body of someone who has died from Ebola.
- Avoid contact with bats and nonhuman primates, or the blood, fluids, and raw meat prepared from these animals. Meat should be cooked thoroughly before consumption.
The Take 5 Ideas have been created for schools to share with parents and carers as part of home learning. They are designed to allow children to develop their own responses to Power of Reading texts they may have studied or will be studying in school. Each set of notes contains five simple activities that develop children's comprehension skills and strategies, as well as their imagination and creativity for writing, linked to the National Curriculum. The five key areas covered in the activities are:

Explore it: Reading of the text and/or illustrations, with questions to develop children's awareness of language and vocabulary, including how these can be used for effect.

Illustrate it: Drawing tasks to develop children's visualisation skills, a key aspect of comprehension.

Talk about it: Questions or talking points to support children's understanding of key parts of the text, encouraging them to refer back to the text to support their ideas.

Imagine it: Talking points and questions that encourage deeper responses to texts, thinking beyond the text and linking to real-life knowledge and understanding.

Create it: A range of different ideas for writing in response to a text, developing children's imagination and creative ideas.

We have also linked our writing activities to the Take 5 books. This means the children develop an in-depth understanding of each book, which supports their writing. Work through each day's reading activity first, then move on to that day's writing task. The tasks are all linked, making the transition from one to the next easy.