Arduino is a small circuit board which allows you to make a computer that can sense and control the physical environment. You can learn about electronics by building your own circuits, and you can program your Arduino to become anything from a mobile phone to a Geiger counter! Technocamps have developed a wearable computing workshop, where pupils can use an Arduino device to make an item of fabric or clothing which does something – for example, a hat which beeps when it detects gas! There are also resources here to help you get started with our Swansea University Computer Science Open Day Arduino Kit, including a set of challenge sheets to accompany the Open Day Arduino sets. S4A stands for Scratch for Arduino: the software, which can be downloaded for Mac, Linux or Windows, looks similar to “Scratch” but has some additional blocks enabling you to interact with your Arduino.
Rapid Review Guide
To achieve the perfect 5, you should be able to explain the following:
• The Great Depression had numerous long-lasting effects on American society.
• Franklin Roosevelt was the first activist president of the twentieth century who used the power of the federal government to help those who could not help themselves.
• The Great Depression’s origins lay in economic problems of the late 1920s.
• The 1929 stock market crash was caused by, among other things, speculation on the part of investors and buying stocks “on the margin.”
• The stock market crash began to affect the economy almost immediately, and its effects were felt by almost all by 1931.
• Herbert Hoover did act to end the Depression, but believed that voluntary actions by both business and labor would lead America out of its economic difficulties.
• Franklin Roosevelt won the 1932 election by promising “The New Deal” to the American people and by promising to act in a decisive manner.
• Suffering was felt across American society; many in the Dust Bowl were forced to leave their farms.
• During the first hundred days, Roosevelt restored confidence in the banks, established the Civilian Conservation Corps, stabilized farm prices, and attempted to stabilize industry through the National Industrial Recovery Act.
• During the Second New Deal, the WPA was created and the Social Security Act was enacted; this was the most long-lasting piece of legislation from the New Deal.
• Roosevelt was able to craft a political coalition of urban whites, Southerners, union members, and blacks that kept the Democratic party in power through the 1980s.
• The New Deal had opponents from the left who said it didn’t do enough to alleviate the effects of the Depression and opponents from the right who said that the New Deal was socialist in nature.
• Roosevelt’s 1937 plan to pack the Supreme Court and the recession of 1937 demonstrated that New Deal programs were not entirely successful in ending the Great Depression.
• Many Americans turned to radio and the movies for relief during the Depression.

1929: Stock market crash
1930: Hawley-Smoot Tariff enacted
1931: Ford plants in Detroit shut down; initial trial of the Scottsboro Boys
1932: Glass-Steagall Banking Act enacted; Bonus marchers routed from Washington; Franklin D. Roosevelt elected president; Huey Long announces “Share Our Wealth” movement
1933: Emergency Banking Relief Act enacted; Agricultural Adjustment Act enacted; National Industrial Recovery Act enacted; Civilian Conservation Corps established; Tennessee Valley Authority formed; Public Works Administration established
1934: American unemployment reaches highest point
1935: Beginning of the Second New Deal; Works Progress Administration established; Social Security Act enacted; Wagner Act enacted; Formation of Committee for Industrial Organization (CIO)
1936: Franklin Roosevelt reelected; Sit-down strike against GM begins
1937: Recession of 1937 begins; Roosevelt’s plan to expand the Supreme Court defeated
1939: Gone with the Wind published; The Grapes of Wrath published

1. Which of the following was not a cause of the stock market crash?
A. Excessive American loans to European countries
B. Uneven division of wealth
C. Installment buying
D. Drop in farm prices
E. Purchasing of stocks “on the margin”
(Correct Answer: A. All of the others were major underlying reasons for the crash. American loans to Europe benefited both European countries and American banking houses until the crash.)
2. Wealthy businessmen who objected to the New Deal programs of Franklin Roosevelt claimed that
A. they unfairly aided the many who did not deserve it
B. Roosevelt was personally a traitor to his class
C. New Deal programs smacked of “Bolshevism”
D. New Deal programs unfairly regulated businesses
E. All of the above
(Correct Answer: E. All of the criticisms listed were heard throughout the 1930s.)

3. The purpose of the Federal Deposit Insurance Corporation (FDIC) was to
A. ensure that poor Americans had something to fall back on when they retired
B. inspect the financial transactions of important businesses
C. insure bank deposits of individual citizens
D. ensure that businesses established insurance funds for their workers, as mandated by congressional legislation
E. increase governmental control over the economy
(Correct Answer: C. The FDIC was established after the bank holiday to insure individual accounts in certified banks and to increase confidence in the banking system. Americans began to put money back into banks after its institution.)

4. One group of women who were able to keep their jobs during the Great Depression was
B. clerical workers
C. domestic workers
D. government employees
E. professional workers
(Correct Answer: C. In the other occupations women were oftentimes fired before men, or had their hours drastically reduced. Those women who were employed as domestic workers were relatively safe, as this was one occupation that men, as a whole, rejected.)

5. The popularity of Huey Long and Father Coughlin in the mid-1930s demonstrated that
A. most Americans felt that the New Deal had gone too far in undermining traditional American values
B. more Americans were turning to religion in the 1930s
C. most Americans favored truly radical solutions to America’s problems
D. many Americans felt that the government should do more to end the problems associated with the Depression
E. Franklin Roosevelt was losing the support of large numbers of voters
(Correct Answer: D. Many Americans wanted more New Deal-style programs and felt that Roosevelt should have gone even further in his proposed legislation. Many may have listened to Long and Coughlin, but when it came time to vote, cast their ballots for Roosevelt, thus negating answer C. The idea that the New Deal went too far in destroying American capitalism was popular in the business community, but was not widely shared in mainstream America.)
PASSAGE.—When a bill passes the house in which it originated, the clerk transmits and reports it to the other house for action. The house to which it is transmitted may pass it without commitment, but usually refers it to a committee, and, when reported, may pass it or reject it, or amend it and return it with the amendment to the house in which it originated. When passed by both houses, the bill is engrossed—that is, rewritten without blots or erasures—and transmitted to the President or governor, as the case may be, for his approval. If approved and signed, or if not returned within a fixed time, the bill becomes a law. If vetoed, it must be again considered by both bodies, and is lost unless again passed by each, and in Congress and in many States by a two thirds vote. 1. Obtain from any convenient source and present in the recitation a sample of a bill, and also of a resolution. 2. Why should a bill have three separate readings on three different days? 3. Why is the report of a committee generally adopted by the body? 4. Why are chairmanships of committees usually much sought after in legislative bodies? 5. Present in the recitation a copy of the report of a legislative committee upon some subject. REVENUE AND TAXATION. Revenue.—The regulation of revenue and taxation is one of the most important and difficult questions of government. One of the wisest of modern statesmen has said that the management of finance is government. Government, whatever its form, is an intricate and expensive machine, and therefore sure and ample sources of revenue are as necessary to it as blood is to the human body. The necessary expenses of a local community, such as a village, a city, or a county, are heavy; while those of a State are immense, and those of a nation almost beyond conception. These expenses must be promptly met, or the government becomes bankrupt, lacking in respect, without power to enforce its rights even among its own people, and finally ceases to exist. TAXATION.—The chief source of revenue in all governments is taxation. A tax is a portion of private property taken by the government for public purposes. Taxation, the act of laying taxes, is regarded as the highest function of government. It is also one of the most delicate, because it touches the people directly, and is therefore frequently the cause of discontent among the masses. The government makes no direct return to the citizen for the taxes it exacts, and in this respect only does taxation differ from the exercise of the right of eminent domain. How much revenue must be raised? what articles should be taxed? what should be the rate of taxation? are questions that concern every government.
A makerspace can be defined as a collaborative lab or studio space where students create, think, share and grow using an assortment of materials and technologies. These spaces can have high-tech “maker equipment” such as 3D printers, laser cutters and coding kits, or simple, no-tech tools such as Legos, Playdoh and cardboard. Every makerspace is unique, and what materials and technologies are used often depend on the types of projects the space is built for and who exactly it serves. As such, there is no one definition of a makerspace. Even the name “makerspace” varies: fab lab, TechShop, hackerspace or hacklab are all terms used by practitioners to describe the environment. The Maker Movement buzz is everywhere and so are Makerspaces. Makerspaces cultivate the imagination and provide a hands-on space to tinker, invent, and innovate. In the classroom, library, community center, or home, Makerspaces foster creativity, play, and problem-solving.
- Makerspace layout and design
- Budgeting for materials and construction
- Workshop and curriculum development
- Marketing/branding your Makerspace
- Sourcing your materials
Rhinitis is an inflammation of the mucous membranes lining the inside of the nose. The condition is often referred to as a common cold and may be either acute or chronic. The acute variety is characterized by sneezing and a thin, watery discharge from the nostrils. Swelling of the mucous membranes can impair breathing and also affect the animal’s sense of smell, which results in a weak appetite. In many cases, the condition is worsened by super-infections, which make the discharge become thick with mucus and pus. A typical symptom of this is crusted nostrils, which may further impair breathing. Dogs normally breathe through their nose and only breathe through their mouth when necessary, or when it is impossible for them to breathe through their nose. Sneezing typically persists throughout. An affected dog often tries to clear his nose by scratching or rubbing it, which may lead to bleeding. Repeated scratching of the nose can cause skin complaints and bleeding in this area.
If the rhinitis fails to heal, it can become a chronic condition. Nasal rhinitis is considered chronic if symptoms persist for at least two months. In chronic cases, nasal discharge is usually purulent (pus-filled). Affected dogs often have seriously impaired breathing and may show the previously mentioned lack of appetite. The condition often appears along with an inflammation of the sinuses or the lower airways. In this case fever, impaired overall health and pain also occur.
Rhinitis may be triggered by various causes. The most common cause is a viral or bacterial infection. Damp weather, drafts, and dusty or dry air can be predisposing factors for the infiltration of an infectious agent. Tumors, parasites or tooth abscesses can also produce symptoms of chronic rhinitis.
If a bacterial infection is causing rhinitis, the administration of a suitable antibiotic is usually effective in curing the condition. Viral rhinitis is often self-limiting and may regress spontaneously after a number of days. If disturbing environmental factors, such as exposure to damp and cold weather, are present, healing may be impaired and the condition can assume a chronic character. If an underlying cause of the condition is present, it will have to be treated accordingly. If there is a tooth abscess, it is usually lanced and drained. If nasal tumors are causing the rhinitis, surgery may be required to remove them, with or without chemotherapy.
If your dog is showing one or more of the symptoms mentioned above, contact your veterinarian and arrange for an appointment within the next couple of days. Dogs with rhinitis should not be kept in dusty environments.
Children Are Little Scientists
By Tim Seldin - President, The Montessori Foundation; Chair, The International Montessori Council
Children have an inbuilt drive for discovery. Encourage your child to observe the world and to feel a sense of wonder for everything in it. Maria Montessori believed that all children behave like "little scientists" in that they are eager to observe and make "what if" discoveries about their world. Infants and toddlers test the environment to see what happens when, for example, they drop a toy out of their highchair or play with the water in their bath. This drive for discovery continues to develop as they grow and become more adventurous in the things that they try out, from making mud pies in the garden to starting a worm farm in the living room. Children are born with marvelous imaginations and a keen desire to explore the world. Encourage this in your child - help her to discover the beauty and wonder of everything around her.
Child's eye view
Remember that your child's world is up close and low to the ground. Seeing life from her point of view can help you to rediscover the sense of wonder of a young child. Keep in mind the slow-moving pace of her world. Follow your child's lead, and be prepared to stop and examine anything that captures her interest - a ladybug or a flower, for example. Don't get impatient when she dawdles - adjust to her pace. The best way for children to learn is by doing things, not by being told about them. This is especially true when they are young, but it also applies to older children and even adults. When children are young, they are not only learning things, they are learning how to learn. No book using words and illustrations to describe the world that exists around a small brook or under a rotting log can replace the value of spending time closely studying the real thing. Books and other materials help children to pull these powerful impressions and experiences together in their minds, but the foundation needs to be laid in direct observation and hands-on experience.
The outdoor world
Children love to be outdoors, wandering around, climbing trees, picking berries, and collecting pinecones. They enjoy helping to look after the family garden or feeding small animals such as ducks, rabbits, and chickens. They form lifelong memories of days spent hiking with their parents in the woods, playing in a creek, and walking along a beach looking for shells. You will probably begin your child's life outdoors by taking her out for little excursions in her stroller or carrying her on your back. Take time to introduce her to your world. Even very young infants absorb the sights and sounds of the outdoors – clouds passing overhead, the sight and smell of flowers in the garden, the wind rustling the leaves in the trees. All these leave a strong and lasting impression. Whether it is summer, fall, spring, or winter, every season has its own beauty. Point out small things: a tiny flower poking up through the snow, a beautiful shell, a perfect leaf. As your child gets older, begin to point out familiar things as you walk around. "Look, there's Grandma's house! What lovely flowers she has growing outside her door!" or "My goodness, Mary, can you see the nest those birds have built in the tree? Some day they will lay eggs, and they will have baby birds up there!" In the winter, when you see animal tracks in the fresh snow, ask, "Who has been walking here?"
Definition - What does Easement mean? An easement is a legal property privilege, permission or right to use or enter someone else's property – without possessing it – in order to travel from point A to point B. An easement gives a person the right to travel across another person’s land, but does not give them the right to do anything else with that person’s land. It generally arises between owners whose properties are adjacent to each other. Easements can include a variety of rights granted by a property owner to someone else. Right of way is one of the most common examples of an easement; in such cases, a property owner with no street front is granted the right to use a specific part of a neighbor's land in order to access the road. Other common forms of easements include the right of light and air, rights concerning artificial waterways and rights concerning excavations. Justipedia explains Easement Historically, easements provided the right of way and rights relevant to flowing water. In the United States, an easement can be granted by one property owner to someone else in the form of a deed, will or contract, such that it complies with the statute of frauds. Some easements are made by agreement between two parties, while other easements are automatically created in some instances. The following is an example of when an easement would be required of a person: Person A owns land that is landlocked, except for its south entrance. Person A then sells the top half of the land to Person B. In this situation, the only way that Person B could get on and off the land is to travel through Person A’s land.
The Arts are essential to the core educational experience of all children. They consist of unique disciplines that foster multiple intelligences, and build the bridges that strengthen bonds between cultures and generations. The nurturing of every student’s ability to perform, create, and respond to the Arts empowers and encourages the intellectual, social, and emotional growth of all children. Through Arts education children learn responsibility, self-discipline, commitment, and collaboration, which are key attributes of successful learners. Furthermore, higher order thinking skills that include evaluation, analysis, and synthesis, are constantly engaged as students reflect on the Arts, aspire to reach performance standards, and learn to make qualitative judgments. The Arts are an integral part of a child’s education. Participation in the Arts fosters communication and understanding of ourselves and those around us. Skills learned through the Arts reinforce and improve learning in many subject areas such as reading, language, and physical education. It is through the Arts that students experience the culture and aesthetics that enrich all aspects of life. 10 lessons the Arts Teach: - The arts teach children to make good judgments about qualitative relationships. Unlike much of the curriculum in which correct answers and rules prevail, in the arts, it is judgment rather than rules that prevail. - The arts teach children that problems can have more than one solution and that questions can have more than one answer. - The arts celebrate multiple perspectives. One of their large lessons is that there are many ways to see and interpret the world. - The arts teach children that in complex forms of problem solving purposes are seldom fixed, but change with circumstance and opportunity. Learning in the arts requires the ability and a willingness to surrender to the unanticipated possibilities of the work as it unfolds. - The arts make vivid the fact that neither words in their literal form nor numbers exhaust what we can know. The limits of our language do not define the limits of our cognition. - The arts teach students that small differences can have large effects. The arts traffic in subtleties. - The arts teach students to think through and within a material. All art forms employ some means through which images become real. - The arts help children learn to say what cannot be said. When children are invited to disclose what a work of art helps them feel, they must reach into their poetic capacities to find the words that will do the job. - The arts enable us to have experience we can have from no other source and through such experience to discover the range and variety of what we are capable of feeling. - The arts' position in the school curriculum symbolizes to the young what adults believe is important. SOURCE: Eisner, E. (2002). The Arts and the Creation of Mind.
Testing the function of female song in the Bachman's sparrow
Rindy Anderson, Florida Atlantic University
Our lab studies animal communication, in particular, the acoustic structure and social function of bird song. One of our ongoing projects at Jonathan Dickinson State Park is to study the structure and function of female song in the Bachman's sparrow. In mid-July, we rotated our SM4 recorder near the nests of several of the mated pairs we were monitoring, trying to capture recordings of the elusive females. Female birds do not sing in the majority of North American songbird species, but Bachman's sparrow females do sing. They don't sing with the showiness or bravado that their mates do, yet they do produce song-like vocalizations. We want to know why. Over the past two breeding seasons, we made several observations of females singing in the proximity of their mates, and near the nests they were building. We obtained good quality recordings from one female, which will allow us to develop a protocol for making larger scale acoustic comparisons between the songs of males and females next season. In addition, we placed the recorder on the territories of males that had been subjects in the aggression experiment we were completing. We recorded each male for 24-48 hours to gain two critical pieces of information: singing patterns from pre-dawn to post-dusk, and singing patterns during undisturbed, unprovoked singing. We are now comparing those singing patterns to the patterns we recorded in response to a simulated territorial intrusion by a singing rival male. In April 2018, we will obtain recordings of 8-10 females to use for acoustic analysis and for playback experiments designed to test when and why females sing.
Female birds do not sing in the majority of North American songbird species. Bachman's sparrow females do not sing with the showiness or bravado that their mates do, but they do produce song-like vocalizations. Below is a spectrogram (a visual representation of sound plotting song pitch over time, much like music is visualized) showing an example of one female's song that we recorded using our SM4 song meter. Also pictured are examples of male broadcast songs (called primary song) and an example of "warbled song," which is quite distinct from primary song. The female songs we have visualized so far bear some resemblance to male warbled song, being a non-stereotyped, seemingly jumbled series of notes. During the next field season, we will use the songs we have recorded to perform playback experiments to measure female behavioral responses to a simulated female intruder. We bought four additional SM4 units, which will allow us to rotate the meters among the territories of many more females to capture song at different stages of the nesting cycle.
In addition to our study of female song, we used the SM4 meter to record many hours of male Bachman's sparrows singing at the dawn chorus, and throughout the day. From these recordings we are gaining understanding about how males use their song type repertoires in different behavioral contexts, and the degree to which males share song types. Our preliminary data suggest that neighbors share a large number of song types on average (> 50%) while non-neighbors share fewer song types (< 30%). This pattern has implications for how males use their songs to communicate with neighbors, and how song type sharing may influence where young males choose to defend a territory.
We continue our efforts to gather data on the acoustic structure and social function of female song in Bachman's Sparrow. This is a shy, elusive species. Females are tricky to find, and even more challenging to record, because they sing infrequently. In 2017 we captured a few recordings of female song. So far in 2018, we have observed females singing in the field, but are still working to capture audio recordings. Our observations suggest that females sing when fertile, in temporal proximity to copulation. Perhaps their songs serve as an invitation to mate? Working on this hunch, we are placing our SM4 recorders on trees within 10 meters or so of nests that are being built, and as eggs are being laid. We have not yet analyzed the many hours of recordings obtained so far (over 620 hours!), but we are hopeful that we have captured examples of female song from several different birds. With these recordings, we will compare the acoustic structure of female song to those of males, and quantify variation in female songs both within and between females. Male Bachman's sparrows sing over 40 types of Primary Song – do females also sing many song types? Does an individual's song vary from day to day, or does she sing a consistent song? How will females respond to songs of other females played on their territories? We look forward to digging into our data to answer these and other questions about this interesting species.
Our lab studies animal communication, in particular, the acoustic structure and social function of bird song. One of our projects is to study the structure and function of female song in the Bachman's sparrow. Female birds do not sing in most North American songbird species, but Bachman's sparrow females produce song-like vocalizations. We want to know why. Over the past two breeding seasons here in South Florida, we made several observations of females singing in the proximity of their mates, and near the nests they were building. In April 2018 we began placing our SM4 recorders near nests during the building stage, hoping to capture recordings of female song. This has proven to be a challenging task! In 2017, we recorded on three territories for a total of 118 hours of recordings. From April – July 2018, we recorded on a total of 38 territories, placing the recorders near known active nests, for a total of 3,108 hours of recording. So far we have found three good examples of female song, and these recordings will serve as stimuli for a playback experiment next spring in which we will test the responses of territorial pairs to playbacks of female song at different stages of the nesting cycle.
Our primary goal for the next several months is to analyze the many hours of recordings we gathered this past season to find additional examples of female song. We will tackle this challenge using a custom software program written by one of our undergraduate students. This program can be "trained" to look for vocalizations matching the acoustic qualities of female Bachman's sparrow song. This program will automate and thus greatly speed up the process of combing through the recordings, and we hope to find at least a dozen examples of female song by March 2019.
In addition to using our SM4 recorders to capture female song, we have been using them to "eavesdrop" on the natural singing interactions of neighboring male sparrows. Bachman's sparrow males have large repertoires of broadcast song types, and neighboring males tend to share quite a few types in common.
In the field, we often hear males within ear-shot of each other counter-singing by matching each other's song types. We are using the SM4 recorders to capture the natural dynamics of these social interactions. This season we recorded a total of 10 sets of neighbors by placing an SM4 recorder near the boundary between neighboring territories. We recorded approximately 12 hours a day for several days for each pair of neighbors (2 hours before and 4 hours after sunrise, and 4 hours before and 2 hours after sunset). One student in the lab is now poring over these recordings to document and describe cases of natural song type matching interactions, which has not been done for this species. So far she has found several astonishing examples of song matching, in which males matched song-for-song during bouts of counter-singing. Why do they do this? We will use these SM4 data along with song playback experiments to try to uncover the social significance of song matching behavior.
In addition, a graduate student in the lab will be analyzing the 3,108 hours of territorial recordings from the SM4 recorders to gain two critical pieces of information: singing patterns of individual birds from pre-dawn to post-dusk, and singing patterns during undisturbed, unprovoked singing. We have many hours of recordings of males singing in response to simulated territorial intrusions, in which we use song playback to provoke territorial behavior. However, little is known about how male Bachman's sparrows utilize their large repertoires during bouts of natural advertisement singing, which are most common at dawn and dusk.
Each field season with Bachman's sparrow brings new challenges and exciting new questions to tackle. We are very enthusiastic to be adding to general knowledge about this understudied and enigmatic species, and why it has evolved such a large and varied vocal communication system. In addition, we are beginning new projects with the Northern cardinal in South Florida, in which we will continue our studies of female song, and will ask new questions about how vocal communication differs across urbanization gradients. We are deeply grateful to Wildlife Acoustics for their Scientific Product Award of an SM4 meter, which has been a game-changer for our research!
Activities and Lesson Plans
You may print Turtle Hurdles children's pages from the Texas Parks & Wildlife Magazine. We hope you'll consider a subscription to our magazine. Be sure to check out the Texas Parks & Wildlife Magazine special offer for teachers. And please let us know your suggestions for future issues at: [email protected]
Suggested Topics: adaptations, classification, habitat, migration, myths.
Related 4th Grade TEKS:
- Language Arts:
- 4.1 A, B, C: Listening, Speaking, Purposes: Listens Actively and Purposefully in a Variety of Settings
- 4.13 C, D, G: Reading, Inquiry, Research: Inquires and Conducts Research Using a Variety of Sources
- 4.15 C: Writing, Purposes: Writes for Variety of Audiences and Purposes in Various Forms
- Social Studies:
- 4.9 A, C: Geography: Humans Adapt to and Modify their Environment
- 4.23 C, E: Social Studies Skills: Communicates in Written, Oral and Visual Forms
- 4.24 A: Social Studies Skills: Problem Solving and Decision Making
- Science:
- 4.1 A, B: Scientific Processes: Conducts Field and Laboratory Investigations
- 4.2 A, B, C, D, E: Scientific Processes: Develops Abilities to do Scientific Inquiry in Field and Laboratory
- 4.3 D: Scientific Processes: Uses Critical Thinking and Scientific Problem Solving to Make Informed Decisions
- 4.5 B: Science Concepts: Parts Removed from Complex Systems
- 4.8 B: Science Concepts: Adaptations Increase Survival
- 4.10 B: Science Concepts: Past Events Affect Present and Future Events
- Math:
- 4.3 A: Number, Operations and Quantitative Reasoning: Addition and Subtraction
- 4.13 C: Probability and Statistics: Solve Problems by Collecting, Organizing, Displaying and Interpreting Data
- 4.15 A: Underlying Processes and Mathematical Tools: Communicates about Math
- 4.16 A: Underlying Processes and Mathematical Tools: Uses Logical Reasoning
- Have you ever seen a turtle outside? Where did you see it? What did it look like?
- Turtles are reptiles. What characteristics make them reptiles?
- Why are the red-eared sliders sitting in the sun?
- The article describes three different groups of turtles based on where they live. What are they?
- Which sea turtle is the smallest and most rare?
- How did the "leatherback" get its name?
- Which turtles have flippers for feet? Why is this an advantage?
- Describe a turtle's shell. Do you think a turtle can crawl out of its shell? Why or why not?
- Describe the inherited trait of the alligator snapping turtle that helps it catch its prey.
- After reading the magazine, describe why you think the author titled it "Turtle Hurdles."
- What can you do to help turtles?
- Compare and contrast the turtles featured in the magazine. Make a chart to help with the comparison.
- Have students choose an episode of Tortuga Tex to describe a Texas aquatic habitat or species.
- Have students make presentations about one of the species of turtles based on an interesting characteristic. See if the students can group the turtles by whether they are sea turtles, freshwater turtles or terrestrial turtles. Have students use one of the vocabulary words in their report. For more advanced readers, use the links below as sources for their research. For students doing research on sea turtles, National Geographic has an excellent illustration at: http://ngm.nationalgeographic.com/2009/05/leatherback-turtles/swimming-machine-interactive
- Color a Texas tortoise
- Learn about sea turtle migration.
- Voyage of the Lonely Turtle: www.pbs.org/wnet/nature/episodes/voyage-of-the-lonely-turtle/introduction/2503/
- Lesson plans: Leatherback migration: www.nationalgeographic.com/xpeditions/lessons/09/gk2/migrationturtles.html and Human migration: www.nationalgeographic.com/xpeditions/lessons/09/g35/
- Background on sea turtle migration: http://news.nationalgeographic.com/news/2001/10/1012_TVanimalnavigation.html
- Have students identify turtles used in media and myth. What characteristics of turtles does the story or character use? (For starters, use: Crush in the movie Finding Nemo; Franklin on Nick Jr.'s Noggin; The Tortoise and the Hare; Teenage Mutant Ninja Turtles.)
- Make a turtle craft. Do a search on the web for many ideas.
Interesting Links for Further Research
- Turtle as swimming machine illustration
- Restoration efforts for sea turtles
- Threats to sea turtles
- Sea turtles in Texas
- Girl scouts help find, measure tracked turtles in S Carolina
- Texas Kemp's ridley in the news
- Rules and identification of Texas Turtles
Question 1: Elaborate on either the Enlightenment or the Great Awakening. How did the movement impact the ideological development of the colonies?
Question 2: Explain the purpose of the Proclamation of 1763. Was the proclamation effective? Why or why not? How did colonials, natives, and the British react to both the Proclamation and its effects?
Hi and thank you for using Brainmass. The solution below should get you started. Good luck on your studies. You can also use the listed references for further research.
OTA 105878/Xenia Jones
The Enlightenment in Colonial America
The Enlightenment was an intellectual movement that began and took grip in Europe - in its academic halls, in the discourse among the learned and the thinkers. Basically, it marked a shift in thinking: whereas prior to it religion was used to establish knowledge, during the Enlightenment observation and facts weighed by reason determined what was accepted as knowledge. The Enlightenment, historians argue, began with René Descartes' declaration of 'cogito ergo sum' - I think, therefore I am. This argues that we are who we are because of our minds, and it is our cognition, our thinking, that allows us to be, to understand and to interact with our world. The Enlightenment flourished well into the 18th and early 19th centuries, and through it science and the scientific method were established as a means to create knowledge and to discover and understand our world. The Enlightenment led to explorations of philosophy, of the sciences both social and natural and many other avenues of thinking, of knowledge and of the ...
The solution is a 2-part narrative that discusses the impact of the Enlightenment on the intellectual and ideological development of the colonies and also goes into a discussion of the Proclamation of 1763 - assessing its effectiveness in relation to the reaction of the colonials, the natives and the British. References are listed for the purpose of expansion on the topic.
When an object is travelling on a circular path, it constantly changes its direction due to the curved path. If the rate at which it rotates also changes, then its angular velocity changes with time, and this rate of change is known as the angular acceleration. Angular acceleration is therefore defined as the change in the angular velocity with respect to the change in time. Instantaneous angular acceleration follows the same definition, but here the change in angular velocity is considered at a given particular instant of time. Angular acceleration is a vector quantity since it has both magnitude and direction.
Example 1: The angular velocity of an object travelling in a circular path changes by 15 rad/sec over a time interval of 5 secs. Calculate the angular acceleration of the object.
The formula is given by: Angular acceleration, α = Δω/Δt
Here, Δω = change in the angular velocity = 15 rad/sec
Δt = change in time or time interval = 5 secs
This gives: Angular acceleration, α = (15 rad/sec)/(5 secs)
Hence the angular acceleration of the given object, α = 3 rad/sec²
Example 2: An object starts from rest and its angular velocity increases uniformly, reaching 38 rad/sec at time t = 8 secs. What is the instantaneous angular acceleration at that instant? (Because the angular velocity grows uniformly, the instantaneous value equals the average value over the interval.)
The formula is given by: Instantaneous angular acceleration, α = dω/dt
Given: dω = change in angular velocity over the interval = 38 rad/sec
dt = elapsed time = 8 secs
This gives: α = (38 rad/sec)/(8 secs)
==> Instantaneous angular acceleration, α = 38/8 = 4.75 rad/sec²
Hence the instantaneous angular acceleration of the object is 4.75 rad/sec²
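For reference, the relations used in these two examples can be written compactly as follows (standard rotational kinematics; the second-derivative form in terms of the rotation angle θ is not stated in the text above but follows from ω = dθ/dt):
\[
\alpha_{\text{avg}} = \frac{\Delta \omega}{\Delta t},
\qquad
\alpha(t) = \frac{d\omega}{dt} = \frac{d^{2}\theta}{dt^{2}},
\]
both measured in rad/sec².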
The Ottoman period for Athens began in 1458 with the city’s peaceful occupation, following a treaty between the Ottomans and the last duke of the Acciaioli, and ended in 1821 with the proclamation of Greek Independence. During this period the city was in Ottoman hands continuously with the exception of a brief interval of Venetian occupation between 1687 and 1688, which is usually taken as the boundary between the historical subdivisions of the first and second Ottoman periods. In the first years following the conquest, the Ottoman presence was limited to the area of the Acropolis, and the city seems not to have expanded beyond the boundaries of the late Roman walls. Soon, however, with the arrival of new Christian inhabitants and, later, the increase in the Muslim population as well, the city expanded and was gradually reorganized, as it enjoyed administrative and religious privileges bestowed upon it by Mehmet the Conqueror himself. This course of development was interrupted in 1687 by the Venetian attack led by Captain-General Morosini, whose siege and bombardment of the Acropolis brought the greatest destruction to its monuments in history. The Venetians left Athens after five months, and the Ottomans, as well as the Greek inhabitants who had in the meantime taken refuge in neighboring regions, returned in 1690. The subsequent period was characterized by social and economic realignment, while from 1790 the city’s incorporation into the administrative system known as malikâne led to a drastic deterioration in the living conditions of the inhabitants, reaching its nadir with the twenty-year despotic rule of the voyvoda Hacı Ali Haseki (1775-1795). Throughout the period, the organization of the urban fabric seems to have followed the basic distinction between areas of the citadel (kastro) and of the lower city. This distinction also extended to the corresponding economic and social functions of the two areas. The relevant sources for the first two centuries are limited, but after the third quarter of the 17th century, valuable information is to be found in the descriptions of foreign travelers who visited the city ever more frequently and described its topography and sketched its monuments. The Acropolis, or “kastro,” was the exclusive residential compound of the Muslim population and the Ottoman garrison. Among the first projects of the Ottomans was the conversion of the Christian church in the Parthenon into a mosque and the repairs to the fortifications, which were more systematically strengthened in the 17th century with the construction of the Wall of the Ypapanti. The lower city, densely built with habitations and public buildings, was divided up into open spaces and parishes/neighborhoods, and concentrated economic, administrative and religious functions. Commercial activities — focused in the daily (lower bazaar) and weekly (upper bazaar) market — flourished in the area of the Library of Hadrian and the Roman Agora, where the administrative center and voyvoda’s residence were still located. This picture of the public spaces of Ottoman Athens is filled in with baths, mosques, tekkes and educational establishments, as well as a large number of churches, both old and new. As for works of fortification, in 1778 the only such project was the Haseki wall, an almost makeshift construction, built from materials plundered from ancient monuments. 
After the outbreak of the War of Independence, the Greeks managed to take control of Athens for four years (1822-1826), during the course of which attempts were made to reorganize public life and maintain the city’s monuments. However, in 1827, the forces of Kütahi recaptured the city, which was once again abandoned by its inhabitants. The city’s final emancipation, with the surrender of the Acropolis to the new Bavarian rulers, took place on the 31st of March 1833, and Athens’ proclamation as the capital of the independent Greek State soon followed. Raïna Pouli in Ottoman Architecture in Greece, Hellenic Ministry of Culture, Directorate of Byzantine and Post-Byzantine Antiquities, 2008, reprinted courtesy of the Hellenic Ministry of Culture.
Head of Department: Kylie Crawley
Members of Department:
- Emma Carr
- Russell Maddison
Psychology is the scientific study of the mind and behaviour. The human mind is the most complex machine on Earth. It is the source of all thought and behaviour. Students study theories about behaviour, key studies and research, as well as learning how to conduct their own experiments, both at GCSE and A level.
KS4 Curriculum and Enrichment Opportunities (Years 9-11)
GCSE Psychology is studied over 1 year and can be chosen as an option in Year 9, 10, or 11. The course investigates the following 5 research questions and subtopics, which are assessed through two examinations.
Unit 1: Perception and Dreaming (40% of final grade)
- How do we see our world?
- Is dreaming meaningful?
Unit 2: Social and Biological Psychological Debates (60% of final grade)
- Do TV and video games affect young people’s behaviour?
- Why do we have phobias?
- Are criminals born or made?
We arrange a selection of extracurricular events in order for students to develop their research skills and see Psychology in action. These include visits to the Zoo and the Welcome Museum, as well as guest speakers.
KS5 Curriculum and Enrichment Opportunities (Years 12-13)
A level Psychology is taught over 2 years, with three exams at the end of the two years, and is comprised of the following:
In Year 1 students gain a broad knowledge of Psychology by studying 4 approaches and by applying their knowledge and skills to key questions, whilst considering issues and debates including how psychological understanding has developed over time, the nature-nurture debate, issues of social control and issues related to socially sensitive research. In Social Psychology students will investigate areas such as prejudice, obedience, and the impact of role models. In Cognitive Psychology students will learn about models of memory, look at case studies of brain-damaged patients, and consider disorders such as dyslexia or Alzheimer’s. In Biological Psychology they will look at the CNS, the role of the brain in aggression and brain abnormalities. Finally, in Learning Theories students will study classical conditioning, operant conditioning and social learning theory, considering the ways in which these influence behaviour. In each approach students will be required to perform mathematical calculations and undertake scientific research, so they will need to be familiar with subject-specific terminology of a scientific and mathematical nature. A key component of the course is designing and conducting their own experiments, as well as reviewing them.
In Year 2 students investigate areas that Psychology is applied to, including Clinical Psychology and Child Psychology. Here they gain a closer look at Psychology in action and the practical applications of psychological theories and therapies. They will also be required to demonstrate their psychological skills, including scientific methods and a synoptic review of studies, and must be able to demonstrate a broad understanding of issues and debates as they apply to Psychology.
Students are encouraged to widen their studies and are supported with extra-curricular activities and events, such as guest speakers, student conferences, theatre trips, and a trip to the Freud Museum.
The department is continuously updating and reviewing resources. We have a large and growing selection of books, publications and magazines for students to further and consolidate their knowledge.
Other Important Information
- We subscribe to Psychology publications and magazines that are aimed at developing your subject knowledge and exam skills.
- We have over 30 Horizon programmes on various psychological issues/debates available for loan.
- We also lend out various novels (We Need to Talk About Kevin, The Curious Incident of the Dog in the Night-Time).
- We have various non-fiction books such as Bad Science, Jigsaw Man, and Opening Skinner's Box to borrow.
- There are many Psychology-related DVDs, including A Beautiful Mind, Rain Man, Green Street, One Flew Over the Cuckoo's Nest, 50 First Dates and Shutter Island, to watch.
- We attend workshops and university open days to give you a taste of Psychology at university.
Each term two students from each class are rewarded with £5 WH Smith vouchers. These are awarded at the teachers’ discretion but are generally for improvement or academic achievement.
- Regularly, certificates are sent home to parents, as well as postcards notifying parents and carers of the progress made by their child.
Lesson 1 (from Prologue)
In the prologue, Lestat introduces himself to the reader, reminding those who have read the first two books that he is a vampire who seeks fame and fortune in the mortal world. In that, he has been successful, having written a successful autobiography, sold records detailing information about the vampire world, and staged a sold-out rock concert. The prologue is entirely in the first person, though Lestat says much of the rest of the book will be told from other points of view. The objective of this lesson is for students to understand the concept of point of view.
Activity 1. Class discussion: What does point of view mean? (Who is telling the story.) What are the most common points of view used by an author? (Objective, first person, third person.) In each case, what does the narrator know and what is the narrator able to...
This section contains 9,313 words (approx. 32 pages at 300 words per page)
Amino acids are essential for life, and researchers believe these compounds could be produced as a result of a comet colliding with a planet. Researchers believe this “cosmic factory” could lead to new insights on how life began on Earth, and it suggests an intriguing process by which life might form on other planets. Researchers from Imperial College London and the University of Kent discovered that, under the right circumstances, a comet, with its icy nucleus and surface of dust and debris, colliding with a planet could lead to the creation of amino acids. The process also works when a meteorite crashes into an icy planet. The research was published in the journal Nature Geoscience.
According to the researchers, the resulting shock wave, caused by a comet or meteorite, creates molecules that make up amino acids, and the heat from the impact turns those molecules into amino acids. The researchers experimented with this process by firing projectiles from a high-speed gun into icy targets with a mixture of components similar to that found in comets. From the shock wave and heat from impact, researchers created the amino acids “D- and L-alanine, and the non-protein amino acids α-aminoisobutyric acid and isovaline as well as their precursors.”
Co-author Mark Price, from the University of Kent, said in a statement, “This process demonstrates a very simple mechanism whereby we can go from a mix of simple molecules, such as water and carbon-dioxide ice, to a more complicated molecule, such as an amino acid. This is the first step towards life. The next step is to work out how to go from an amino acid to even more complex molecules such as proteins.”
Co-author Zita Martins, from Imperial College London, said the simple process could allow for the creation of amino acids throughout the universe, given the right conditions. “Excitingly, our study widens the scope for where these important ingredients may be formed in the Solar System and adds another piece to the puzzle of how life on our planet took root,” said Martins.
According to the researchers, this process could have helped life begin on Earth during a period, between approximately 4.5 billion and 3.8 billion years ago, when comets and meteorites routinely collided with Earth. The process could also lead to renewed interest in icy moons such as Saturn’s Enceladus and Jupiter’s Europa, as meteorites that crash into these moons could produce amino acids.
This skills book is designed for use in conjunction with "The Macmillan Picture Dictionary". The book features vocabulary skills and development exercises for primary learners of English. A picture dictionary is ideal for introducing young children to books at a very early stage in their education. Children can "read" picture dictionaries without knowing how to read or write any words at all - either in their own language or in English. Children can also compare their experience of the world with the wider world illustrated in the dictionary. The illustrated dictionary contains almost 600 words chosen for their high frequency value and their appeal to young learners of English. The dictionary is split into two sections: an alphabetical wordlist containing almost all the words included in the dictionary, and theme pages, which introduce related words in context. More than two-thirds of the words listed in the dictionary are also illustrated in the theme pages. The book also includes detailed teaching notes and suggestions for practice activities, including tasks for children who have already begun to master the skills of reading and writing.
The functions in this category are the big ones in terms of providing the true power of a GIS. (So pay attention!)
A. Spatial relationship functions
ST_Contains() takes two geometries as input and determines whether the first geometry contains the other. The example below selects each city in the state of New York and checks to see if it is contained by a bounding box, the box representing the bounds of Pennsylvania which we created earlier using ST_MakeEnvelope(). This query should return True values for the border cities of Binghamton, Elmira and Jamestown, and False for all other cities.
SELECT name, ST_Contains(ST_MakeEnvelope(-80.52, 39.72, -74.70, 42.27, 4269), geom)
FROM usa.cities
WHERE stateabb = 'US-NY';
The converse of the ST_Contains() function is ST_Within(), which determines whether the first geometry is within the other. Thus you could obtain the same results returned by ST_Contains() by reversing the geometries:
SELECT name, ST_Within(geom, ST_MakeEnvelope(-80.52, 39.72, -74.70, 42.27, 4269))
FROM usa.cities
WHERE stateabb = 'US-NY';
ST_Covers() will return the same results as ST_Contains() in most cases. To illustrate the difference between the two functions, imagine a road segment that is exactly coincident with a county boundary (i.e., the road forms the boundary between two counties). If the road segment and county geometries were fed to the ST_Contains() function, it would return False. The ST_Covers() function, on the other hand, would return True.
ST_CoveredBy() is to ST_Covers() as ST_Within() is to ST_Contains().
ST_Intersects() determines whether two geometries share the same space in any way. Unlike ST_Contains(), which tests whether one geometry is fully within another, ST_Intersects() looks for intersection between any parts of the geometries. Returning to the road/county example, a road segment that is partially within a county and partially outside of it would return False using ST_Contains(), but True using ST_Intersects().
ST_Disjoint() is the converse of ST_Intersects(). It returns True if the two geometries share no space and False if they intersect.
ST_Overlaps() is quite similar to ST_Intersects(), with a couple of exceptions: (a) the geometries must be of the same dimension (i.e., two lines or two polygons), and (b) one geometry cannot completely contain the other.
ST_Touches() returns True if the two geometries are tangent to one another but do not share any interior space. If the geometries are disjoint or overlapping, the function returns False. Two neighboring land parcels would return True when fed to ST_Touches(); a county and its parent state would yield a return value of False.
ST_DWithin() performs "within a distance of" logic, accepting two geometries and a distance as inputs. It returns True if the geometries are within the specified distance of one another and False if they are not. The example below reports on whether features in the NYC pts table are within a distance of 2 miles (5280 feet x 2) of the Empire State Building.
SELECT ptsA.name, ptsB.name, ST_DWithin(ST_Transform(ptsA.geom,2260), ST_Transform(ptsB.geom,2260), 5280*2)
FROM pts AS ptsA, pts AS ptsB
WHERE ptsA.name = 'Empire State Building';
Some important aspects of this query are:
- The geometries (stored in NAD83 latitude/longitude coordinates) are transformed to the New York East State Plane coordinate system before being passed to ST_DWithin(). This avoids measuring distance in decimal degrees.
- A cross join is used to join the pts table to itself.
As we saw in Lesson 1, a cross join produces the cross product of two tables; i.e., it joins every row in the first table to every row in the second table.
- The WHERE clause restricts the query to showing just the Empire State Building records; if that clause were omitted, the query would output every combination of features from the pts table.
ST_DFullyWithin() is similar to ST_DWithin(), with the difference being that ST_DFullyWithin() requires each point that makes up the two geometries to be within the search distance, whereas ST_DWithin() is satisfied if any of the points comprising the geometries are within the search distance. The example below demonstrates the difference by performing a cross join between the NYC pts and polys tables.
SELECT pts.name, polys.name,
  ST_DWithin(ST_Transform(pts.geom,2260), ST_Transform(polys.geom,2260), 5280*2),
  ST_DFullyWithin(ST_Transform(pts.geom,2260), ST_Transform(polys.geom,2260), 5280*2)
FROM pts CROSS JOIN polys
WHERE pts.name = 'Empire State Building';
ST_DWithin() reports that the Empire State Building and Central Park are within 2 miles of each other, whereas ST_DFullyWithin() reports that they are not (because part of the Central Park polygon is greater than 2 miles away). Note that this query shows an alternative syntax for specifying a cross join in Postgres.
B. Measurement functions
The key point to remember with ST_Area() is to use it on a geometry that is suitable for measuring areas. As we saw in Lesson 3, the ST_Transform() function can be used to re-project data on the fly if it is not stored in an appropriate projection. ST_Area() can be used on both geometry and geography data types. Though geography objects are in latitude/longitude coordinates by definition, ST_Area() is programmed to return area values in square meters when a geography object is passed to it. By default the area will be calculated using the WGS84 spheroid. This can be costly in terms of performance, so the function has an optional use_spheroid parameter. Setting that parameter to false causes the function to use a much simpler but less accurate sphere. See Lesson 3 for example usages of this function.
ST_Distance() calculates the 2D (Cartesian) distance between two geometries. It should only be used at a local or regional scale when the curvature of the earth's surface is not a significant factor. The example below again uses a cross join between the NYC pts table and itself to compute the distance in miles between the Empire State Building and the other features in the table:
SELECT ptsA.name, ptsB.name,
  ST_Distance(ST_Transform(ptsA.geom,2260), ST_Transform(ptsB.geom,2260))/5280
FROM pts AS ptsA CROSS JOIN pts AS ptsB
WHERE ptsA.name = 'Empire State Building';
The ST_Distance() function can also be used to calculate distances between geography data types. If only geography objects are supplied in the call to the function, the distance will be calculated based on a simple sphere. For a more accurate calculation, an optional use_spheroid argument can be set to True, as we saw with ST_Area().
ST_Distance_Spheroid() and ST_Distance_Sphere()
These functions exist to provide for high-accuracy distance measurement when the data are stored using the geometry data type (rather than geography) and the distance is large enough for the earth's curvature to have an impact. They essentially eliminate the need to transform lat/long data stored as geometries prior to using ST_Distance().
The example below illustrates the use of both functions to calculate the distance between Los Angeles and New York.

SELECT cityA.name, cityB.name,
ST_Distance_Sphere(cityA.geom, cityB.geom)/1000 AS dist_sphere,
ST_Distance_Spheroid(cityA.geom, cityB.geom, 'SPHEROID["GRS 1980",6378137,298.257222101]')/1000 AS dist_spheroid
FROM cities AS cityA CROSS JOIN cities AS cityB
WHERE cityA.name = 'Los Angeles' AND cityB.name = 'New York';

Note that the Spheroid function requires specification of a spheroid. In this case the GRS80 spheroid is used because it is associated with the NAD83 GCS. Other spheroid specifications can be found in the spatial_ref_sys table in the public schema. You can query that table like so:

SELECT srtext FROM spatial_ref_sys WHERE srid = 4326;

The query above returns the description of the WGS84 GCS, including its spheroid parameters. These parameters could be copied for use in the ST_Distance_Spheroid() function as in the example above.

ST_Length()
This function returns the length of a linestring. (The length of polygon outlines is provided by ST_Perimeter(); see below.) As with measuring distance, be sure to use an appropriate spatial reference. Here we get the length of the features in our NYC lines table in feet:

SELECT name, ST_Length(ST_Transform(geom,2260))
FROM lines;

As with the ST_Distance() function, ST_Length() accepts the geography data type as an input and can calculate length using either a sphere or spheroid.

ST_Length_Spheroid()
Like the ST_Distance_Spheroid() function, this function is intended for measuring the lengths of lat/long geometries without having to transform them to a different spatial reference. (A sketch of its usage appears at the end of this section.)

ST_Length3D()
This function is used to measure the lengths of linestrings that have a Z dimension.

ST_Length3D_Spheroid()
This function is to ST_Length3D() as the ST_Length_Spheroid() function is to ST_Length(). It makes it possible to measure the lengths of lines with a Z component using a spheroid rather than a 2D approach.

ST_Perimeter()
This function is used to measure the length of the exterior ring of a polygon. Here we obtain the perimeter of Central Park:

SELECT name, ST_Perimeter(ST_Transform(geom,2260))
FROM polys;

ST_Perimeter3D()
This function is used to measure the perimeter of polygons whose boundaries include a Z dimension.
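Finally, the ST_Length_Spheroid() discussion above did not include a query. Here is a minimal sketch of how it might be used, assuming the NYC lines table stores NAD83 lat/long geometries and reusing the GRS 1980 spheroid string from the ST_Distance_Spheroid() example:

-- Sketch: spheroid-based line length without transforming the geometry.
-- ST_Length_Spheroid() returns meters; dividing by 0.3048 converts to feet.
SELECT name,
       ST_Length_Spheroid(geom, 'SPHEROID["GRS 1980",6378137,298.257222101]') / 0.3048 AS length_ft
FROM lines;

The values should be close to those returned by the earlier ST_Length(ST_Transform(geom,2260)) query, with small differences attributable to the projection.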
ECONOMICS (Set I — Compartment Delhi)

SECTION – A

Q. 1. Answer the following questions: 1x4
(i) Give meaning of opportunity cost.
(ii) Define production function.
(iii) Give meaning of producer's equilibrium.
(iv) Give one example of variable cost.
Q. 2. Explain the central problem of 'what to produce'. 3
Q. 3. What is the relation between the change in the price of a good and the change in demand for its substitute good? Explain with the help of an example. 1, 2
Q. 4. How is equilibrium price determined under perfect competition? Explain with the help of a diagram. 3
Q. 5. What happens to equilibrium price when there is a decrease in demand? Explain with the help of a diagram. 3
Q. 6. At a price of Rs. 4 per unit a consumer buys 50 units of a good. The price elasticity of demand is -2. How many units will the consumer buy at Rs. 3 per unit? 4
Q. 7. Given that Fixed Cost is Rs. 20, calculate (a) Total Variable Cost and (b) Total Cost from the following: 4
Marginal Cost (Rs.)
Q. 8. Explain the effects on output when all inputs are increased in the same proportion. 4
Q. 9. State any two features of monopolistic competition. Draw Average Revenue and Marginal Revenue curves of a firm in this market in a single diagram. 4
Or
State any three features of perfect competition. Also draw the Average Revenue curve of the firm in this market.
Q. 10. Explain briefly any three factors which lead to an 'increase in demand'. 6
Q. 11. Explain briefly any three determinants of supply of a good. 6
Or
Explain the Law of Variable Proportions. Also state the reasons behind the law.
Q. 12. What is 'revenue' of a firm? Give the meaning of Average Revenue and Marginal Revenue. What happens to average revenue when marginal revenue is (i) greater than average revenue, (ii) equal to average revenue and (iii) less than average revenue? 6

SECTION – B

Q. 13. Answer the following questions: 1x4
(i) Give meaning of macro economics.
(ii) Give one example of micro economics.
(iii) Define foreign exchange.
(iv) What is Balance of Trade?
Q. 14. Calculate Net Value Added at Factor Cost from the following: 3
(i) Purchases of materials
(iv) Excise tax
(v) Opening stock
(vi) Intermediate consumption
(vii) Closing stock
Q. 16. Explain briefly the meaning of involuntary unemployment and full employment. 3
Q. 17. Explain the relation between the foreign exchange rate and demand for foreign exchange. 3
Q. 18. Explain the 'unit of value' function of money. 4
Or
Explain the 'standard of deferred payment' function of money.
Q. 19. Explain the 'issue of currency' function of a central bank. 4
Q. 20. Explain revenue receipts in a government budget with appropriate examples. 4
Q. 21. Explain the concept of 'revenue deficit' in a government budget. 4
Q. 22. Distinguish between intermediate products and final products. Giving reasons, state whether the following are intermediate products or final products: 3, 3
(i) Purchase of equipment for installation in a factory
(ii) Purchase of food items by a hotel
(iii) Purchase of armaments by the military
Q. 23. Find out (a) National Income and (b) Gross National Disposable Income from the following data: 4, 2
(i) Private final consumption expenditure
(ii) Net current transfers from the rest of the world
(iii) Indirect tax
(iv) Net domestic capital formation
(v) Government final consumption expenditure
(vi) Consumption of fixed capital (depreciation)
(ix) Net factor income from abroad
Q. 24.
Explain the role of taxation and government expenditure in reducing aggregate demand in an economy. 6
Or
Explain the role of 'reserve ratio' and 'rate of interest' in reducing aggregate demand in an economy.
Here's all the evidence-based, expert advice to ensure your child's future health.

Scoliosis is an unnatural sideways curve to the spine. When viewed from behind, the spine should look straight. Doctors believe that as many as half of all individuals have some degree of scoliosis. Scoliosis can range from mild to severe. Severe scoliosis can be very painful and may require corrective surgery.

What causes scoliosis?
Doctors aren't sure exactly what causes scoliosis, but theories range from congenital factors (the condition is present at birth), to hormones, to genetics, to a weakness in the connective tissues that support the spine. Some severe cases of scoliosis may be the result of cerebral palsy and muscular dystrophy.

Is scoliosis serious?
Severe scoliosis can be painful and may require surgery to correct. Mild to moderate scoliosis may not be noticeable and does not cause any discomfort. In the worst cases, the spine and rib cage can compress the heart and lungs. Some people with untreated scoliosis can have chronic back pain and arthritis of the spine.

Can I prevent scoliosis?
Making sure that your child remains active in sport and other athletics may help prevent scoliosis, but there's no guaranteed way to prevent it.

How do I know if my child has scoliosis?
Just before puberty, children experience a growth spurt. This is the time when the symptoms of scoliosis begin to develop. Watch for the following signs:
- One shoulder tilted down towards a raised hip, as if the child is leaning sideways
- Prominent ribs
- A protruding shoulder blade
- Tilted waist
- Scoliosis is more pronounced when the child bends forward

How do I treat scoliosis?
Scoliosis is generally treated via one of two methods:
- Braces - to prevent further curvature of the spine and prevent severe scoliosis
- Surgery - used in extreme cases to straighten the spine

Should I call the doctor?
Your doctor can use x-rays to diagnose scoliosis. Make an appointment if you notice an unusual curve to your child's spine or if your child is complaining of discomfort.

What you need to know about scoliosis
- Scoliosis is an abnormal sideways curve to the spine.
- Girls are more likely to develop moderate to severe scoliosis.
- Scoliosis is treated using braces and surgery.

Find information on other conditions:
- Read more about muscular dystrophy
- Learn about juvenile rheumatoid arthritis
- More on bone and muscle conditions

Written by Rebecca Stigall for Kidspot, Australia's parenting resource for family health. Sources include Better Health Channel, NSW Health and Health Insite.
Last revised: Wednesday, 20 January 2010
This article contains general information only and is not intended to replace advice from a qualified health professional.
(Last Updated on: 08/05/2014) The Black Hole Tragedy is an incident that depicts the dark side of Indian history. On June 20, 1756, Siraj-ud-daulah, the then Nawab of Bengal, captured Fort William and Calcutta (Kolkata), where the main power of the British East India Company lay. The British and Anglo-Indian prisoners of war were thrust into a small and stuffy dungeon at Fort William after the fall of the fort, which is referred to as the Black Hole of Calcutta. 146 people were reported to be imprisoned. The British claimed that the dungeon, with a probable dimension of 24 x 18 feet, was not spacious enough to accommodate so many people, who were forcibly pushed into the congested place. The British records held that by the next morning 123 of the prisoners had succumbed to the adverse conditions, mainly due to suffocation, unbearable heat and crushing. John Zephaniah Holwell, one of the survivors, mainly provided this statistical information. But some say that the total number of captives was not more than 69. Indian troops took the surviving defenders prisoner. Among the prisoners were civilians as well as soldiers. Holwell and three other captives were sent as prisoners to Murshidabad; the rest of the survivors were released after the intervention and subsequent victory of Robert Clive. The controversies about the exact toll continue till date, and the exact figures are not known. The Black Hole of Calcutta was afterwards used as a warehouse, and an obelisk, 50 feet (15 m) high, was set up in remembrance of the dead. No traces of the black hole remain today.

There is significant history behind the capture of Fort William and the incident of the Black Hole of Calcutta. The British set up Fort William to safeguard the British East India Company's trade in the city of Calcutta and the region around Bengal. In 1756, with an aim to colonize Bengal and gradually the rest of India, and also to prepare for probable combat with the French forces, the British began strengthening the military defense of Fort William. In doing so, they interfered a great deal in the internal political and military affairs of Bengal. The ruling Nawab of Bengal, Siraj ud-Daulah, was unsatisfied with such excessive interference and saw it as a possible threat to the sovereignty of Bengal. He ordered the British to cease the ongoing military actions, but the British did not listen to him. As a result, to curb the atrocities of the British, the Nawab of Bengal stormed the fort and killed many. The battalion's chief officer planned an escape, and a token force was kept in the military fort under the control of John Zephaniah Holwell, who was a military surgeon as well as a top East India Company civil servant. In the meantime, the soldiers belonging to the allied troops, who were primarily Dutch, abandoned the fight, and the British ultimately failed to resist the attack of the Nawab.

Holwell had erected a memorial on the spot of the Black Hole of Calcutta to honor the dead, but around 1822 (the precise date is uncertain) it vanished. Lord Curzon, who became Viceroy of India in 1899, constructed a new monument in 1901 at the corner of Dalhousie Square, which is the probable site of the Black Hole. In 1940, when the Indian National Movement was at its peak, the monument was removed from Dalhousie Square in July of that year and re-erected in the graveyard of St John's Church, where it remains till date.
Higher Standards and Smarter Tests

High academic standards are a road map for educators and parents, showing what students need to learn each year to be ready for high school graduation, college, or careers. In 2009, our state adopted the Colorado Academic Standards. These standards, in 10 subject areas, have been implemented by educators across our state to ensure our kids have the 21st century skills they need to succeed. They are already helping make sure that, throughout their education, students are prepared for their next step. Ultimately, higher standards mean that a high school diploma represents readiness to succeed after high school.

Just like higher academic standards, statewide academic tests are an important, necessary part of a high-quality education system. Good assessments are valuable educational tools for students, parents, teachers, principals, districts, and the state. In 2012, Colorado adopted new tests aligned to the Colorado Academic Standards to measure how our students are doing. These new, smarter tests (which replaced our old tests) were designed by educators and experts to ensure that students are on track. These tests measure a wide range of skills that we know students need in the real world, like critical thinking, problem solving, and analysis. This new test is just one measure of a student's knowledge, and the results are another tool to help parents understand how their child is doing.

What does a smarter test look like?
For the second year, the HBCSD Board Members passed a resolution to recognize Dyslexia Awareness Month in October. HBCSD is continuing to raise awareness by sharing information on the website, sending Constant Contact notifications at the school site, providing professional development for staff, and working with the West Orange County Consortium for Special Education Community Advisory Committee.

The International Dyslexia Association (IDA) provides the following definition of dyslexia, which was adopted by the United States National Institute of Child Health and Human Development: Dyslexia is a specific learning disability that is neurobiological in origin. It is characterized by difficulties with accurate and/or fluent word recognition and by poor spelling and decoding abilities. These difficulties typically result from a deficit in the phonological component of language that is often unexpected in relation to other cognitive abilities and the provision of effective classroom instruction. Secondary consequences may include problems in reading comprehension and reduced reading experience that can impede growth of vocabulary and background knowledge.

Below is a link to the new California Dyslexia Guidelines prepared by the California Department of Education in August 2017. This is a rich source of information, including a description of dyslexia, assessment, and best practices to support students who are struggling with dyslexia. If you would like further information about dyslexia, please contact the Special Education Department at 714-378-2046.
How did Time Zones come to the US?: originally appeared on Quora.

How The Railroad Changed Time

Prior to 12 noon on November 18, 1883, time was usually determined locally, with most areas using the solar position as a reference and apparent solar time techniques. Each town had its own de facto reference: a clock maintained on a church steeple or city hall, or by a jeweler in a window or on an outside pedestal. There simply was no universal time standard and no clearly defined time zones from town to town and from state to state.

The practical concept of time meridians (time zones) was first credited to Dr. William Hyde Wollaston in the late 1700s. It was later popularized by Abraham Follett Osler in the late 1800s in Britain. This led to the formation of the Greenwich Mean Time (GMT) standard, with ship and rail chronometers set to the known GMT standard.

The first US national push for a universal time standard and time zones came from William Lambert, who in 1809 presented a report to Congress for the establishment of time zones. The proposal was not accepted, nor were revised versions by Charles Dowd in 1870 and 1872.

The expansion of the railroads pushing to the Pacific Ocean was the largest motivation for a universal time standard and time zones. To maintain a fairly accurate railroad schedule, a time standard was absolutely necessary. There were also major safety issues, as many trains would share a single track and thus exact time was critical. A number of notable train crashes could have been averted if a better time system had been adopted on a nationwide basis. It was in this spirit that on November 18, 1883 at high noon, all the major railroads set their clocks to a universal time standard and recognized the five railroad time zones.

Mo Town's, Mo Time

A notable exception was Detroit, which used a local time basis until 1900, when the City Council decreed that clocks should be put back 28 minutes to Central Standard Time. About half the city businesses obeyed, while many individuals refused. Some saw exact time as "dehumanizing" and used this as a reason to rebel. The decision was rescinded and the city reverted to solar time. After many railroad companies refused to use Detroit time, the city voted in 1905 to follow Central Standard Time.

On March 19, 1918, the US Congress passed the Standard Time Act. In 1966 the Department of Transportation was formed and took over the function of setting and modifying time zones in the US. The National Institute of Standards and Technology keeps the universal time standard for the US.

Standard Time Changed The Human Pace

It is hard to comprehend the world not being on a universal time standard today. Certainly commerce as we know it demanded it. But it is interesting that each little town kept its own time and, in some very meaningful way, kept its own pace. The industrial revolution certainly accelerated the pace. By having an exact time standard, humans added a very real and meaningful layer of stress to our lives that still impacts all of us today. The lack of an exact time standard allowed for far more flexibility in just about every aspect of everyone's life. A vast majority of people were perfectly comfortable dividing the day into 3 or 4 parts. What a different world it was, just a little more than 100 years ago.
Hawaii's hurricane season runs from June to November every year, but just how much of a threat to Hawaii are hurricanes? How frequent are hurricanes in Hawaii?

One might think that hurricanes are a severe threat to the Hawaiian islands, as many islands found in the middle of the Pacific are at high risk of being hit by hurricanes. Yet unlike many other islands, Hawaii actually sees hurricanes fairly rarely. There are a variety of reasons that Hawaii doesn't experience many hurricanes, including the islands' location and a nearby high-pressure area. Let's explore Hawaii's relationship with hurricanes in greater detail.

Where Is Hawaii?

The islands of Hawaii are situated at approximately 19.8968° N, 155.5828° W, in the middle of the Pacific Ocean between the Philippines and North America. Because of Hawaii's centralized location in the Pacific, it functions as a stopover for many planes and ships. It also functions as a major hub of transit for people and goods heading to regions on either side of the Pacific. Note Hawaii's GPS coordinates, as they are important for understanding its position relative to areas with frequent tropical storms and hurricanes.

Some Facts About Hurricanes And Their Formation

Hurricanes are only capable of forming over the warmer ocean waters near the Earth's equator. This is because the moist, warm air over the waters near the equator rises upwards and away from the surface, leaving a low air pressure area below the rising air. The air in surrounding regions of higher air pressure then rushes in to fill the low-pressure area, but that new column of air also becomes warm and moist and rises as well. The moist air continues to rise and cool off at higher altitudes, forming clouds. This cycle of warm moist air and descending cool air combines with the wind to create a spinning system of clouds, moisture, and air. Storms that form south of the equator spin clockwise, while storms that form north of the equator spin counterclockwise. As the wind speeds in the system continue to accelerate, they reach speeds which can destroy structures when the storm hits land. Hurricane-force winds can easily occupy an area with a radius approaching 200 miles, and a hurricane can last up to two weeks. The average duration of a hurricane is about six days, however.

Hurricanes are capable of heavily damaging coastal areas, causing mass flooding by raising sea levels up to 30 feet in the areas where they make landfall, pushing large amounts of earth and sand around, and throwing heavy or sharp objects through the air. Tropical cyclones occur in two latitude ranges, one between 4 and 22 degrees south latitude and the other between 4 and 35 degrees north latitude; most hurricanes, about two-thirds of them, occur in the northern hemisphere.

Thankfully, despite the damage that hurricanes can cause to life, property, and the environment, Hawaii actually experiences substantially fewer hurricanes than many other island regions or coastal areas. United States scientists have monitored hurricanes since around 1950, and during the six decades to follow, only approximately four hurricanes have hit the islands with enough force to do substantial damage. Most of these hurricanes occurred between June and November. Hurricanes are more likely to appear in the late summer because the surface of the ocean is warmer at this time of year, which creates low-pressure areas.
The Most Damaging Hawaii Hurricanes

Between four and six tropical cyclones form in the central Pacific every year, yet despite this, only four hurricanes have done substantial damage to the islands of Hawaii. These hurricanes were Iniki, Iwa, Dot, and Nina. Hurricane Nina formed south of Hawaii in November of 1957 and did approximately 100,000 dollars of damage to the islands. Hurricane Dot hit just south of Hawaii in August of 1959 and did a few million dollars in damage. Hurricane Iwa did approximately 250 million dollars of damage to Hawaii in November of 1982. Hurricane Iniki hit the islands in September of 1992 and caused approximately 3.1 billion dollars in damage. For a point of comparison, the mainland United States experiences about seven hurricanes every four years. Florida alone has experienced four different major hurricanes since 2010.

Why Is Hawaii Relatively Safe From Hurricanes?

One of the reasons that Hawaii doesn't experience many hurricanes is simply that it is a rather small target. The Pacific Ocean is vast and Hawaii is fairly small in comparison, meaning that it's relatively unlikely any individual hurricane will hit the island group. Yet there is another reason that Hawaii doesn't experience many hurricanes. As it happens, Hawaii is fortunate enough to have a high-pressure region located just to its northeast. This area of high pressure helps ensure that water temperatures are fairly stable over the course of the year, meaning that relatively few hurricanes form in the region. The high-pressure area essentially helps shield the islands from large, powerful storms.

This high-pressure area keeps the water near Hawaii cool, and since hurricanes need warm water to form, they are less likely to occur near the islands. The westerly currents near the islands also create wind shear that isn't very conducive to the formation of hurricanes. These combined effects mean that storms generally don't get too close to the islands, and if they do, they generally become substantially weaker.

The most common path for hurricanes moving through the central Pacific Ocean is to the south of Hawaii. The ocean is warmer in this southern region, close to the equator. Three of the four devastating hurricanes to hit Hawaii in the last 60 years formed to the south of Hawaii. Hurricane Iniki occurred when an aberration in the southern westerlies let the storm track to the north. However, environmental scientists worry that climate change could be destabilizing this high-pressure system, and as a consequence, Hawaii could see more devastating hurricanes in the future.
Technique could allow cells to be extracted in real time, help prevent cancer from spreading Researchers at UCLA and NantWorks have developed an artificial intelligence-powered device that detects cancer cells in a few milliseconds — hundreds of times faster than previous methods. With that speed, the invention could make it possible to extract cancer cells from blood immediately after they are detected, which could in turn help prevent the disease from spreading in the body. A paper about the advance was published in the journal Nature Scientific Reports. The approach relies on two core technologies: deep learning and photonic time stretch. Deep learning is a type of machine learning, an artificial intelligence technique in which algorithms are “trained” to perform tasks using large volumes of data. In deep learning, algorithms called neural networks are modeled after how the human brain works. Compared to other types of machine learning, deep learning has proven to be especially effective for recognizing and generating images, speech, music and videos. Photonic time stretch is an ultrafast measurement technology that was invented at UCLA. Photonic time stretch instruments use ultrashort laser bursts to capture trillions of data points per second, more than 1,000 times faster than today’s fastest microprocessors. The technology has helped scientists discover rare phenomena in laser physics and invent new types of biomedical instruments for 3D microscopy, spectroscopy and other applications. “Because of the extreme volume of precious data they generate, time-stretch instruments and deep learning are a match made in heaven,” said senior author Bahram Jalali, a UCLA professor of electrical and computer engineering at the UCLA Samueli School of Engineering and a member of the California NanoSystems Institute at UCLA. The system also uses a technology called imaging flow cytometry. Cytometry is the science of measuring cell characteristics; in imaging flow cytometry, those measurements are obtained by using a laser to take images of the cells one at a time as they flow through a carrier fluid. Although there are already techniques for categorizing cells in imaging flow cytometry, those techniques’ processing steps occur so slowly that devices don’t have time to physically separate cells from one another. Building on their previous work, Jalali and his colleagues developed a deep learning pipeline which solves that problem by operating directly on the laser signals that are part of the imaging flow cytometry process, which eliminates the time-intensive processing steps of other techniques. “We optimized the design of the deep neural network to handle the large amounts of data created by our time-stretch imaging flow cytometer — upgrading the performance of both the software and instrument,” said Yueqin Li, a visiting doctoral student and the paper’s first author. Ata Mahjoubfar, a UCLA postdoctoral researcher and a co-author of the paper, said the technique allows the instrument to determine whether a cell is cancerous virtually instantaneously. “We don’t need to extract biophysical parameters of the cells anymore,” he said. “Instead, deep neural networks analyze the raw data itself extremely quickly.” The code for the neural network was developed using an advanced graphics processing unit donated by NVidia. Other contributors to the study were Kayvan Reza Niazi of NantWorks and Claire Lifan Chen, a former UCLA doctoral student. NantWorks is a Culver City, California-based company.
Development Codes & Standards Moving development away from watercourses protects floodplains, which absorb water that supports groundwater recharge, growth of native and cultivated plants, habitat for wildlife, and can offer recreational opportunities. Aerial view of a floodplain area with damaged properties requiring remediation, and identifying the floodplain restoration area. Photo credit: USDA NRCS. Floodplain and erosion hazard codes are intended to protect public health and safety by regulating development in areas subject to inundation by a flood; typically a 100-year flood event. Compliance with federal law requires that the local regulatory agency conduct studies and submit floodplain maps to the Federal Emergency Management Agency (FEMA) in order for the information to be incorporated onto the FEMA Flood Insurance Rate Maps (FIRM). Developments in these mapped areas must obtain federal flood insurance. Communities may also designate locally regulated floodplains and institute erosion hazard setbacks where certain construction standards are required, such as riprap, concrete scour protection, and elevated building pads and utilities. Floodplains are dynamic natural systems that change over time in response to flooding and deposition of sediment. Benefits of adopting codes to restrict development in floodplains are that they reduce the risk and impact of flooding, reduce disturbance to natural drainage systems that have developed over time and naturally control flooding and erosion, preserve open space, and reduce potential new impermeable surfaces (roads and buildings) that increase runoff and erosion. Communities can also define and use floodplain and erosion hazard designations to protect and regulate significant “listed” watercourses, which may or may not have riparian habitat. Significant watercourses can be delineated in stormwater management plans and other studies, and protection of watercourses can be specified in policy documents and comprehensive plans. Case Study: City of Tucson Riparian Vegetation Preservation and Protection Codes and Standards The City of Tucson has adopted a number of policies, codes, and standards since the early 1980s to protect riparian vegetation along the City’s watercourses. These changes have been partly in response to public expectations about vegetation protection, which has increased over the years. In 2006, the City adopted Development Standard 9-06 to consolidate sections of the Tucson Code that address watercourse protection into one document. This provided a single application review process that was intended to help property owners and developers better understand the regulations. Sections that were consolidated are from codes regulating the following: 1) Floodplain and Erosion Hazard Management, 2) the Environmental Resource Zone (ERZ), and 3) Watercourse Amenities, Safety and Habitat (WASH). The Floodplain and Erosion Hazard Management Code states that development within the 100-year floodplain and erosion hazard area may not unnecessarily alter riparian habitat along the watercourse and its banks. “Necessary” alteration of habitat must be demonstrated by developers, given the many land use scenarios and configurations allowed under current or proposed zoning. Any necessary habitat disturbance must be mitigated pursuant to the WASH Regulations. The ERZ is an overlay zone that was adopted to preserve open space and critical riparian habitat on properties shown on ERZ overlay maps within the 100-year floodplain. 
Any development that encroaches into the floodplain requires a Resource Corridor Study and, if disturbance will occur, a mitigation plan. The WASH Regulations are intended to protect existing vegetation in “resource areas” along specific washes listed in the WASH code, to provide for the restoration of vegetation along disturbed wash reaches, and to aid groundwater recharge. Resource areas consist of wildlife habitat and vegetative resources within the wash banks and extend 50 feet from either side of the banks. Proposed development requires an inventory of the vegetative resources and wildlife habitat within the resource area. The WASH Regulations specify criteria for demonstrating why the resource area cannot be left undisturbed, as well as criteria for the mitigation plan required for any site disturbance. City of Tucson Department of Transportation - City of Tucson Riparian Vegetation Preservation and Protection Codes - Model Riparian Protection Ordinance developed by the Association of State Wetlands Managers - Riparian Buffer Protection: A Municipal Ordinance Perspective provides a general overview of items to be considered in drafting an ordinance.
This plant I am designing grows on Earth, in locations where competition between plants is fierce, water is scarce, and nutrients in the soil are diluted. The dilution requires the plant's roots to spread through at least 1 cubic meter of ground to be able to sustain the plant's life. This plant has developed peculiar traits to succeed in these environments:
- roots secrete a growth inhibitor, to prevent other plants from growing close by
- growth is fast in the first phase, to occupy the ground area until the roots can sustain life. Then it slows down.
- the plants growing in an area can synchronize their blooming and fructification over periods of 17 years, by means of chemicals secreted by the leaves.
- the seed of the plant requires a bath in hydrochloric acid in order to develop (basically it has to go through the stomach of an animal)

In order to ensure that seeds are spread away from the mother plant and to provide a boost of nutrients to the growing plant, the fruit of this plant is highly sweet, attracting animals from miles around thanks to its smell. Once eaten, the fruit releases, together with the seed, a poison into the stomach of the animal which stops peristaltic movements and kills the animal in 2-3 days. The rotting corpse will grant the growing seed the nutrients needed to develop its roots throughout the cubic meter mentioned above.

Is this a realistic mechanism?
Native to waters off southern and eastern Australia, leafy and weedy seadragons are closely related to seahorses and pipefish. Similar to seahorses, they have very long, thin snouts; slender trunks covered in bony rings; and thin tails. These tails, however, are not prehensile and cannot be used for gripping as with seahorses, which means seadragons do not contend well with strong currents and are often washed ashore after big storms. Both species are slow-moving and rely on their camouflage as protection against predation.

Leafy seadragons (Phycodurus eques), as the name implies, are covered with leaf-shaped skin appendages over their entire bodies, shaped to blend in with the seaweed they live among. These leaf-shaped appendages, called cirri, serve no purpose other than camouflage. The seadragons propel themselves slowly through the water using their transparent dorsal fins, steering with pectoral fins located on the neck behind the head, giving the appearance that they are nothing more than a floating piece of seaweed. They are not strong swimmers, given their hydrodynamic properties and the drag created by the cirri. Male leafy seadragons have a spongy brood patch under the tail where females deposit their eggs during mating. The eggs are fertilized during the transfer and the males incubate the eggs on the brood patch, the specialized patch area supplying the eggs with oxygen until they hatch. The young are released into the water after about four to six weeks and are completely independent after hatching. Leafy seadragons are brown to yellow in color with olive-tinted appendages, and they have the ability to change color to adapt to their surroundings. They grow to approximately 35 cm (14") in length. They eat crustaceans and plankton, as well as small fish, which they suck into their toothless mouths.

Weedy seadragons (Phyllopteryx taeniolatus) have fewer and smaller skin appendages than leafy seadragons. They inhabit areas of grassy seabed or any area on the reef colonized by seaweed, and are often found in floating debris in these areas. They are usually reddish in color with yellow spots and grow slightly larger than their leafy cousins, to eighteen inches (forty-six centimeters) long. They also have a long dorsal fin along the back for propulsion and small pectoral fins for steering. The adults are a reddish color with yellow and purple markings. Males are narrower and darker than females, and like seahorses they have a brood pouch in which the female deposits the eggs. The male then carries the eggs for around a month until they hatch and the fully independent weedy seadragons emerge. As with the leafy seadragon, they suck their prey of small crustaceans, plankton and small fish into their mouths.
This unit explores nine of the main systems of the body. Children learn the purpose of each body system, what the main organs of the system are and their specific functions, and how the organs work together in order for the system and body to function as a whole. This 68-page unit includes lesson extensions designed for students in Grades 7–8. The Human Body Part 2, which will be released at a future date, will cover the topics of genetics and illness and disease, and it will take a closer look at some of the different types of cells within the body.

Lesson 1 – Our Bodies
Lesson 2 – The Skeletal System
Lesson 3 – The Muscular System
Lesson 4 – The Respiratory System
Lesson 5 – The Circulatory System
Lesson 6 – The Nervous System
Lesson 7 – The Digestive System
Lesson 8 – The Urinary System
Lesson 9 – The Immune System
Lesson 10 – The Integumentary System

Class Requirements: Your child must be able to listen and also participate in discussions. Homework is required for Grades 7 and up in order to count for credit. Last time, I created a private messenger account to display the homework. The kids are to remind their parents to look at the information, but it is the child's responsibility to read and follow through.

Materials Needed: A 1"-2" ring binder containing wide-ruled paper as well as blank white paper, pencils, and colored pencils.
Everything You Need to Know About Malaria

Malaria is a serious illness caused by a parasite known as the Plasmodium parasite. It is typically transmitted when an infected mosquito bites a human. The infected mosquito carries the Plasmodium parasite, which gets transferred to the bloodstream when it bites the person. Once the parasite enters the body, it travels to the liver and later begins to affect the red blood cells. If not treated in time, malaria can lead to life-threatening complications.

Signs and Symptoms of Malaria

Typically, the symptoms of malaria develop within 10 days to 4 weeks of the person getting infected. In some cases, it may even take months for the onset of the symptoms. People who do experience malaria symptoms complain of the following:
- High fever
- Muscle pain
- Bloody stools

If the disease does not get proper treatment, it may lead to complications such as:
- Organ failure
- Swelling and rupture of the spleen
- Abnormally low blood sugar
- Breathing problems
- Cerebral malaria, characterised by coma and seizures

Causes of Malaria

The main cause of malaria is a parasite that gets transmitted through the bite of an infected female Anopheles mosquito. There are mainly four types of Plasmodium parasite that cause malaria in humans. These include:
- Plasmodium falciparum
- Plasmodium vivax
- Plasmodium ovale, and
- Plasmodium malariae

The parasite multiplies in the liver and then affects the red blood cells. Other than the bite of a mosquito, malaria can also spread through blood transfusion and other cases of needle sharing. It can also be transmitted from a mother to an unborn child.

When the symptoms of malaria occur, the best thing to do is to seek medical attention as soon as possible. During the doctor's visit, you will first have to go through a diagnosis. Below are the steps by which malaria can be diagnosed:
- The doctor begins with a physical examination to check the symptoms and will ask for your medical history.
- To confirm the presence of the parasite, certain blood tests will also be recommended. The tests will also help determine the type of malaria and how much it has affected the patient.
- If the tests show the presence of a malaria infection, the doctor will then begin the suitable treatment.

The treatment of malaria usually involves various medicines that destroy the parasite. The medicine prescription may vary depending on the type of malaria, the patient's age and the severity of the symptoms. These factors also determine whether the patient must get the treatment at the hospital or at home. In most cases, doctors advise patients to get treatment at the hospital under the supervision of health experts.

Home Remedies for Malaria

Other than medicines, various home remedies prove helpful in treating malaria. Some of the common effective home remedies that can alleviate the symptoms are as follows:
- Cinnamon: Since it is anti-inflammatory and antioxidant, cinnamon helps in treating malaria symptoms. Just add cinnamon to hot water along with honey and black pepper, and drink the solution once or twice a day.
- Turmeric: Turmeric helps in flushing out toxins from the body while killing the malaria parasite. You can drink it with milk every night.
- Orange Juice: This one boosts immunity and can help in fighting off the malaria parasite. Orange also reduces fever and makes you feel better.
- Ginger: Ginger and honey in water can provide relief from nausea and pain during malaria.
- Holy Basil: These are quite beneficial for treating malaria symptoms. You can consume them in raw form or make a concoction to drink regularly. Other than these, you can also use grapefruit, lemon and apple cider vinegar to fight the infection and reduce the painful symptoms. Prevention of Malaria While the treatment of malaria is easily available, you must follow preventive measures to ensure the infection does not occur in the first place. This becomes much more important if you live or are travelling in an area where malaria is common. Below are some of the common steps and precautions you can take to avoid malaria: - Cover the Skin: Always wear fully-covered clothes with long sleeves to avoid mosquito bites. - Use Mosquito Repellent: There are various insect repellents and sprays available that keep the mosquitoes away. - Use a Net: You can use bed nets while sleeping to prevent mosquito bites. - Preventive Medicine: In case you are travelling to an area where malaria is common, consult your doctor for a medicine prescription that can prevent the disease. Malaria is a preventable disease, and with more attention and resources, its prevention, control and treatment are possible. Most individual and family health insurance plans cover malaria in the basic plan that takes care of the medical expenses incurred on its treatment.
The Benefits of Learning Music Engaging children in music from a young age nurtures creativity and promotes self-expression. As their musical talent grows, children build up essential skills which can easily be transferred into the classroom to improve performance: - Self-discipline – Learning an instrument requires regular practise, teaching children to enjoy the process as much as the final result. - Perseverance – Pupils learn to persevere when things get tricky, whether that’s when faced with a difficult piece of music or a challenging homework task. - Confidence – Performing in front of an audience builds confidence and belief in one’s own abilities. Miss Pattison says, “We give every single child a solo. By doing this, we normalise performance and provide an introduction to performance nerves.” - Improved focus and concentration – Studying music can enhance concentration, helping pupils to maintain focus in lessons. - Positive wellbeing – Research has shown that listening to and creating music can reduce stress levels and increase cognitive ability. Miss Pattison says, “Learning an instrument requires tenacity and commitment, an especially important lesson for children in a world where we are all accustomed to instant gratification at the touch of a button. The journey to proficiency on a musical instrument may be challenging, but it’s also more rewarding and fulfilling.” The Elms Offering Music at The Elms inspires every day with pupils benefiting from specialist teaching in our dedicated Music School. In addition to a varied music curriculum, pupils can develop their instrumental skills in a fun and social environment alongside other pupils by joining one of our 29 school ensembles. “It is the vast opportunities to play and perform music, as well as our school culture, that really helps children to progress,” Miss Pattison explains. “Performances are essential for encouraging and celebrating pupil development.” A Musical Curriculum All Elms pupils from Reception to Year 6 enjoy a weekly music lesson, however the focus of these lessons varies according to age. - Reception – At the start of their musical journey, we encourage your child to find their singing voice with repetitive rhymes and action songs. They will begin to sense pulse and link music to movement. - Year 1 – Your child will be encouraged to play percussion instruments, both to improve their performance skills and explore ways of making music. Early work on beat, pitch and rhythm lays the foundations for learning a musical instrument. - Year 2 – Your child will begin learning their first instrument, the recorder, and access formal notation for the first time. - Years 3-6 – Music lessons in the Junior School will strengthen your child’s rhythmic, pitch and dynamic awareness. The widening vocal range facilitates singing in more than one part. They will explore the works of great composers, develop their appraisal skills by analysing different sounds and explaining their ideas and feelings using theoretical vocabulary. Your child will develop their creativity by selecting, combining and organising musical ideas within musical structures. Passionate musicians also have the opportunity to advance their instrumental skills through one-to-one private tuition delivered onsite during school hours. Music for Every Child Inclusivity is at the heart of our music provision, which is why we sponsor up to one year of instrumental tuition for every child. 
This orchestral initiative is available to all pupils when they reach Year 3 as well as new starters in Years 3 to 6; it is our way of ensuring that every child can experience the joy of creating music. For many pupils, their participation in the programme sparks a life-long love of music and an affinity to their chosen instrument. Your child can choose to receive tuition in one of the following instruments: - Double bass - French horn All instrumentalists take external music examinations in Years 4, 5 and 6, and our excellent pass rate in these exams is the result of an unwavering commitment to musical excellence at The Elms. By the time pupils finish Year 6, many have reached Grade 4 in their instruments, an impressive achievement considering Grade 5 is the benchmark of the performance aspect at GCSE level. We believe that music is an experience best shared. Our pupils enjoy making music together and learn to work collaboratively with one another in the process. A variety of performance opportunities throughout the school year enable musicians to share their progress with parents and the school community. “Music at The Elms is a real team effort, with parents, teachers and pupils all in it together,” Miss Pattison says. With a reputation for excellent music provision at regional and national level, The Elms helps aspiring musicians go further. Learn more about how your child could flourish at The Elms by booking a private tour of our school with our Admissions team.
Pediatric Brain Tumors

What are brain tumors?

A brain tumor occurs when there is a genetic alteration in the normal cells in the brain. The alteration causes the cells to undergo a series of changes that result in a growing mass of abnormal cells. Primary brain tumors involve a growth that starts in the brain, rather than spreading to the brain from another part of the body. Brain tumors may be low grade (less aggressive) or high grade (very aggressive). The cause of primary brain tumors is unknown, although some tumors have germ line mutations and tend to be hereditary. The majority result from somatic mutations and are not hereditary.

Central nervous system tumors (tumors of the brain and spine) are the most common solid tumor in children. There are approximately 4,500 new brain tumors each year, and they are the most common cause of cancer deaths.

The majority of pediatric tumors are in the posterior fossa (60 percent). The most common tumors, in decreasing frequency, are: medulloblastoma, juvenile pilocytic astrocytoma (JPA), ependymoma, diffuse intrinsic pontine glioma (DIPG), and atypical teratoid rhabdoid tumor (ATRT). The other 40 percent of pediatric brain tumors are in the cerebral hemispheres of the brain. These include astrocytomas, gangliogliomas, craniopharyngiomas, supratentorial primitive neuroectodermal tumors (PNET), germ cell tumors, dysembryoplastic neuroepithelial tumors (DNET), oligodendrogliomas, and meningiomas.

Gliomas

The most common type of brain tumor at all ages is a glioma. Gliomas consist of glial cells, which form the supportive tissue of the brain. The two major types of glial tumors are astrocytomas and ependymomas.
- Astrocytoma. Astrocytomas are the most common type of childhood glioma and favor the nervous system. They typically occur in the cerebellum, a part of the brain that coordinates voluntary muscle movements and maintains posture, balance and equilibrium. The majority are curable by surgery. Astrocytomas may arise in the optic nerve, especially in children with neurofibromatosis. Children may also suffer from gliomas in the brainstem, at the base of the brain.
- Malignant Gliomas. These tumors, including the anaplastic astrocytomas and glioblastomas, can develop anywhere in the brain and are much more aggressive than astrocytomas. They are never cured by surgery alone and require combination therapy with radiation and chemotherapy.
- Ependymoma. This type of glial tumor usually arises from the cells lining the ventricles (the cerebrospinal fluid-filled cavities in the brain). Often slow growing, glial tumors may reoccur after treatment.

Mixed neuronal-glial tumors

Tumors containing a mix of glial cells (most commonly astrocytes) and neurons (ganglion cells) occur more often in children than in adults. They may develop anywhere in the nervous system but most typically appear in the cerebrum, an area of the brain involved in motor function and personality. Surgery to remove mixed neuronal-glial tumors often is effective.
- Ganglioglioma. This is the most common of the mixed neuronal-glial tumors and generally appears in childhood or the early teen years. The majority are benign and can usually be treated successfully by surgery.
- Subependymal giant cell tumor. These tumors are common in children who have a genetic condition called tuberous sclerosis. These tumors are rarely malignant.
- Pleomorphic xanthoastrocytoma. These tumors are most commonly seen in teens or young adults; most are benign.
Embryonal tumors

Up to 25 percent of nervous system tumors that occur in infants and children are tumors made up of poorly differentiated neuroepithelial cells. When the nervous system develops, neuroepithelial cells are those that differentiate into glial (supportive tissue) and nerve cells. The two main types of embryonal tumors are:
- Primitive neuroectodermal tumor (PNET). This most common embryonal tumor can arise anywhere in the nervous system but typically appears in the cerebellum. When this happens, it is called medulloblastoma. New advances in therapy have made treatment more effective for these tumors.
- Atypical teratoid/rhabdoid tumor. Ninety percent of patients with these tumors are age 2 or younger. Approximately 90 percent of these tumors have a chromosomal abnormality involving chromosome 22. The tumors may arise anywhere in the nervous system but typically appear in the cerebellum. They may also appear in the kidneys of infants. At the time of diagnosis, about one-third of these tumors have spread throughout the nervous system.

Choroid plexus papilloma/carcinoma

These tumors may also be found in the ventricles. They may be benign or malignant, and may spread throughout the nervous system. Choroid plexus papillomas/carcinomas are filled with blood vessels (vascular), making them difficult to remove because of their tendency to bleed.

Tumors arising from non-neuroepithelial tissue

The intracranial (inside the skull) and intraspinal (within the spine) cavities contain tissues and structures that may give rise to tumors, a number of which are more common in children than adults. These tumors include:
- Craniopharyngioma. These benign tumors are thought to originate from residual tissue left behind following the development of the head. Because they occur at the front base of the brain near the pituitary gland and optic nerves, they may cause serious neurological and endocrine problems. Surgery may not be able to completely remove them.
- Pineal region tumors. These tumors can arise near the pineal gland, deep within the brain. The most common type, germinoma, is treated with radiation.

The brain and spinal cord are covered with membranes called the dura mater, arachnoid and pia mater. Tumors called meningiomas may develop in these membranes, but are more common in adults than children.

Signs and symptoms

The symptoms of a pediatric brain tumor vary according to the size, type and location of the tumor. Symptoms may occur when a tumor presses on a nerve or damages certain parts of the brain. They may also occur when the brain swells or there is fluid buildup in the skull. The most common symptoms include:
- Headaches (usually worse in the morning)
- Nausea or vomiting
- Changes in speech, vision or hearing
- Problems balancing or walking
- Changes in mood, personality or ability to concentrate
- Problems with memory
- Muscle jerking or twitching (seizures or convulsions)
- Numbness or tingling in the arms or legs

After taking a complete medical history and doing a physical examination of your child, we may use the following diagnostic tests to determine if a brain tumor is present:
- Neurological exam. Your child's physician will test reflexes, muscle strength, eye and mouth movement, coordination and alertness.
- Computerized tomography scan (CAT scan or CT scan). This imaging procedure uses a combination of X-rays and computer technology to produce cross-sectional images (called "slices"), both horizontal and vertical, of the bones, muscles, fat and organs.
- Magnetic resonance imaging (MRI).
This imaging procedure uses a combination of large magnets, radiofrequencies and a computer to produce detailed images of organs and structures within the body.
- X-ray. This imaging test uses invisible electromagnetic energy beams to produce images of internal tissues, bones and organs onto film.
- Bone scan. This imaging test takes X-rays of the bones after a dye has been injected that is absorbed by bone tissue.
- Angiogram. This imaging test uses a dye to visualize all the blood vessels in the brain to detect certain types of tumors.
- Lumbar puncture/spinal tap. For this procedure, a special needle is placed into the lower back and into the spinal canal around the spinal cord. A small amount of cerebrospinal fluid, which surrounds the brain and spinal cord, can be removed and sent for testing.

Surgery for pediatric brain tumors

Surgery is usually the first step in treating brain tumors in children. Our goal within the Pediatric Surgical Oncology Program is to remove all or as much of the tumor as possible while maintaining neurological function. Pediatric brain tumor patients have a particular advantage when coming to CHOP because of the extensive experience of our neurosurgeons and the close collaboration between neurosurgery, neuro-oncology, radiation oncology and diagnostic radiology.

Certain types of brain tumors located near the bottom of the skull, also called skull-base tumors, can be removed through the nose using tools called endoscopes. Because the base of the skull is close to the nostrils and roof of the mouth, your child's surgeon can access the tumor more easily and safely with endoscopic endonasal surgery by going through the nostrils, minimizing the need for more invasive procedures.

Surgery is also performed for a biopsy (a sample of tissue taken to examine the types of cells found in the tumor). This helps establish a diagnosis and treatment plan. This is frequently done when the tumor is surrounded by sensitive structures that may be damaged by surgical removal.

At Children's Hospital of Philadelphia, every patient being treated for a brain tumor is offered the opportunity to have their tumor entered into a CHOP-led international tumor bank to help accelerate discovery and improve care.

Other therapies used to treat brain tumors include:
- Chemotherapy (cancer drugs)
- Radiation therapy (high-energy rays that kill or shrink cancer cells)
- Proton therapy (a precise form of radiation therapy that is less damaging to surrounding tissue)
- Steroids to treat and prevent swelling in the brain
- High-dose chemotherapy, stem-cell rescue and blood and marrow transplantation
- Supportive care for the side effects of the tumor or treatment
- Rehabilitation to regain lost motor skills and muscle strength
- Continuous follow-up care to manage disease, detect recurrence of the tumor and manage late effects of treatment

As with any cancer, prognosis and long-term survival vary greatly from child to child. Prompt medical attention and aggressive therapy are important for the best prognosis. Continuous follow-up care is essential for a child diagnosed with a brain tumor, because the side effects of radiation and chemotherapy, as well as second malignancies, can occur in survivors of brain tumors. Rehabilitation for lost motor skills and muscle strength may be required. Children's Hospital speech, physical and occupational therapists specialize in the unique needs of children undergoing this type of rehabilitation.
Late effects/cancer survivorship Some children treated for a pediatric brain tumor may develop complications years later. Our Cancer Survivorship Program provides information about the potential long-term effects of the specific treatment your child received, including ways of monitoring and treating these effects. Reviewed by Phillip B. Storm, MD
Biomass, or bioenergy, creates energy by burning living materials like plants and trees. The wood pellet industry uses trees to make wood pellets. It then ships them to Europe and Asia where they’re burned in power plants to create electricity. Wood pellet plants are as dirty and problematic as coal plants. Burning trees (via burning wood pellets) is not a climate solution. Here’s why. What do forests provide? Southern forests provide many benefits to humans. They provide us with oxygen to breathe, clean water, and homes for animals. Forests also provide us with food, medicine, and materials for clothing and shelter. Forests are a vital part of our planet and are necessary for our survival. Forests provide clean water There are so many examples of how forests influence our access to clean water that it’s hard to choose a few. But here are the highlights: - In areas with clearcuts, changes in hydrology mean that flooding spreads more quickly and is more dangerous. - Forests can decrease flooding after a hurricane or storm. - Clearcutting can cause leaching of nitrogen pollutants into water runoff. - Salvage logging (logging after a fire) has a negative effect on regulating ecosystem services like water quality and soil quality. Forests provide carbon storage Forests play a critical role in the global carbon cycle by storing carbon in their trees and soils. Deforestation, however, can release stored carbon back into the atmosphere, which contributes to climate change. - Logging can cause a carbon deficit for up to 200 years as the trees regrow. - Converting old forests to young plantations in western Oregon and Washington has released 1.5-1.8 million metric tonnes of carbon to the atmosphere. - Around 60% of the carbon lost through deforestation and harvesting from 1700 to 1935 has not yet been recovered in our forests. - Logging causes significant soil carbon loss, especially in the forest floor soil layer. - As the number of species (biodiversity) of trees increases, so does carbon storage. Forests help keep regional temperatures steady and safe Forests play an important role in keeping regional temperatures steady and safe. By absorbing carbon dioxide and releasing oxygen, forests help keep the Earth’s atmosphere in balance. They also provide shade and cooler temperatures in the summer. They also buffer against extreme weather conditions. - A healthy forest can lower average temperatures in the region. - Forests may buffer the impacts of global warming on plants and animals that live within. - Some forests may lose their buffering ability when faced with drought. - Old-growth forests mediate temperatures, which can help rare and endangered bird species. Forests provide homes for wildlife Forests provide homes for wildlife in many ways. One way is that forests provide shelter and food for animals. Forests also shield animals from predators, the impacts from humans (like development and roads), and even from rapidly changing temperatures. When we lose forests, animals lose their refuge. - Deforestation can displace predators like coyotes and bears, leading to increased human-wildlife conflict and car collisions. - Wildlife conservation via habitat conservation has the potential to provide sustainable jobs and conserve species at the same time. - Deforestation causes declines in many rare, threatened, and endangered species, while species that thrive in edge habitats may flourish. What is the scale of forest degradation in the Southeastern United States? 
Forest degradation is the process of harming or diminishing the quality of a forest. This can be caused by a variety of factors, such as logging, climate change, and more. Forest degradation can impact benefits that forests provide in a number of ways. In the US, forest degradation is one of the most common impacts on forests. - The US South is known as the “wood basket of the world.” In 2011, the US South was responsible for harvesting 63% of the total timber volume in the US. - A focus on timber extraction has led to forest degradation across the South. Over half of forest stands are less than 40 years old. - In the South, the most recent USFS numbers reveal that 226 million dry tons of wood are extracted annually in 14 states. The number one cause of carbon emissions from US forests is logging Based on the news, you would think that forest carbon loss was mostly coming from tearing down forests for homes or watching them burn in wildfires. But, the number one cause of carbon emissions from US forests is logging. Logging destroys forest carbon stocks and many other benefits. - 85% of carbon emissions from forests were attributed to logging. This is five times more than carbon lost through fires, insects, disease, drought, or development combined. - Over half of forest tree cover loss was due to logging in North America. The number creeps higher when you focus in on the US South. - Scientists agree that our forests need to absorb more carbon and quickly if we’re to avoid the worst impacts of climate change. Burning trees for energy is destroying forests The US South exports more wood pellets than anywhere else in the world. These wood pellets are usually burned alongside coal in the name of “renewable” energy. - For each ton of wood pellets produced, 24,000 acres of forest are destroyed. Read the fact sheet here. - Over a million acres in the US have already been cut for wood pellets. Planting trees does not help forest regrowth While landowners may replant destroyed trees, they are not required to do so by law. Planted trees (tree farms) take a long time to grow and do not offer the same benefits as natural forests. - Across the South, natural forests tend to store more carbon per acre than planted tree stands. - Planting trees may not lead to carbon benefits, even after several decades. How does the biomass energy industry impact the environment? There are now more frequent natural disasters, including hurricanes, floods, heat waves, and wildfires. Scientists have linked these events to increasing carbon dioxide in the atmosphere. The wood pellet industry is bad for the environment, just like coal or natural gas. The biomass energy industry is destroying forests that help protect us from natural disasters. - The IPCC says that if bioenergy competes for land, it would negatively impact all other land use – like for food. - Logging for wood pellets makes forest soils lose carbon. This means that bioenergy harvest can worsen climate change. - Logging can lead to losses in amphibians, small mammals, and other forest-dependent species. The biomass energy industry is not “good” renewable energy Renewable energy industries like wind and solar produce fewer carbon emissions per unit of electricity generation. This is because the carbon emissions associated with wind and solar are mostly during the construction phase. Unlike wind and solar, burning biomass requires a consistent supply of trees. Over time, the carbon debt from biomass energy continues to increase. 
- Both solar and wind-powered energy have a consistently lower “global warming potential” than energy produced from wood pellets. - While the cost of solar technology decreases every year, the cost of burning wood pellets will always depend on transportation and manufacturing costs. The biomass energy production process uses fossil fuels Harvesting and transporting forest biomass for wood pellets requires fossil fuels. Fossil fuels via diesel and gasoline fuels are used to operate forestry equipment and transport vehicles. Other energy sources, like natural gas, may be used to power machinery at the biomass energy production plants. The amount of energy used to produce wood pellets varies, but can range from 5-20% of the total carbon emissions generated by the entire lifecycle. Many biomass plants will proudly state that their wood pellets are carbon neutral. However, this is based on several key assumptions about the manufacturing process. First, many lifecycle analyses assume that biomass plants only use tops and limbs from trees being harvested for other purposes. While this may have been true many years ago, since then, the demand for wood pellets has increased exponentially. Biomass plants are using whole trees and trunks to make wood pellets – not just tops and limbs. Second, lifecycle analyses assume that trees somewhere in the same region will grow enough to offset the carbon lost through clearcutting. But it’s not fair to expect someone else’s trees to offset the carbon lost through your forest clearing. Instead, lifecycle analyses should be focusing on the areas where the clearcutting occurred. Through that lens, it takes 40-100+ years for the carbon to be reabsorbed by new trees. Unfortunately, we don’t have 40-100+ years to combat climate change. - Burning wood pellets produces about 1.5x the amount of carbon dioxide that burning coal does. - It can take between 40-200 years for trees to grow enough to offset a previous logging event. The biomass industry unfairly targets environmental justice communities As the renewable energy boom continues to grow across the world, we must examine the true implications of energy projects like these. Environmental racism is when specific communities, like BIPOC (Black, Indigenous, or People of Color) neighborhoods, are subjected to more environmental harms than non-BIPOC communities. Learn more about this environmental racism. In the US South, wood pellet facilities that use forest biomass are often placed in low-income communities of color. These communities are more likely to experience environmental injustice. They often already have other sources of pollution in or near their communities like: - Natural gas pipelines or compressor facilities - Industrial train stations for transporting goods, not people - CAFOs – concentrated animal feeding operations - Other large manufacturing facilities - Coal ash or coal power plants These other polluters contribute to hazardous air pollutants and even greenhouse gas emissions. Given this pattern of injustice, why does the biomass industry continue to operate in these communities? The biomass industry helps companies burn more coal Many wood pellets are “co-fired” alongside coal in traditional power plants. Power plants can burn wood pellets alongside coal to get renewable energy credits. In other words, they can get credit for “green” energy while still burning coal. 
When power plants are allowed to "offset" greenhouse gas emissions by co-firing dirty coal with wood pellets, they are putting more and more carbon into the atmosphere. This is the opposite of the "carbon neutral" operation these companies claim to run. If you live near a wood pellet plant, you deserve to know your risks. Learn more about the air pollution that biomass causes. Share this fact sheet about the relationship between biomass and forests with your friends.
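To make the carbon-debt argument from the lifecycle discussion above concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is a placeholder assumption chosen only to illustrate the arithmetic; the 1.5x pellets-versus-coal ratio echoes the figure cited earlier in this fact sheet, but none of these values comes from a specific study.

# Toy carbon-debt calculation -- every number here is an illustrative assumption.
PELLET_EMISSIONS_T_PER_MWH = 1.5       # assumed CO2 per MWh from burning wood pellets
COAL_EMISSIONS_T_PER_MWH = 1.0         # assumed CO2 per MWh from burning coal
REGROWTH_UPTAKE_T_PER_MWH_YEAR = 0.02  # assumed CO2 re-absorbed per year, per MWh of harvest

annual_generation_mwh = 1000
debt_t = annual_generation_mwh * PELLET_EMISSIONS_T_PER_MWH        # emitted immediately
uptake_t_per_year = annual_generation_mwh * REGROWTH_UPTAKE_T_PER_MWH_YEAR
payback_years = debt_t / uptake_t_per_year

print(f"Pellets vs coal (assumed): {PELLET_EMISSIONS_T_PER_MWH / COAL_EMISSIONS_T_PER_MWH:.1f}x CO2 per MWh")
print(f"Carbon debt from one year of generation: {debt_t:.0f} t CO2")
print(f"Years of regrowth needed to repay it: {payback_years:.0f}")

With these placeholder values the payback works out to roughly 75 years, which is the shape of the argument above: the emissions happen immediately, while regrowth recaptures the carbon only over decades.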
For about eight months of the year, the Kolyma River is frozen to depths of several meters. But every June, the river thaws and carries vast amounts of suspended sediment and organic material into the Arctic Ocean. That surge of fresh, soil-laden water colors the Kolyma Gulf (Kolymskiy Zaliv) dark brown and black. This image from the Operational Land Imager on the Landsat 8 satellite shows the "blackwater" stream on June 16, 2019. Note that the East Siberian Sea remains covered with ice.

The Kolyma is the largest river system underlain by continuous permafrost. It is primarily fed by spring snowmelt and summer rainfall. The largest discharges usually occur in June, after the snow and ice start to thaw. The river has a mean discharge of about 136 cubic kilometers of water per year—making it one of the six largest rivers draining into the Arctic Ocean. Discharge levels and streamflow can be influenced by variations in climate but also by human impacts. After the addition of a dam in 1986, researchers noted fluctuations in discharge in different sections of the river, which can affect vegetation patterns, ocean salinity, and Arctic sea ice formation. Researchers have also examined the concentration and composition of dissolved organic matter in the Kolyma River and found humic substances—organic compounds that make up the major organic component of soil—during the spring thaw. They also collected samples from two of the Kolyma's tributaries and found carbon-rich permafrost as old as the Pleistocene epoch. Locally known as yedoma, this permafrost contains large concentrations of organic matter. Permafrost degradation caused by climate change could expose more ancient organic matter to the river system.

The Kolyma region has a long history of human activity. Under Joseph Stalin's rule in the mid-1900s, Kolyma was the site of notorious Gulag labor camps for gold mining, road building, lumbering, and construction. In the much deeper past, more than 10,000 years ago, the land was occupied by ancestors of Native Americans, according to geneticists and archaeologists.

NASA image by Norman Kuring/NASA's Ocean Color Web, using Landsat data from the U.S. Geological Survey. Story by Kasha Patel.
What is Sustainable Living and Lifestyle? Sustainable living refers to a way of life that focuses on reducing our impact on the environment and preserving natural resources for future generations. It involves making conscious choices and adopting practices that promote environmental, economic, and social sustainability. Sustainable living encompasses various aspects of our lives, including our consumption habits, energy usage, waste management, transportation choices, and more. Why is it Important? Embracing sustainable living is crucial for several reasons. Firstly, it helps to mitigate the adverse effects of climate change by reducing greenhouse gas emissions and minimizing resource depletion. Secondly, sustainable living promotes a healthier and safer environment for all living beings, ensuring clean air, water, and land. It also fosters biodiversity and protects ecosystems. Lastly, sustainable living contributes to a more equitable society by addressing social and economic inequalities and promoting social justice. Adopting Eco-Friendly Practices One of the fundamental principles of sustainable living is reducing consumption. This can be achieved by practicing minimalism, buying only what is necessary, and avoiding excessive consumerism. It involves making conscious choices about the products we purchase, considering their environmental impact, and opting for sustainable alternatives such as second-hand or locally sourced items. Home Design & Building Building sustainable homes is essential for reducing our carbon footprint and minimizing energy consumption. This can be achieved through energy-efficient designs, using renewable building materials, and incorporating features like solar panels, rainwater harvesting systems, and green roofs. Sustainable homes also prioritize natural lighting, ventilation, and insulation to reduce the need for artificial heating and cooling. Renewable Energy Sources Transitioning to renewable energy sources is crucial for a greener future. Solar, wind, hydro, and geothermal energy are sustainable alternatives to fossil fuels. Installing solar panels on rooftops, utilizing wind turbines, and investing in community-based renewable energy projects are some ways to embrace renewable energy sources and reduce reliance on non-renewable resources. Sustainable agriculture practices focus on producing food in an environmentally friendly and socially responsible manner. This involves minimizing the use of chemical fertilizers and pesticides, promoting organic farming methods, conserving water, and protecting soil health. Sustainable agriculture also emphasizes biodiversity conservation, crop rotation, and supporting local food systems. Recycling & Waste Management Proper waste management is crucial for sustainable living. Recycling helps to conserve resources, reduce landfill waste, and minimize pollution. It involves separating recyclable materials from general waste and ensuring they are processed correctly. Additionally, composting organic waste can help reduce methane emissions and produce nutrient-rich soil. Conserving water is essential in the face of increasing water scarcity. Sustainable living practices include installing water-efficient fixtures, collecting rainwater for irrigation, and practicing mindful water usage. Simple actions like turning off faucets while brushing teeth or using native plants in landscaping can contribute to water conservation efforts. Addressing climate change is a central aspect of sustainable living. 
This involves reducing greenhouse gas emissions by transitioning to renewable energy sources, supporting sustainable transportation options, and advocating for climate policies. Additionally, individuals can reduce their carbon footprint by practicing energy conservation, adopting a plant-based diet, and supporting climate-conscious initiatives. The fashion industry is known for its significant environmental and social impact. Embracing ethical fashion involves choosing sustainably produced and ethically sourced clothing, supporting fair trade practices, and promoting the use of organic and recycled materials. It also includes extending the lifespan of clothing through repair, upcycling, and second-hand shopping. Traveling sustainably is crucial for minimizing the environmental impact of tourism. This can be achieved by opting for eco-friendly modes of transportation like cycling, walking, or using public transport. It also involves supporting eco-conscious accommodations, practicing responsible tourism, and minimizing waste generation during travel. Advancements in green technology play a vital role in achieving a greener future. This includes innovations in renewable energy, energy-efficient appliances, smart grids, electric vehicles, and sustainable building materials. Embracing and supporting green technology can significantly contribute to reducing our carbon footprint and promoting sustainability. The Imperative Shift Towards Sustainability The Challenge of Change Transitioning towards sustainable living presents certain challenges. It requires a shift in mindset, breaking away from conventional consumerist habits, and embracing more sustainable alternatives. Additionally, systemic barriers such as limited availability and affordability of sustainable products and services can hinder progress. Overcoming these challenges requires collective action, awareness, and policy changes. Overcoming Systemic Barriers To overcome systemic barriers, it is essential to advocate for policies that support sustainability. This includes promoting renewable energy incentives, implementing waste management programs, and supporting sustainable agriculture practices. Additionally, educating and raising awareness among individuals and communities about the benefits of sustainable living can help overcome barriers and foster change. The Power to Change Every individual has the power to make a difference and contribute to a greener future. By adopting sustainable practices in our daily lives, we can collectively create a significant impact. Whether it's choosing sustainable products, reducing waste, conserving energy, or supporting sustainable companies, every action counts. Small changes at an individual level can inspire others and create a ripple effect towards a more sustainable society. A Greener Future with Sustainable Living Benefits of Adopting Sustainable Practices There are numerous benefits to embracing sustainable living. Firstly, it helps protect the environment by reducing pollution, conserving natural resources, and preserving biodiversity. Secondly, sustainable practices promote better health by reducing exposure to harmful chemicals and creating cleaner living environments. Additionally, sustainable living can lead to cost savings by reducing energy consumption, water usage, and waste generation. The Role of Companies in Achieving Sustainability Companies play a crucial role in achieving sustainability. 
By adopting sustainable practices in their operations, supply chains, and product offerings, companies can significantly contribute to a greener future. This includes implementing energy-efficient measures, reducing waste, supporting renewable energy, and promoting ethical and sustainable sourcing. Many companies are already taking steps towards sustainability, and their efforts should be recognized and supported. Embracing sustainable living is essential for creating a greener future. By adopting eco-friendly practices in our daily lives, we can collectively reduce our environmental impact, preserve natural resources, and promote a more equitable society. The path to sustainable living involves making conscious choices about consumption, energy usage, waste management, and more. It also requires overcoming systemic barriers and advocating for policies that support sustainability. Together, we have the power to create a positive change and pave the way towards a sustainable and thriving future.
Flow Control in Python

What are Control Flow statements?
We often come across situations in which we need to divert or change the usual sequential flow of execution. Flow control statements can be classified into conditional statements and iteration statements. A conditional statement selects a particular set of statements for execution depending upon a specified condition, while an iteration statement repeatedly executes a block of statements as long as some condition holds. This article assumes that you have basic knowledge of a programming language like C. If you haven't yet started with Python, please read the article Getting Started with Python.

if … elif … else statements
The most popular conditional statement is 'if … else', so let's see how it works in Python.

if condition:
    statement1
    statement2
elif condition:
    statement1
    statement2
else:
    statement1
    statement2
statementx

You might have already spotted the differences compared to other programming languages; they improve the flexibility and ease of programming in Python. Every conditional header line ends with a colon ":", and the statements that belong to a block are identified by their indentation. Maintaining proper indentation is therefore critical in flow control statements. Let's see an example.

x = int(input("Please enter an integer: "))
if x > 0:
    print("The number is positive")
elif x == 0:
    print("The number is zero")
else:
    print("The number is negative")
print("The End")

The above program checks whether a number is positive, negative or zero and prints the corresponding message.

for loop
for is an iterative control statement which repeatedly executes a set of statements depending upon a specified condition. Let's see how it works.

names = ['ABC', 'BCD', 'EFG', 'FGH']
for i in range(0, 4):
    print(names[i])

The range function can be used in several ways, as shown below:
- range(n) generates 0, 1, 2, 3 … n-1
- range(a, b) generates a, a+1, a+2 … b-1
- range(a, b, c) generates a, a+c, a+2c, …
- range(10) generates 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.
- range(1, 5) generates 1, 2, 3, 4.
- range(0, 10, 3) generates 0, 3, 6, 9.

break and continue Statements
As in C programming, the break statement stops execution of the smallest enclosing loop, while the continue statement skips the rest of the current iteration and moves on to the next one. Let's see it with a simple example.

for i in range(10):
    if i == 5:
        continue
    print(i)

print('\n')

for i in range(10):
    if i == 5:
        break
    print(i)

In the first loop, when the value of i equals 5 the current iteration is skipped and the loop continues with the next iteration, so it prints 0 to 4 and then 6 to 9. In the second loop, when i equals 5 the break statement is executed and the loop terminates, so only 0 to 4 are printed.

while loop
As in other programming languages, the while loop is an entry-controlled iterative statement; as with other Python blocks, its inner statements are written with one level of indentation. A simple while program is illustrated below.
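The original while example and its output screenshot did not survive in this copy, so the following is a minimal sketch of an entry-controlled while loop rather than the author's exact program; the loop bound is purely illustrative.

i = 1
while i <= 5:        # the condition is tested before every iteration
    print(i)
    i += 1           # update the counter so the loop eventually terminates
print("Done")

This prints the numbers 1 through 5 followed by "Done". If the condition is false on the very first test, the body never executes at all, which is why the while loop is described as entry-controlled.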
3D printing, also referred to as additive manufacturing, is a process of making three-dimensional objects from a computer-based design. The creation of a 3D printed object is achieved using additive processes. 3D printing is the opposite of subtractive manufacturing, in which a piece of metal or plastic is cut or hollowed out with, for instance, a milling machine. 3D printing enables you to produce complex shapes using less material than traditional manufacturing methods. In an additive process an object is created by laying down successive layers of material until the object is complete. The term "3D printing" covers a variety of processes in which material is joined or solidified under computer control to create a three-dimensional object, with material being added together (such as liquid molecules or powder grains being fused together), typically layer by layer. One of the key advantages of 3D printing is the ability to produce very complex shapes or geometries, and a prerequisite for producing any 3D printed part is a digital 3D model or CAD file.

How does 3D Printing Work?

Computer Based Design or Modeling
3D printable models may be created with a computer-aided design (CAD) package, via a 3D scanner, or by a plain digital camera and photogrammetry software. Models created with CAD contain fewer errors and can be corrected before printing, allowing the design of the object to be verified before it is printed. The manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting. 3D scanning is a process of collecting digital data on the shape and appearance of a real object and creating a digital model based on it.

Slicing: From 3D Model to 3D Printer
Slicing divides a 3D model into hundreds or thousands of horizontal layers and is done with slicing software. Some 3D printers have a built-in slicer and let you feed them the raw .STL, .OBJ or CAD file. When your file is sliced, it is ready to be fed to your 3D printer, which can be done via USB, SD card or the internet. Your sliced 3D model is then 3D printed layer by layer (a minimal sketch of the layer arithmetic appears at the end of this section).

3D printing technologies
There are several different 3D printing technologies. The main differences lie in how the layers are built to create parts. SLS (selective laser sintering), FDM (fused deposition modeling) and SLA (stereolithography) are the most widely used technologies for 3D printing. Selective laser sintering (SLS) and fused deposition modeling (FDM) use melted or softened materials to produce layers.

A 3D printer is unlike your standard 2D inkjet printer. On a 3D printer the object is printed in three dimensions, built up layer by layer, which is why the whole process is also called rapid prototyping. The resolution of current printers is around 328 x 328 x 606 DPI (xyz), or 656 x 656 x 800 DPI (xyz) in ultra-HD mode. The accuracy is 0.025 mm – 0.05 mm per inch, and the model size can be up to 737 mm x 1257 mm x 1504 mm.

The biggest drawback for the individual home user is still the high cost of a 3D printer. Another drawback is that it takes hours or even days to print a 3D model (depending on the complexity and resolution of the model). In addition, professional 3D software and 3D model design also come at a high cost. Alternatively, there are already simplified 3D printers for hobbyists which are much cheaper, and the materials they use are also less expensive. These 3D printers for home use are not as accurate as commercial 3D printers.
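Returning to the slicing step described above, here is a minimal Python sketch of the layer arithmetic a slicer performs. The model height and layer thickness are arbitrary example values, and a real slicer does far more than this (it intersects the model geometry at each height and generates toolpaths such as G-code).

# Minimal illustration of slicing: divide a model's height into horizontal layers.
# The numbers are arbitrary example values, not settings for any particular printer.
import math

model_height_mm = 48.0   # overall height of the 3D model
layer_height_mm = 0.2    # thickness of each printed layer

num_layers = math.ceil(model_height_mm / layer_height_mm)
layer_heights = [round(i * layer_height_mm, 3) for i in range(1, num_layers + 1)]

print(f"Number of layers: {num_layers}")
print(f"First few layer heights (mm): {layer_heights[:5]}")

Halving the layer height doubles the number of layers, which is one reason finer resolution settings make prints take so much longer.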
Difference between a basic rapid prototyping machine and a 3D printer
3D printers are simpler versions of rapid prototyping machines: they are lower cost and less capable. Rapid prototyping is a conventional method that has been used by the automotive and aircraft industries for years. In general, 3D printers are more compact and smaller than RP machines, which makes them ideal for use in offices. They use less energy and take up less space. They are designed for low-volume reproduction of real objects made of nylon or other plastics, which also means 3D printers make smaller parts. Rapid prototyping machines have build chambers at least 10 inches on a side, while a 3D printer has less than 8 inches on a side. However, a 3D printer is capable of all the functions of a rapid prototyping machine, such as verifying and validating a design, creating a prototype, and sharing information remotely. 3D printers are also easy to handle and cheap to maintain. You can buy a DIY kit on the market and build one yourself; for $1,000 or less you can have a 3D printer, while a professional rapid prototyping machine costs at least $50,000. On the other hand, 3D printers are less accurate than rapid prototyping machines, and because of their simplicity the material choices are also limited.

Future of 3D Printing
3-D printing is a high-potential industry. Not too long ago, the printing speed and limited output of 3-D printers made them suitable only for rapid prototyping. In the coming years, 3-D printers will be at the heart of full-scale production capabilities in several industries, from aerospace to automotive to health care to fashion. Manufacturing as we know it will never be the same. Decades of innovation have led to the 3-D printing revolution. Recent advancements in speed, printing technology and material capabilities are now aligned, and together they will push the entire industry forward. Along with growing competition and investment in the 3-D printing industry, these new capabilities will reshape custom manufacturing. The 3-D printing industry is on the verge of another tipping point, and here are the reasons why.

Innovations In Direct-Metal Printing
Direct-metal printing is getting faster and more capable, and many new technologies are now coming into play. The number of metal alloys that can be 3-D printed is on the rise, and they have exceptional performance characteristics. You can get high-performance light-weighting and complexity that is impossible with traditional design and manufacturing processes. With these changes, imagine the possibilities: complex and highly detailed products critical for the aerospace, automotive and mechatronic industries will soon be available for production at a fraction of the cost. If your car needs a repair, you'll soon be able to purchase a shift knob or fuel-door hinge pin that was quickly and inexpensively printed. Aerospace engineers will use the same technology to produce jigs and fixtures for spacecraft. The implications are staggering. What's more, there are fully automated, multi-station direct-metal printers that are essentially an entire factory in a box. With automated parts changes and material replenishment, they can literally operate around the clock. GE is a major player in the direct-metal printing field, and the company predicts that its metal business alone will surpass $1 billion in annual revenue in the next couple of years.
And early-stage companies such as Desktop Metal and Markforged have received several hundred million dollars in funding to deliver entry-level direct-metal systems that print from a metal-plastic filament. Innovations In Printing Speed For decades, 3D printing has been capable in terms of geometric precision and accuracy, but printing speeds remained very slow. But this too is changing. The CLIP technology from Carbon 3D and similar technologies from companies like one of mine, Nexa 3D, are capable of continuously printing photo-curable polymers at very high speeds — 40 or 50 times the speed of a conventional stereolithography 3-D printing system. With the ability to print a centimeter every minute, these systems will no doubt play a major role in the design-to-manufacturing cycle. While a similar job used to take days, now you can get parts in your hand in five, 10 or 15 minutes. It will also unleash work with completely new chemistries, particularly within several variants of urethane. You can make the output rigid, semi-rigid or flexible, and these kinds of variations in chemistry open up a lot of end-use applications, from athletic shoes to vehicle interiors to apparel. Innovations In Selective Laser Sintering Selective laser sintering (SLS) is the ability to produce parts from a variety of nylon materials. It’s been practiced in a few industries for decades now. Take the F-18 fighter jet: Every F-18 in service today has been flying with SLS parts for air ducts, electronics covers and many other components for a couple of decades. So that’s not headline news. But as of late, companies such as HP have entered the market with technology that speeds up production of selective laser-sintered parts. Again, we’re talking dozens of times faster than traditional SLS machines. It will help bring selective laser sintering into the mainstream instead of just confining it to hyper-specific applications like the F-18. With the advent of these faster machines, companies can increase manufacturing speed and manufacturing scale with significantly more affordability. It’s no longer reserved for exotic military, defense and aerospace applications. It’s ready to go mainstream. How These Innovations Will Shape The Future When you combine all these advancements with infinite computing power in the cloud, IoT connectivity, big data and next-generation robotics, you arrive at the realization of Industry 4.0: a truly cognitive, adaptive and largely self-optimizing factory. And this huge development will largely be catalyzed and fueled by additive manufacturing. In some industries, 3-D printing has been considered or utilized for several decades. However, the use cases have primarily been in design and prototyping. Industries that have a great deal of familiarity with this technology such as aerospace and automotive are beginning to unlock these capabilities so that others can create working tools, fixtures, jigs and end-use parts. They are just beginning to scratch the surface of 3-D printing for outright manufacturing. Many companies will also invest in continuous photopolymer systems. Gartner expects the growth in photopolymer 3-D printers over the next couple years to be in the neighborhood of 75%. In sectors such as health care and personalized medical devices, more and more 3-D printers are being used to create hearing aids, dental fixtures, hip replacements, medical implants and surgical tools. 
High-speed photopolymer systems are ideal for dental applications, while direct-metal and SLS solutions are a better fit for certain implants. Perhaps most importantly, another trend that is taking shape is that 3-D printers are now becoming mainstream presences in schools and public libraries. Children aren’t just learning how to use 3-D printing hardware and design software; they’re expecting to be able to use it. An industry once considered a gimmick is proving itself to be a formidable giant. Make no mistake: 3-D printing will be a force that upends nearly every industry over the coming decade, and its influence is exciting and unstoppable.
Young children learn and acquire new skills at a remarkable pace, as they learn how to navigate and interact with the world around them. As they play and explore, children are constantly developing their motor, cognitive, and social skills, all of which will aid them throughout their lives as they go on the study, work, and form relationships with others. Encouraging your little ones in practicing their skills can be of enormous benefit to their development in many ways, helping them to quickly assimilate and implement the fundamental skills they will use in everything they do. Arts and crafts are perfect for helping children to develop skills across a range of developmental areas. Painting and building models both help them to tap into their creative sides, while creating jewelry and patterns using colorful beads assists them in developing their fine motor control and planning skills. Joining in with these activities and planning arts and crafts projects they can enjoy with their friends also helps them to develop their social and communication skills. Here are the six main developmental benefits of crafting and beading for children that allow them to practice and strengthen their most essential skills. Fine Motor Skills Almost all arts and crafts activities give young children an excellent opportunity to develop their fine motor skills and improve their manual dexterity. Given how much we use our hands in everything we do during our day-to-day lives, improving these skills is an excellent way of building independence and confidence. Activities such as holding a paintbrush or pencil, cutting shapes, and threading beads all require fine motor coordination and, therefore, encourage children to exercise and refine these skills. Very young children are most likely to benefit from larger objects, such as paintbrushes, think markers, modeling clay, and larger beads, as these are easier to manipulate using the “three-jaw chuck grasp.” Older children can be encouraged to develop their “pincer grasp” with activities that require more precise manipulation, such as creating bracelets or patterns with smaller beads. Crafting, painting, and beading are all highly stimulating to a child’s imagination, allowing them to explore and express their creativity in whichever way they like. This encourages them to examine their likes and dislikes and experiment with a wide range of colors and materials. By prompting children to talk about their creative choice, parents can also aid their language development as children are encouraged to find new ways to talk and write about their projects. Acquiring new descriptive and emotive vocabulary is a vital part of their learning to effectively express themselves—a skill that is fundamental in how well they interact with other people. Not to mention the fact that developing strong vocabulary is important in the development of early writing skills. A project that requires some degree of planning, such as making patterns from beads, choosing colors for a bracelet, or deciding what to make for a craft project, are all fantastic for helping children to develop their cognitive skills. As they decide what materials they want to use, how they want their project to look, and which colors and patterns they like, they employ planning and problem-solving skills that will be essential for them later in life. PBS Parents reported that participating in craft projects can strengthen a child’s critical thinking skills, which are crucial for effective decision-making. 
Social and Communication Skills By planning craft projects that children can participate in with their friends or siblings, parents can help their child to develop vital social and communication skills. Learning to share materials with other children and to appreciate each other’s work are both excellent ways to help children learn how to bond with others, navigate social situations, and to express admiration for other’s work. By talking about their projects among themselves, and expressing which colors or patterns they like, they can boost their vocabulary skills and practice their self-expression. When children exchange ideas as they craft, they can also learn to appreciate other people’s preferences and gain new inspiration and ideas for themselves. Bilateral coordination, or the act of using both hands simultaneously to complete tasks, is a skill we exercise daily in almost everything we do. From tying shoelaces to opening things, to using a computer, most essential activities involve both hands moving cooperatively and effectively. Craft projects are the best way to get in some early practice of this for young children, as activities like cutting paper with scissors, gluing things together, and threading beads all involve the use of both hands. By finding something fun and absorbing for children to engage in, parents can encourage them to spend hours developing and honing this vital skill. Visual Motor Skills Projects that involve drawing or painting specific things can help children to develop their visual motor skills as they learn to replicate what they see on paper. This allows children to further develop other important skills, such as handwriting, as they strengthen their visual-motor coordination. These activities can also encourage children to practice more precise manipulation of their paintbrush or pencil so they can more effectively write or draw, further developing their fine motor control skills Crafting and beading are excellent ways to assist children in developing essential core skills, and these activities are unique in that they strengthen children’s abilities across a wide range of areas. Learning to use arts and crafts tools like paintbrushes, crayons, beads, and modeling clay encourages children to practice their fine motor skills, allowing them to manipulate objects with more precision. This aids them in developing their bilateral coordination, manual dexterity, and muscle strength, all of which are important for increasing their independence and self-esteem. By planning slightly more complex projects, such as deciding upon patterns and colors for beaded jewelry, children can also develop their problem-solving and decision-making skills. Encouraging children to talk about these projects also benefits them in building their vocabulary and developing their communication skills. Many arts and crafts projects can be put together with little effort or expense (although a lengthy clean-up process is perhaps to be expected) and are great group activities for children. Often some brightly coloured paints, a handful of beautiful beads and some thread, or some glue and cardboard are all that are needed to capture a child’s imagination, stimulating their creativity and exercising their developmental skills.
News Release, Smithsonian Institutes Since the first Homo sapiens emerged in Africa roughly 300,000 years ago, grasslands have sustained humanity and thousands of other species. But today, those grasslands are shifting beneath our feet. Global change—which includes climate change, pollution, and other widespread environmental alterations—is transforming the plant species growing in them, and not always in the ways scientists expected, a new study published Monday revealed. Grasslands make up more than 40 percent of the world’s ice-free land. In addition to providing food for human-raised cattle and sheep, grasslands are home to animals found nowhere else in the wild, such as the bison of North America’s prairies or the zebras and giraffes of the African savannas. Grasslands also can hold up to 30 percent of the world’s carbon, making them critical allies in the fight against climate change. However, changes in the plants that comprise grasslands could put those benefits at risk. “Is it good rangeland for cattle, or is it good at storing carbon?” said lead author Kim Komatsu, a grassland ecologist at the Smithsonian Environmental Research Center. “It really matters what the identities of the individual species are….You might have a really invaded weedy system that would not be as beneficial for these services that humans depend on.” The new paper, a meta-analysis published in the Proceedings of the National Academy of Sciences, offers the most comprehensive evidence to date on how human activities are changing grassland plants. The team looked at 105 grassland experiments around the world. Each experiment tested at least one global change factor—such as rising carbon dioxide, hotter temperatures, extra nutrient pollution or drought. Some experiments looked at three or more types of changes. Komatsu and the other authors wanted to know whether a global change was altering the composition of those grasslands, both in the total species present and the kinds of species. They discovered grasslands can be surprisingly tough—to a point. In general, grasslands resisted the effects of global change for the first decade of exposure. But once they hit the 10-year mark, their species began to shift. Half of the experiments lasting 10 years or more found a change in the total number of plant species, and nearly three-fourths found changes in the types of species. By contrast, a mere one-fifth of the experiments that lasted under 10 years picked up any species changes at all. Experiments that examined three or more aspects of global change were also more likely to detect grassland transformation. “I think they’re very, very resilient,” said Meghan Avolio, co-author and assistant professor of ecology at Johns Hopkins University. “But when conditions arrive that they do change, the change can be really important.” To the scientists’ surprise, the identity of grassland species can change drastically, without altering the number of species. In half the plots where individual species changed, the total amount of species remained the same. In some plots, nearly all the species had changed. “Number of species is such an easy and bite-sized way to understand a community…but what it doesn’t take into account is species identity,” Avolio said. “And what we’re finding is there can be a turnover.” For Komatsu, it’s a sign of hope that most grasslands could resist the experimentally induced global changes for at least 10 years. 
“They’re changing slowly enough that we can prevent catastrophic changes in the future,” she said. However, time may not be on our side. In some experiments, the current pace of global change transformed even the “control plots” that were not exposed to experimentally higher global change pressures. Eventually, many of those plots looked the same as the experimental plots. “Global change is happening on a scale that’s bigger than the experiments we’re doing….The effects that we would expect through our experimental results, we’re starting to see those effects occurring naturally,” Komatsu said.

The abstract will be available online at www.pnas.org/cgi/doi/10.1073/pnas.1819027116
Using a specially designed computational tool as a lure, scientists have netted the genomic sequences of almost 12,500 previously uncharacterized viruses from public databases. The finding doubles the number of recognized virus genera – a biological classification one step up from species – and increases the number of sequenced virus genomes available for study almost tenfold. The research group studies viruses that infect microbes, and specifically bacteria and archaea, single-cell microorganisms similar to bacteria in size, but with a different evolutionary history. Microbes are essential contributors to all life on the planet, and viruses have a variety of influences on microbial functions that remain largely misunderstood, said Matthew Sullivan, assistant professor of microbiology at The Ohio State University and senior author of the study. Sullivan partners with scientists studying microbes in the human gut and lung, as well as natural environments like soils and oceans. Most recently, he reported on the diversity of oceanic viral communities in a special issue of the journal Science featuring the Tara Oceans Expedition, a global study of the impact of climate change on the world’s oceans. “Virus-bacteria and virus-archaea interactions are probably quite important to the dynamics of that microbe, so if researchers are studying a microbe in a specific environment, they’ve been missing a big chunk of its interaction dynamics by ignoring the viruses,” Sullivan said. “This work will help researchers recognize the importance of viruses in a lot of different microbes. “In all of our studies, we’re working with people who know the microbes well, and we help them decide how viruses might be helpful to the microbial system. The projects range from fundamental, basic science to applied medical science.” The research is published in the online journal eLife. Finding a treasure trove of new virus genome sequences has opened the door to using those data to identify previously unknown microbial hosts, as well. These new possibilities are attributed to VirSorter, a computational tool developed by study lead author Simon Roux, a postdoctoral researcher in Sullivan’s lab. The sorter scoured public databases of sequenced microbial genomes, looking for fragments of genomes that resembled virus genomes that had already been sequenced – for starters. VirSorter also “fished” for sequences by looking for genes known to help produce a protein shell that all viruses have, called a capsid. “The idea is that bacteria don’t use capsids or produce them, so any capsid gene should come from a virus,” Roux said. The sorter then associated capsid genes with unfamiliar genes – those considered new, small or organized differently – that are unlikely to be produced by bacteria. “None of these genomic features is really a smoking gun per se, but combining them led to a robust detection of ‘new’ viruses – viruses we did not have in the database, but can identify because they have capsid genes and a viral organization,” he said. Using microbial genomes as a data source meant researchers could link newly identified virus sequences to the proper microbial host. The scientists then tried a reverse maneuver on the data to see if virus sequences alone could be used to identify unknown hosts – and this way of analyzing the sequences could predict the host with up to 90 percent accuracy. “We can survey a lot of environments to find new viruses, but the challenge has been answering, who do they infect?” Sullivan said. 
“If we can use computational tricks to predict the host, we can explore that viral-host linkage. That’s a really important part of the equation.” Though viruses are generally thought to take over whatever organism they invade, Sullivan’s lab has identified a few viruses, called prophages, which coexist with their host microbes and even produce genes that help the host cells compete and survive. Viruses can’t survive without a host, and the most-studied viruses linked to disease are lytic in nature: They get inside a cell and make copies of themselves, destroying the cell in the process. But the genome sequences revealed in this study suggest that there are many more prophage-like viruses that are different in one important respect: Their genome remains separate from their microbial hosts’ genome. “The extrachromosomal form of this virus type appears quite widespread, and virtually nobody is studying these kinds of viruses,” said Sullivan, who also has an appointment in civil, environmental and geodetic engineering. “That is a really different and largely unexplored phenomenon, and it’s important to understand those viruses’ ability to interact and tie into the function of those cells.” Source: Ohio State University
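As a rough illustration of the evidence-combining idea described above, here is a toy Python sketch. It is not VirSorter's actual algorithm or code; the gene annotations are invented and the thresholds are arbitrary, but it shows how a capsid-gene hit and a high fraction of "unfamiliar" genes can be combined into a single call for a genome fragment.

# Toy illustration of combining evidence to flag a putative viral region in a
# microbial genome. This is NOT VirSorter's algorithm -- just a sketch of the
# general idea described above: capsid-like genes plus unfamiliar genes.

# Hypothetical annotations: one entry per predicted gene in a genome fragment.
genes = [
    {"id": "g1", "hits_capsid_gene": False, "known_bacterial": True},
    {"id": "g2", "hits_capsid_gene": True,  "known_bacterial": False},
    {"id": "g3", "hits_capsid_gene": False, "known_bacterial": False},
    {"id": "g4", "hits_capsid_gene": False, "known_bacterial": False},
]

capsid_hits = sum(g["hits_capsid_gene"] for g in genes)
unfamiliar = sum(not g["known_bacterial"] for g in genes)
fraction_unfamiliar = unfamiliar / len(genes)

# Flag the fragment only when both lines of evidence point the same way.
looks_viral = capsid_hits >= 1 and fraction_unfamiliar >= 0.5
print(f"capsid-like genes: {capsid_hits}, unfamiliar fraction: {fraction_unfamiliar:.2f}")
print("putative viral region" if looks_viral else "no viral signal")

As the article notes, no single feature is a smoking gun on its own; the published tool combines several such genomic signals, and many more of them, before calling a sequence viral.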
Nature's last stand
This display enables you to explore the vulnerable nature of life. By reflecting on the processes that have affected the survival of individual species, we can often lay the blame at the feet of humankind. Why have humans been so destructive? What can we learn? And what action needs to be taken to protect what survives? This display discusses these issues through the evolutionary adaptations that made each species unique. In the last 200 years humans have driven more than 57 Australian species of mammals, birds, reptiles, frogs and fish towards extinction. Conservation programs have been introduced to save other species from disappearing permanently. Will they be effective? Only time will tell. To save a species we also need to protect its habitat from destruction. Unfortunately, most ecosystems are very fragile, and any modification by humans usually causes irreversible damage; superficially this may not be apparent, but for the organisms living with the effects of our actions the consequences are dire. Take, for example, the Paradise Parrot, Psephotus pulcherrimus. The last sighting of this beautiful parrot occurred in 1922. Its demise was caused by habitat destruction: overgrazing by cattle trampled the grasses it fed on and disturbed the termite nests in which the parrot often nested. Its decline was furthered by the rampant spread of the introduced prickly pear, which stopped other food grasses from growing. Disease and overcollection were the final blow, and we must now accept that this beautiful bird is extinct.
Protestantism is the religious tradition of Western Christianity that rejects the authority of the pope of Rome. Protestantism originated in the Reformation of the 16th century in Christian Europe, and Protestants have been said to share 3 basic convictions: 1) the Bible is the ultimate authority in matters of religious truth; 2) human beings are saved only by God's "grace" (ie, unearned gift); and 3) all Christians are priests; ie, are able to intercede with God on behalf of others and themselves, able to bear witness, able to confess their sins and be forgiven. When a carefully engineered Catholic majority voted down certain reforms at the Diet of Speyer in Germany in 1529, the defeated minority earned the name "Protestant," derived from the Latin phrase meaning "to testify in favour of something." The rejection of Roman Catholic teaching and practice quickly became focused on rejection of the authority of the pope, often referred to as the "Anti-Christ" by Protestants. Repudiation of the papacy has been the only common characteristic of all Protestants at all times. The Shaping of Western Civilization It has been said that Western civilization has been shaped decisively by the other 3 convictions of Protestantism. For example, the veneration of the Bible fostered literacy and popular education. The experience of God's gracious gift paradoxically moved Protestants to insist upon a stern standard of morality, and to work hard (the so-called "Protestant Ethic" described by sociologist Max Weber). The "priesthood of all believers" led to modern democracy, and to worldly activity that ironically favoured the growth of secularism (ie, a standpoint independent of the sacred). While scholars are seriously divided over the validity of these claims, most Protestants have been happy to assert them. In fact, Protestant practice has often obscured the 3 disputed characteristics. If Protestants unite around the authority of the Bible, they frequently interpret it differently and usually give emphasis to different parts of it. Protestants have been known to speak of God's grace, but to act as if everything depended upon their own human effort. And respect for the ordained ministry of word, sacraments and pastoral care has undermined the priesthood of all believers. In Contrast to the Catholics (see Catholicism), Protestants generally celebrate only 2 sacraments (baptism and the Lord's supper) and they emphasize preaching and relative informality in their services of Sunday worship. Protestant congregations often sing in harmony. The high points of their religious calendar are Christmas, Easter and Pentecost (feast of the descent of the Holy Spirit and the founding of the Church). Only in some instances do Protestants (notably Anglicans) include bishops among their clergy, whose ranks usually include women, although not in large numbers. Lay people generally play significant roles in the life of the local congregation, which remains the basic and most characteristic unit of Protestant churches. The early French explorers brought Protestant chaplains with them to Canada, and their violent disputes with Catholic chaplains established a pattern that recurred in the religious history of Canada. By 1659, however, it was clear that Protestants would not be tolerated in New France. Then the British conquest shifted ascendancy to the Protestants and, until about the time of WWII, Protestants exercised hegemony over the culture and institutions of English-speaking Canada. 
That Protestant hegemony was finally dissipated by the presence of immigrants from Europe, many of whom were Catholics, Jews and Orthodox Christians, and by the secularizing of Canada. Today Protestants constitute 36% of the Canadian population, and just over half of those Protestants are members of the United Church of Canada and the Anglican Church of Canada (see Anglicanism). Protestantism took a distinctive form in Canadian history. In continental Europe, Lutherans played a large part, but not in English-speaking Canada. In Britain, the Anglican Church was the established church, but in Canada Anglicans never achieved dominance. In the US, there were many Protestant denominations, but in English-speaking Canada church unions occurred more readily and a few denominations rose to pre-eminence. Until the 20th century with its non-Protestant immigration and secularization, Canadian Protestants had relatively little to protest.

We have seen that historians dispute what the heritage of Protestantism is. Today many wonder what its future will be. Protestants marry non-Protestants with increasing freedom and regularity; Canadian society is increasingly secularized; and religious life itself is more private than it once was. With the new attention that Canadian Catholics are paying to the study of the Bible, to the experience of God's grace in the Charismatic Renewal and to the importance of lay ministry and vocations, the 3 alleged convictions of Protestantism seem less distinctively Protestant. On the Protestant side, the affection and respect which many have shown for some recent popes encourage speculation that the only consistent anchor of Protestantism may be working itself loose.
Given that the archaeological record is often incomplete, how can archaeologists make reliable conclusions about human behavior in the past? Archaeologists employ a variety of approaches to this end, using statistical, interpretive, comparative and even analogical methods to understand their research sites. One especially interesting method in this toolkit involves strategically exploring how humans behave today in order to shed light on how ancient peoples may have acted in the past. In doing so, archaeologists seek to identify common conditions that shape human life in the present and then extrapolate these backward to illuminate the archaeological record. Cross-cultural researchers Melvin Ember and Carol Ember and archaeologist Peter Peregrine developed this method, called archaeoethnology, beginning in the mid-1990s. Using the archaeoethnological approach, archaeologists analyze cross-cultural data on living societies in order to statistically explore the correlations that exist between human activities, environments, social organization, resource usage and more. For the method to yield reliable results, researchers must examine human behavior across a carefully selected sample of societies. HRAF's cross-cultural database, eHRAF World Cultures, provides one uniquely valuable tool for doing precisely this sort of research.

As in the past, so too today? Using ethnographic analogies

The archaeoethnological method can be contrasted with a method known as ethnographic analogy, where archaeologists draw conclusions about how past people behaved by exploring the behavior of contemporary descendant communities. In cases where past peoples and living societies share an established historical connection, the method is called the "direct historic approach" to ethnographic analogy (Trigger 2006). In past studies, the direct historic approach has provided a number of important insights into historical processes, cultural evolution, and social change around the world. In the Maya region of Mexico and Central America, researchers have applied this analogical method to understand how ancient Maya peoples lived, worked, and thought about the world (for example, see Freidel et al. 1993, Maya Cosmos: Three Thousand Years on a Shaman's Path). Drawing from the contemporary beliefs and practices of living Maya peoples, archaeologists interpret ancient materials, artifacts, and worldviews (Vadala 2016). Researchers using this approach benefit most from using detailed case studies and making carefully researched comparisons between ancient and living peoples.

Problems and assumptions

Although the direct historical method of ethnographic analogy may seem alluringly simple and straightforward, it can suffer from unique pitfalls, such as the assumption that people's lives and behaviors have remained steady over long stretches of social time (Peregrine 2004). This is often a problem when hunter-gatherer societies are used as subjects of ethnographic analogy. For example, during the 1950s and 1960s ethnographic work characterized the ǃKung (San) people as a typical peaceful hunter-gatherer society. These characterizations led to many studies and even movies ("The Gods Must Be Crazy") portraying the San as an unchanging, peaceful culture emblematic of all hunter-gatherer societies. Breaking these assumptions, Ember and Ember (1997) point out that San bands were described as frequently engaging in armed combat in the 1920s.
In the 1950s, the San appeared to be peaceful, but this was probably due to government forces pacifying the area (Ember and Ember 1997). Furthermore, other studies have shown that the San people often relied on herding, which disputes the assumption that they were always a hunter-gatherer society (Schrire 1984; Wilmsen 1989). To account for similar issues, most contemporary anthropological research recognizes that behaviors and environments can change even over short spans of time. Given that dramatic transformations can occur in the span of only a few decades, how can archaeologists reliably assume that living peoples and ancient peoples separated by centuries would share practices, beliefs, and attitudes that align with one another? Thus, while the direct historic approach may indeed work well with a limited number of cultures, this method should be used with great caution.

A grounded alternative: archaeoethnology

Instead of focusing on direct historical connections, Ember and Ember's (1995) and Peregrine's (2004) archaeoethnological method turns to the enormous insights provided by cross-cultural studies of human social variation. Archaeologists can harness the potential of this cross-cultural research by expanding their comparisons to include the broad global patterns in human behavior, activity, and lifeways that cross-cultural anthropologists have already worked hard to establish. Using sound statistical methods and a wide variety of random samples, this form of analysis can produce highly generalized results, and the findings can be significant for understanding any society—past or present (Ember and Ember 1995, Peregrine 2004).

Social organization and dwellings

Dwelling size is one variable that can be determined across a multitude of archaeological sites, and differing shapes and sizes of dwellings have also been associated with particular patterns of social organization in cross-cultural research. These studies have demonstrated that in agricultural societies, large dwellings (>175 square meters of floor space) are strongly associated with matrilocality—the practice where new families live in the same community as the wife's family. In contrast, smaller dwellings (<28.6 square meters of floor space) are strongly associated with patrilocal societies, where newlyweds live in the communities of the husband (M. Ember 1973:177-80, Divale 1977:110-11, Porčić 2010:413-14; also see HRAF's dwelling module). With this in mind, archaeologists can infer important information about matrilocal and patrilocal social organization from the floor area of dwellings in a community.

How has this method been used? Examining violence and warfare

Carol Ember and Melvin Ember's cross-cultural studies on violence provide examples of research that can be used to understand ancient human behavior and societies across the world. In "Violence in the ethnographic record: Results of cross-cultural research on war and aggression," Ember and Ember (1997) summarize their cross-cultural approach to exploring the correlations between certain types of violence and social organization. Using a sample of 186 societies around the world, a system to categorize violence, and multiple regression analysis, they explored the relationships between violence and other social factors. One of their most important findings is that not only is more war correlated with greater fortitude and aggression training in males, but more war also precedes male aggression training (1994, 1997:7-8).
This is an important contrast to the view that societies first reward violence, resulting in a higher likelihood that they will engage in warfare. Their research also shows that warfare cannot be predicted from how complex or simple a society is; simple and complex societies are equally likely to engage in war. Intriguingly, Ember and Ember (1992, 1997:8) also found that natural disasters that destroy food supplies strongly predict a high incidence of warfare. They note that even the threat of disasters (that is, when ethnographers report that people worry that a disaster, such as a drought, might come at any time) predicts frequent war as much as the actual occurrence of one or more disasters. Exploring this in an archaeological context in the U.S. Southwest from about 600 to 1600 A.D., Stephen Lekson (2002:615) found that periods of warfare or raiding accompanied periods of "resource unpredictability." In the dry southwestern context, where every drop of rain is important, periods of unpredictability are periods of unpredictable rainfall. The Puebloan people living in the Chaco Canyon region viewed these periods as precursors or potential initiations of long-term droughts (Lekson 2002). Severe, long-term droughts eventually led most Puebloan peoples near the Chaco Canyon area to abandon their dwellings in the period between 1250 and 1450 A.D. (Fagan 2005).

These insights could prove useful to archaeologists exploring ancient societies that have had a history of warfare. First, since complexity is not a predictor of warfare, archaeologists have to consider the possibility of warfare in societies of varying complexity. Archaeologists may also need to take the possibility of warfare into account in societies where evidence suggests that social values shifted from valuing less aggressive to more aggressive males. Furthermore, archaeologists working in regions where natural disasters that destroy the food supply (droughts, floods, insect invasions, etc.) are common may find value in exploring the possibility of warfare in the region (Ember and Ember 1997:17). Using eHRAF World Cultures ethnographic collections in combination with eHRAF Archaeology collections and the Outline of Archaeological Traditions (Peregrine 2001), scholars can get comparative studies up and running quickly. Furthermore, given that eHRAF Archaeology and eHRAF World Cultures offer such an abundant and varied dataset of cultural information, researchers can produce rich correlative studies.

Peter Peregrine, Carol Ember and Melvin Ember (2004) have argued that there are "universal patterns in cultural evolution" (2004:145) based on both ethnographic and archaeological data. Their approach uses a technique called "Guttman scaling." Such scaling evaluates whether traits can be ordered hierarchically, so that possessing a trait higher on the scale predicts possession of the traits lower on the scale. For example, when studying prejudice, if a person says they would be fine with their child marrying someone of a different ethnic group, they would almost assuredly be fine with someone from that group living in their neighborhood, or living in their country. With regard to cultural evolution, Peregrine, Ember and Ember built on previous cross-cultural work by Freeman and Winch (1957), who examined 52 ethnographically described cultures and came up with 11 ordered traits they believed represented a transition from "folk" to "urban" societies.
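To make the scaling idea concrete, here is a minimal, self-contained illustration (the trait names and the pass/fail data are made up for the example; real analyses use the full set of traits and formal scaling statistics):

```python
# Minimal Guttman-scale check. Traits are ordered from "lowest" to "highest";
# in a perfect scale, a society that has a higher trait also has every lower one.
# The trait names and example data below are hypothetical placeholders.

TRAITS = ["settlements", "craft specialists", "towns over 400 people", "writing"]

societies = {
    "A": {"settlements"},
    "B": {"settlements", "craft specialists"},
    "C": {"settlements", "craft specialists", "towns over 400 people", "writing"},
    "D": {"settlements", "towns over 400 people"},   # violates the scale once
}

def scale_errors(present: set, ordered_traits: list) -> int:
    """Count Guttman errors: higher traits present while a lower trait is missing."""
    errors = 0
    for i, trait in enumerate(ordered_traits):
        if trait in present:
            errors += sum(1 for lower in ordered_traits[:i] if lower not in present)
    return errors

total_errors = sum(scale_errors(traits, TRAITS) for traits in societies.values())
total_cells = len(societies) * len(TRAITS)
# Coefficient of reproducibility: 1.0 means a perfect Guttman scale;
# values above roughly 0.9 are conventionally treated as scalable.
print("Coefficient of reproducibility:", 1 - total_errors / total_cells)
```

In practice Peregrine, Ember and Ember worked with the Freeman-Winch traits and formal scaling statistics rather than a toy check like this; the sketch only shows what "a higher trait predicts the lower ones" means operationally.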
Peregrine, Ember, and Ember used a random sample of 20 archaeological traditions from eHRAF Archaeology to see if traits from the Freeman-Winch scale (those that could be measured archaeologically) replicated the Guttman scale. They did. Then the team looked to see if the Guttman scale fit 8 archaeological civilization sequences. It did. This evidence, replicated in contemporary and prehistoric societies, strongly suggests these patterns of cultural evolution are near universals (Peregrine, Ember, and Ember 2004:147). In 2007, building on this previous work, Peregrine, Ember and Ember used these scales to investigate how complex states might emerge out of simpler political organizations. After further testing the validity of their scales, they used cluster analysis to demonstrate that no single cause or correlate determined the cultural evolution of states. Instead of a single cause or a "prime mover," their research suggested key evolutionary processes were linked or clustered together. For example, their models show that the emergence of metal production, social classes, and towns with populations greater than 400 people were evolutionarily linked and probably co-dependent. They argued that further research should explore these clusters and why they are linked together.

In conclusion, archaeoethnology is an empirically grounded method for understanding ancient societies, and it offers a data-driven alternative to ethnographic analogy. When used together in a systematic fashion, ethnographic and archaeological research can shed light on human conditions, practices, and forms of social organization across time and space. HRAF's research collections and databases provide a valuable entry point into such research, as eHRAF Archaeology, eHRAF World Cultures, and the open-access database Explaining Human Culture contain information on cultures, topics, and time periods around the world. With these tools, researchers can understand and explore sociocultural correlations which continue to shape history and human life.

References

Divale, William T. "Living Floor Area and Marital Residence: A Replication." Behavior Science Research 12.2 (1977): 109-15. https://doi.org/10.1177/106939717701200202.

Ember, Carol R., and Melvin Ember. "Resource unpredictability, mistrust, and war: A cross-cultural study." Journal of Conflict Resolution 36, no. 2 (1992): 242-262.

Ember, Carol R., and Melvin Ember. "War, socialization, and interpersonal violence: A cross-cultural study." Journal of Conflict Resolution 38, no. 4 (1994): 620-646.

Ember, Carol R., and Melvin Ember. "Violence in the ethnographic record: Results of cross-cultural research on war and aggression." In Troubled Times: Violence and Warfare in the Past (1997): 1-20.

Ember, Melvin. "An Archaeological Indicator of Matrilocal Versus Patrilocal Residence." American Antiquity 38.2 (1973): 177-82. https://doi.org/10.2307/279363.

Ember, Melvin, and Carol R. Ember. "Worldwide cross-cultural studies and their relevance for archaeology." Journal of Archaeological Research 3.1 (1995): 87-111.

Fagan, B. M. Chaco Canyon: Archaeologists Explore the Lives of an Ancient Society. Oxford University Press, 2005. ISBN 978-0195170436.

Freeman, Linton C., and Robert F. Winch. "Societal Complexity: An Empirical Test of a Typology of Societies." American Journal of Sociology 62 (1957): 461-466.

Freidel, David A., Linda Schele, Joy Parker, and Justin Kerr. Maya Cosmos: Three Thousand Years on the Shaman's Path. New York: W. Morrow, 1993.
Lekson, Stephen H. "War in the Southwest, war in the world." American Antiquity 67.4 (2002): 615.

Peregrine, Peter N. Outline of Archaeological Traditions. HRAF, 2001.

Peregrine, Peter N. "Cross-cultural approaches in archaeology: comparative ethnology, comparative archaeology, and archaeoethnology." Journal of Archaeological Research 12.3 (2004): 281-309.

Peregrine, Peter N., Carol R. Ember, and Melvin Ember. "Universal patterns in cultural evolution: An empirical analysis using Guttman scaling." American Anthropologist 106.1 (2004): 145-149.

Peregrine, Peter N., Carol R. Ember, and Melvin Ember. "Modeling state origins using cross-cultural data." Cross-Cultural Research 41.1 (2007): 75-86.

Porčić, Marko. "House Floor Area as a Correlate of Marital Residence Pattern: A Logistic Regression Approach." Cross-Cultural Research 44.4 (2010): 405-24. https://doi.org/10.1177/1069397110378839.

Schrire, Carmel. "Wild Surmises on Savage Thoughts." In Schrire, C. (ed.), Past and Present in Hunter-Gatherer Studies. Orlando: Academic Press, 1984.

Trigger, Bruce G. A History of Archaeological Thought. Cambridge University Press, 2006. https://doi.org/10.1017/CBO9780511813016.

Vadala, Jeffrey Ryan. "Analysis of Ancient Maya Caching Events at Cerro Maya (Cerros), Belize: Assemblages of Actor-Networks, Temporality and Social Fields in the Late Preclassic Period." PhD diss., University of Florida, 2016.

Wilmsen, Edwin N. Land Filled with Flies: A Political Economy of the Kalahari. Chicago: University of Chicago Press, 1989.
Using genetic engineering techniques, researchers have created artificial hemoglobin that could someday alleviate perennial blood bank shortages. The achievement is reported in the November 21 issue of Biochemistry, a peer-reviewed journal of the American Chemical Society, the world's largest scientific society.

Hemoglobin - the vital component that carries life-supporting oxygen through the body - could be used in artificial blood transfused during surgeries and transplants, said Chien Ho, lead researcher from Carnegie Mellon University in Pittsburgh, Pa. The method described in the research was used to produce small amounts of hemoglobin in a laboratory and needs to be improved, said Ho. But he believes the product is likely to be part of an eventual blood substitute. Neither artificial blood nor its components are currently available. Several potential blood substitutes are being investigated, all of which incrementally advance development of an oxygen carrier, necessary for synthetic blood that could be safely used by people, Ho said. Because the population is aging and demand for blood is increasing - for surgeries, transfusions and to treat blood disorders - the need for a substitute is becoming urgent.

"There is an SOS for blood right now and that demand will only grow in the future," Ho said. "I am very excited about this research as a potential candidate in a blood substitute system. It shows great potential as a successful oxygen carrier, and is something that could realistically be used in people one day."

The researchers overcame problems that have plagued previous attempts to create oxygen carriers by building mutations into the hemoglobin molecule to enhance functioning. They allow it to act just like the hemoglobin molecule in regular human blood, Ho said. But like human blood, artificial hemoglobin would have to be replenished frequently. Another approach to making artificial hemoglobin is needed to provide sufficient amounts for use in people. For example, pigs or other animals could be used to produce hemoglobin in bulk, Ho said. Different types of hemoglobin can be designed using the same techniques, Ho said. For example, they might be tailored to meet specific medical requirements for afflictions like sickle-cell anemia and other blood-related disorders.

Some 4 million Americans receive transfusions of whole blood annually, including an estimated 3 million surgery patients, according to the American Association of Blood Banks. Approximately 13 million units of blood are donated each year, according to 2000 statistics compiled by the association.
Brown adipose tissue, or brown fat, is one of the two types of adipose tissue and is abundant in hibernating animals and newborn humans. Compared to white adipose tissue or white fat, whose cells contain a single lipid droplet, brown fat contains numerous smaller lipid droplets and a higher number of iron-containing mitochondria. It is worth mentioning that scientists initially thought that the function of brown adipose tissue in adult humans was negligible. Recent studies revealed that it plays a critical role in metabolism, especially in the consumption of energy.

What is brown adipose tissue?

This specialised tissue is responsible for non-shivering thermogenesis—a process of generating heat and raising body temperature by burning calories instead of shivering. In their study that identified early B cell factor-2, or Ebf2, as a protein responsible for the development, differentiation, and function of fat cells, researchers Sona Rajakumari et al explained that brown fat burns excess energy while white fat stores it. In an experiment that involved overexpressing the Ebf2 protein in precursor white fat cells so that they matured into brown fat cells, Rajakumari et al further demonstrated the difference between the two types of adipose tissue. The induced brown adipose tissue consumed higher amounts of oxygen, had a greater number of mitochondria, and had an increased expression of genes involved in heat production.

While brown adipose tissue is abundant in children, its amount decreases as an individual transitions to adolescence and further into adulthood. Researchers Lindsay Robinson et al mentioned that most adults only have 50 to 100 grams of brown fat. However, the capacity of this tissue to generate heat is 300 times greater than that of any other tissue in the body.

Combating obesity by exploiting brown fat

Because brown adipose tissue burns calories at rest, researchers are exploring the role of this tissue in weight loss and in preventing obesity. Researchers Laurie Goodyear et al noted that white fat is associated with increased body mass and obesity. On the other hand, because brown adipose tissue is associated with lower body mass index and high energy consumption, using glucose and fatty acids as fuel, they believe that it plays a fundamental role in the maintenance of a leaner and more metabolically healthy phenotype. They also hypothesised that a brown fat transplant could be used as a therapeutic tool to combat obesity and metabolic disease.

To test their hypothesis, Goodyear et al performed brown fat transplants in mice that were then fed either a normal diet or a high-fat diet. Eight to 12 weeks after transplantation, recipient mice demonstrated improved glucose tolerance, increased insulin sensitivity, lower body weight, decreased fat mass, and a complete reversal of insulin resistance induced by the high-fat diet. The transplanted brown adipose tissue also secreted several hormones, including IL-6, which mediated effects throughout the body.

Brown fat transplants could be a possible solution to manage weight and combat obesity. However, the separate studies of Rajakumari et al and Junko Sugatani et al also suggest the use of protein targeting through drug therapy. Take note that the study of Rajakumari et al identified the Ebf2 protein as responsible for the development and maintenance of brown fat cells. However, the researchers cautioned that this protein is not a readily druggable target.
They suggested, though, that it may be possible to pharmacologically block or stimulate the interaction of Ebf2 with a partner protein. Sugatani et al also identified platelet-activating factor receptor (PAFR) gene deficiency as a factor in the development of obesity due to impaired thermogenesis. In an experiment that involved knocking down the PAFR gene in mice, they found that the deficiency resulted in brown fat dysfunction, characterised by impaired thermogenesis.

Cold climates and brown fat activity

Other studies also suggest the use of natural mechanisms to stimulate the energy-consuming activity of brown adipose tissue. For example, A. C. Carpentier et al enrolled six healthy adult men in a study that involved controlled cold-exposure conditions. All subjects demonstrated substantial nonesterified fatty acid and glucose uptake upon cold exposure. The findings also demonstrated cold-induced activation of oxidative metabolism in brown fat, but not in adjoining skeletal muscles and subcutaneous adipose tissue. This activation was associated with an increase in total energy expenditure. However, the researchers also found that exposure to warm temperatures did not result in similar energy expenditure and activation.

The findings of Carpentier et al are echoed in another study by Paul Lee et al, which involved enrolling five healthy adult men and subjecting them to temperature acclimation that lasted four months. Results of the study revealed that long-term exposure to cold environments can stimulate brown fat growth and activity by about 30 to 40 percent. Prolonged exposure to warm climates decreased the amount of this tissue below the baseline level.

Tore Bengtsson et al explained how a cold environment activates brown adipose tissue. When the body encounters cold temperatures, the sympathetic nervous system activates adrenoceptors on the surface of brown fat cells to stimulate glucose uptake from the bloodstream. Brown fat cells then use this glucose as a fuel source to generate body heat. An accompanying commentary on the study of Carpentier et al mentioned that increasing the amount of brown adipose tissue in a person is unlikely to make an individual leaner. The trick is to make sure that this tissue is active and burns calories.

Stress and brown fat activity

Stress also appears to induce brown fat activity. Researchers Labros S. Sidossis et al studied burn trauma patients. They initially hypothesised that burn trauma provides a unique model of severe and prolonged stress in which adrenaline release is massively increased for several weeks following the injury. After enrolling 72 patients who had sustained severe burns over approximately 50 percent of their bodies, along with a comparison group of 19 healthy individuals, the researchers took samples of white fat. They measured the metabolism of the samples and the composition of the fat cells, while also noting the resting metabolic rate of the participants. Take note that they specifically took samples of white fat from burn trauma patients at different time points following their injury. Findings revealed that in burn trauma patients there was a gradual shift in the molecular and functional characteristics of white fat towards a more brown fat phenotype over time. This suggested a progressive browning of white fat in response to burn injury. Sidossis et al further explained that stress can result in the browning of white fat. Accordingly, brown fat cells express a protein called UCP1.
This protein prompts mitochondria to burn calories without making any chemical energy—just heat. Adrenaline released due to the stress of burn trauma activates this protein. They concluded that browning of white fat is possible and that their study could help pave the way for the development of drugs that mimic the effects of burn trauma.

But severe adrenergic stress is not the only mechanism for stimulating brown fat. Mild stress can directly stimulate the activity of existing brown fat. Robinson et al enrolled five healthy lean women. They subjected them to short maths tests in the first session and had them watch a relaxation video in the second. To assess the stress response, they measured cortisol levels in saliva. To measure brown fat activity, the researchers used infrared thermography to detect changes in the temperature of the skin overlying the main area of brown fat. Although the actual mathematics tests did not elicit an acute stress response, the anticipation of being tested did, and led to raised cortisol and warmer brown fat. The two were positively correlated, with higher cortisol linked to more brown fat activity and thus more potential heat production. The researchers concluded that their study might open new techniques to exploit brown adipose tissue for weight management and combating obesity. These techniques would involve inducing mild stress and incorporating it into dietary and/or environmental intervention programs.

Further details of the study of Rajakumari et al are in the article "EBF2 Determines and Maintains Brown Adipocyte Identity" published in April 2013 in the journal Cell Metabolism. Further details of the study of Robinson et al are in the article "Brown Adipose Tissue Activation as Measured by Infrared Thermography by Mild Anticipatory Psychological Stress in Lean Healthy Females" published in February 2016 in the journal Experimental Physiology. Further details of the study of Goodyear et al are in the article "Brown Adipose Tissue Regulates Glucose Homeostasis and Insulin Sensitivity" published in December 2012 in The Journal of Clinical Investigation. Details of the study of Sugatani et al are in the article "Anti-obese Function of Platelet-Activating Factor: Increased Adiposity in Platelet-Activating Factor-Deficient Mice with Age" published in October 2013 in The FASEB Journal. Further details of the study of Carpentier et al are in the article "Brown Adipose Tissue Oxidative Metabolism Contributes to Energy Expenditure during Acute Cold Exposure in Humans" published in January 2012 in The Journal of Clinical Investigation. Further details of the study of Lee et al are in the article "Temperature-acclimated Brown Adipose Tissue Modulates Insulin Sensitivity in Humans" published in June 2014 in the journal Diabetes. Details of the study of Bengtsson et al are in the article "Glucose Uptake in Brown Fat Cells is Dependent on mTOR Complex 2-Promoted GLUT1 Translocation" published in November 2014 in The Journal of Cell Biology. Further details of the study of Sidossis et al are in the article "Browning of Subcutaneous White Adipose Tissue in Humans after Severe Adrenergic Stress" published in August 2015 in the journal Cell Metabolism.
Why Flame Retardant-Free?

To meet certain flammability standards, flame retardant chemicals are added to a wide range of products, including computers, couches, hospital beds, waiting room chairs, and hospital privacy curtains. Unfortunately, many of these flame retardant chemicals do not remain in the product; they slowly off-gas into the air, dust, and water, eventually entering the food chain and building up in our bodies. Many flame retardants are linked to a range of negative health effects, and levels of toxic flame retardants in people have already reached levels of concern. Depending on the flame retardant, effects include reproductive, neurocognitive, and immune system impacts, among others. Three common flame retardants appear on California's Proposition 65 list as human carcinogens.

Brominated flame retardants are used as additives in products to reduce the risk and inhibit the spread of fire. Testing has shown that brominated flame retardants are toxic and have the potential to disrupt fetal development. It has also been demonstrated that these brominated chemicals bioaccumulate in the human body. Recent research on one class of brominated flame retardants, polybrominated diphenyl ethers, or PBDEs, shows that PBDE exposure can interrupt brain development in mice, permanently impairing learning and movement. So far, scientists have not identified safe levels of exposure that do not produce damage. Additionally, both PCBs and PBDEs are found in humans, and their effects on brain development may be additive.

When selecting upholstery fabric, you should seek to avoid the use of added flame retardants whenever possible. EnviroLeather™ passes standard industry testing for flammability without the use of flame retardants. See the article below for more detail.

MADSEN, LEE, AND OLLE / ENVIRONMENT CALIFORNIA RESEARCH AND POLICY CENTER, MAR 2003
Travis Madsen, Susan Lee, and Teri Olle
[Dangerous Chemicals – Editorial / SF Chronicle, June 11, 2003]
Telephony is a term denoting the technology that allows people to have long-distance voice communication. It comes from the word 'telephone', which in turn is derived from the two Greek words "tele," meaning far, and "phone," meaning speak, hence the idea of speaking from afar. The term's scope has been broadened with the advent of new communication technologies. In its broadest sense, the term encompasses phone communication, Internet calling, mobile communication, faxing, voicemail and even video conferencing. It is ultimately difficult to draw a clear line delimiting what is telephony and what isn't. In its original sense, telephony refers to the POTS (plain old telephone service), technically called the PSTN (public switched telephone network). This system is being fiercely challenged by, and to a great extent yielding to, Voice over IP (VoIP) technology, which is also commonly referred to as IP telephony and Internet telephony.

Voice over IP (VoIP)

Voice over IP (VoIP) refers to a way to carry phone calls over an IP data network, whether the public Internet or an organization's own internal network. One of the primary attractions of VoIP is its ability to help companies reduce expenses, because telephone calls travel over the data network rather than the phone company's network. There are various protocols that can be used to implement IP telephony, including:
- Session Initiation Protocol (SIP)
- Real-time Transport Protocol (RTP)
- Real-time Transport Control Protocol (RTCP)
- Secure Real-time Transport Protocol (SRTP)
- Session Description Protocol (SDP)

What are IP phone systems?

IP phone systems work together with IP phones to send and receive digital voice signals. An IP phone system is composed of three essential parts: IP phones (also known as VoIP phones), an IP PBX (a VoIP private branch exchange), and a connection to a VoIP service provider through a Local Area Network (LAN). IP phone systems work by transmitting telephone calls over the Internet, in contrast to traditional telephone systems, which work over circuit-switched networks. The IP communications solution delivers feature-rich communications built specifically for small offices and enterprises. The key features include:
- Easy-to-manage, plug-and-play functionality that makes adding new users and components quick and easy.
- A single box, which offers unified management, simplified installation, integrated mobility with dual mode and single-number reach, and fewer products to manage.
- A default system configuration that is immediately available.
- Integrated voicemail, including integrated messaging with industry e-mail client applications.
- Power fail-over support.
- Wireless support for voice and data.
- Support for up to three sites.
- Integrated firewall.
- Support for Power over Ethernet (PoE).
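As a rough illustration of the signalling side, the sketch below hand-builds a stripped-down SIP INVITE and sends it over UDP. It is a toy, not a working softphone: the addresses, tags and the PBX endpoint are hypothetical placeholders, and a real deployment would use a full SIP stack plus RTP for the voice media.

```python
import socket

# Illustrative only: a minimal SIP INVITE built by hand and sent over UDP.
# Addresses, ports and tags are hypothetical placeholders (documentation ranges).
def build_invite(caller: str, callee: str, local_ip: str, local_port: int) -> bytes:
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        f"Via: SIP/2.0/UDP {local_ip}:{local_port};branch=z9hG4bK-demo",
        f"From: <sip:{caller}>;tag=demo-tag",
        f"To: <sip:{callee}>",
        "Call-ID: demo-call-id@example.invalid",
        "CSeq: 1 INVITE",
        "Max-Forwards: 70",
        "Content-Length: 0",
        "",  # blank line terminates the header section
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

if __name__ == "__main__":
    msg = build_invite("alice@example.invalid", "bob@example.invalid",
                       "192.0.2.10", 5060)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        # 192.0.2.20 is a documentation-range address standing in for an IP PBX.
        sock.sendto(msg, ("192.0.2.20", 5060))
```

The point of the sketch is simply that SIP signalling is plain, human-readable text carried over the same IP network as ordinary data, which is why VoIP calls can bypass the phone company's circuit-switched network.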
Our body depends on a normal blood pressure range to ensure that all the body's organs and tissues receive an adequate supply of oxygen and nutrients. Blood pressure is the measure of the force that the heart uses to pump blood around the body. Blood pressure is measured in millimetres of mercury (mmHg) and is given by the doctor as two readings:
- Systolic Pressure
- Diastolic Pressure

The first of the two readings, the systolic blood pressure, is a measure of the pressure in the blood vessels when the heart contracts and exerts maximum pressure on the blood vessel walls while pushing the blood out. The second reading, the diastolic blood pressure, measures the pressure in the blood vessels when the heart is at rest, i.e. between contractions. The normal systolic blood pressure is considered by most doctors to be 120, and the normal diastolic blood pressure is considered to be 80. Blood pressure normally increases throughout life, right from infancy to older adulthood. For most adults, regardless of their age, the normal BP range is considered to be 120/80 or less.

What is the Normal Blood Pressure Range?

As mentioned earlier, blood pressure increases with age, beginning from infancy to older adulthood. Since most healthy babies and children are typically not at risk for blood pressure problems, most doctors do not check their blood pressure routinely. But the normal BP range for all adults, regardless of their age, is considered to be less than 120/80.

What is the Normal BP Range For Adults?

The normal BP for all adolescents, adults, and older adults is considered to be 120/80. Adolescents are persons aged between 10 and 19. For this age group, a systolic blood pressure from 120 to 136 and a diastolic blood pressure between 82 and 86 are considered normal. The normal BP range may vary from 126/82 among adolescents aged 11 to 13, to 136/86 among those aged 14 to 16, and 120/85 for adolescents aged 17 to 19. The differences in the normal blood pressure range for this age group are caused by the increase in age, and also depend on the individual's physical activity, diet and weight. As blood pressure increases with age, it is very common for adults aged between 20 and 60+ to show variations in BP at different ages. The chart below outlines the blood pressure that is considered normal in people belonging to different age groups.

Normal BP Range For Adults By Age (table: Age Group | Normal Systolic BP | Normal Diastolic BP)

Understanding the normal BP range with age can help the doctor and you to estimate your cardiovascular health. Blood pressure levels can fluctuate significantly from one reading to the next, and it is important to remember that just one abnormally high reading does not signify that you have high blood pressure. Doctors usually use an average of multiple blood pressure readings taken over a period of several days to arrive at a diagnosis of high blood pressure.

Prehypertension and Hypertension

An adult is considered to have prehypertension if his/her systolic blood pressure is consistently above 120 but below 140, or if the diastolic blood pressure is above 80 and below 90. People with prehypertension are likely to progress to hypertension if they do not make lifestyle changes that help lower their blood pressure. Hypertension is of two types. The first is called Primary Hypertension or Essential Hypertension; this type develops gradually over the years as a person ages, and it has no known cause.
The other type of hypertension is called Secondary Hypertension. This type is caused by various diseases and by a person's lifestyle factors, such as diet, a sedentary lifestyle, alcohol intake and so on. A person consistently showing blood pressure higher than 140/90 over several readings is considered to have hypertension. Doctors advise these people to make effective lifestyle changes to help lower their blood pressure, such as maintaining a healthy weight, including exercise in their daily routine, limiting salt and alcohol intake, and quitting smoking. The doctor will also recommend medication for hypertension depending on how much higher the BP is compared to the normal blood pressure range and on any other health problems that the patient faces.

Causes of Secondary Hypertension:
- Smoking: Nicotine in tobacco triggers the release of adrenaline, which makes the heart beat faster and thus raises the blood pressure.
- Obesity: Being overweight increases the chances that a person will develop hypertension.
- High Salt Intake: Too much sodium in the bloodstream causes higher water retention in the body and causes the blood pressure to increase.
- High Alcohol Intake: Heavy alcohol intake, especially on a daily basis, can be another cause of increased blood pressure.
- Inadequate Physical Activity: Lack of exercise and a sedentary lifestyle are very common causes of secondary hypertension.
- Stress: Heavy emotional or mental stress can also cause the BP to rise.
- Kidney Diseases
- Diabetes: Type 2 diabetes usually causes the narrowing of the blood vessels in the body and causes high BP.
- Family History: A person with a family history of hypertension is more likely to develop hypertension than a person without any family history of hypertension.

What is Normal Blood Pressure for Men and Women?

The healthy blood pressure range for men and women remains the same across all age groups.

Normal BP Range for Females in India – Below 120/80 mmHg and above 90/60 mmHg in an adult female.

Normal BP Range for Males in India – Below 120/80 mmHg and above 90/60 mmHg in an adult male.

However, when it comes to hypertension, it is important to recognise the differences between the two genders. High BP is more common among men below the age of 50 than among women of the same age, and the chances of developing hypertension increase in women, as compared to men, after the age of 55. Hypertension causes complications such as heart attack and stroke, and these complications are less likely to occur in women who have undergone menopause than in men of the same age. When comparing the complication risks of hypertension between men and women aged between 40 and 70 years, it is seen that men are at a higher risk of developing complications than women. These findings suggest that regular BP screening should be conducted for young and middle-aged men once they enter their 20s, and the same applies to women who have passed menopause.

Normal Blood Pressure Range For Children By Age and Gender (table: Age Group | BP range for Males in mmHg | BP range for Females in mmHg)

The normal BP range changes continuously for children throughout childhood; blood pressure is lowest in infancy and steadily increases until the child turns 10. Also, it is important to note that the normal BP is different for girls and boys belonging to the same age group.
As mentioned earlier, children are least likely to suffer from blood pressure problems unless they have an underlying condition such as kidney disease or diabetes, and therefore doctors rarely check BP in children during their regular checkups. Determining the normal blood pressure range in children is a little complicated, and it depends on the size and age of the child. One rule of thumb that doctors use to identify BP trouble in children is that a child is considered to have prehypertension if he/she has a blood pressure higher than that of 90% of children of the same age and size. The child is said to have hypertension if he/she has a blood pressure higher than that of 95% of children of the same age and size.
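Pulling the adult thresholds above together, here is a minimal sketch of a reading classifier. It is illustrative only: it mirrors the cut-offs quoted in this article and is not a diagnostic tool, since diagnosis relies on averages of multiple readings taken over several days.

```python
def classify_bp(systolic: int, diastolic: int) -> str:
    """Classify a single adult blood-pressure reading using the cut-offs above."""
    if systolic < 90 or diastolic < 60:
        return "below the normal range"       # under 90/60 mmHg
    if systolic < 120 and diastolic < 80:
        return "normal"                       # under 120/80 mmHg
    if systolic >= 140 or diastolic >= 90:
        return "hypertension range"           # consistently above 140/90 mmHg
    return "prehypertension range"            # 120-139 systolic or 80-89 diastolic

if __name__ == "__main__":
    for reading in [(118, 76), (132, 84), (150, 95), (88, 58)]:
        print(reading, "->", classify_bp(*reading))
```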
What is Sexual Violence?

As defined by the University's Policy on Sexual Violence and Sexual Harassment, sexual violence is "any sexual act or act targeting a person's sexuality, gender identity or gender expression, whether the act is physical or psychological in nature, that is committed, threatened or attempted against a person without the person's consent". The scope of sexual violence is broad and includes a range of behaviours. For example, it can be physical in nature, such as forced kissing, or it can be non-physical, such as harassing someone because of their gender identity. Sexual violence can occur in private, in public or online, and among any two or more people regardless of their gender or sexuality. A defining feature of sexual violence is the absence of consent. This means that the behaviour has not been discussed or agreed to by all parties, and that there is at least one person in the situation who has not said yes, either verbally or through physical gestures and behaviour, to the act in question. What constitutes sexual violence depends on the circumstances and there are many possible examples. For example, sexual violence can include:
- inappropriate and unwelcome physical contact of a sexual nature
- sexual harassment
- sexual abuse
- sexual assault (including assault by a partner or marital partner)
- indecent exposure
- degrading sexual imagery
- cyber sexual harassment
- stealthing (removing a condom without consent during sex)
Characteristics of resonance
- The contributing structures do not have real existence. They are only imaginary structures, proposed to explain the properties of the molecule. None of these 'resonance structures' can be prepared in the laboratory. Only the resonance hybrid is the real molecular structure.
- Because of resonance, the bond lengths in resonating structures become equal. For example, both O-O bond lengths in O3 are equal, and all the C-C bonds in benzene are equal. In the resonance hybrid of ozone, both O-O bond lengths are equal and intermediate between a single and a double bond. Likewise, all the C-C bonds in the resonance hybrid of benzene are intermediate between single and double bonds, and the bond lengths are equal.
- The resonance hybrid has lower energy and thus greater stability than any of the contributing structures.
- The greater the resonance energy, the greater the stability of the molecule.
- The concept of resonance is theoretical.

Conditions for writing resonance structures
The contributing structures should:
- Have the same atomic positions.
- Possess the same number of unpaired electrons.
- Have nearly the same energy.
- Be written so that a negative charge is present on an electronegative atom and a positive charge is present on an electropositive atom.
- Not place like charges on adjacent atoms.

The resonance structures of a few more molecules and ions are given below.

NO3- ion: The nitrate ion has three possible resonance structures, with the N=O double bond on a different oxygen atom in each.

Resonance energy is the difference between the actual bond energy of the molecule and that of the most stable of the resonating structures (the one having the least energy). For example, the resonance energy of carbon dioxide is 138 kJ mol-1. This means that the actual molecule of CO2 is about 138 kJ more stable than the most stable structure among the contributing structures.

7. Identify the atoms which do not obey the octet rule in the following compounds and draw their Lewis structures: SO2 or SF6?

SO2: Here all atoms obey the octet rule. Oxygen and sulphur atoms have six electrons in their outermost shells. In the Lewis structure of SO2, S is attached to one O atom through a double bond and to the other O atom through a coordinate bond. All atoms have eight electrons in their outermost shells, so the octet rule is obeyed.

SF6: S does not obey the octet rule. The Lewis structure of SF6 gives 12 electrons (6 pairs of electrons) around S. Thus, S does not obey the octet rule.

8. Write and explain the Lewis structure for (I) H2SO4, (II) H3PO4 and (III) BCl3.

(I) The sulphur atom has six valence electrons. The two hydrogens in this compound are present as OH groups bonded to sulphur.

(II) Phosphorus has 5 valence electrons. The three hydrogens are present as three OH groups bonded to phosphorus.

(III) The boron atom (B) has 3 electrons in its outermost shell, and Cl has 7. In the Lewis structure of BCl3, there are only six electrons around B; thus, the octet rule is not obeyed by B.
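Returning to the definition of resonance energy given earlier in these notes, the relationship can be written compactly as follows (a sketch in symbols; only the CO2 value comes from the notes above):

$$E_{\text{resonance}} = E_{\text{bond}}(\text{actual molecule, i.e. the hybrid}) - E_{\text{bond}}(\text{most stable contributing structure})$$

For carbon dioxide, $E_{\text{resonance}} \approx 138\ \mathrm{kJ\,mol^{-1}}$, i.e. the real CO2 molecule is about 138 kJ mol-1 more stable than its best single Lewis structure.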
Rabies is a viral disease of mammals most often transmitted through the bite of an infected animal. The rabies virus infects the central nervous system, and ultimately the brain, causing death. In Mississippi, bats are the primary reservoir for the rabies virus. Bats with rabies continue to be identified in the state and, while human cases of rabies are rare in Mississippi, bats are the primary wild animal responsible for transmitting the disease to humans. This is why any contact with a bat, whether a bite is identified or not, puts an individual at risk for rabies infection. In 2013, 1.84% of the bats tested for rabies in Mississippi were positive, compared to the national figure of 5.87% of all bats tested.

Any mammal can be infected with rabies; however, like human rabies, land animal rabies is rare in Mississippi. Since 1961 only a single case of land animal rabies (a feral cat in 2015) has been identified in our state. In the United States, raccoons, skunks, and foxes are the most commonly identified land animals with rabies. Rabies is not typically seen in rodents such as mice, rats, squirrels, chipmunks, guinea pigs, hamsters, or rabbits. Birds, turtles, lizards, fish and insects do not contract or carry rabies.

Once symptoms appear, rabies is almost always fatal. Symptoms include convulsions and paralysis, followed by death. If you are bitten by or exposed to an animal that carries rabies, early treatment before symptoms appear is important.
- Do not handle or touch live or dead feral animals, or wild animals such as raccoons, bats, skunks, foxes and coyotes that can carry rabies.
- If you see an animal with unusual or aggressive behavior, stay away and contact your local Animal Control officials.
- Vaccinate your dog or cat when it has reached 3 months of age, again one year later, and every three years thereafter (using a vaccine approved for 3-year immunity), as required by state law.

Tips to Prevent Rabies
- Vaccinate dogs and cats against rabies as required by law.
- Keep dogs and cats under control. Roaming pets are more likely to be exposed to rabies.
- Leave stray or unknown dogs and cats alone. Keep pets away from strays, too.
- Leave wild animals alone. Do not keep wild animals as pets.
- Make your property unattractive to wild animals. Cap chimneys and seal off any openings in attics, under porches and in basements. Feed your pets indoors and keep trash cans tightly closed.

If You are Bitten, Scratched, or Have Contact with an Animal
- Obtain the owner's name, address, and telephone number if possible.
- Immediately wash the wound thoroughly, cleaning and flushing with plenty of soap and water for several minutes.
- Get prompt medical attention. Call your family doctor or go to the nearest emergency room.
- You may call the health department (at 601-576-7725, or after hours at 601-576-7400) with questions or to get information about having the animal tested for rabies.

Mississippi state law requires the rabies vaccination to be given by a licensed veterinarian to all dogs and cats over three months of age, again at one year of age, and at least every three years thereafter.
When Clark attended graduate school, his Ph.D. work focused on the roles of sexual selection and flight performance in shaping hummingbird tail morphology. In 2008, he published a paper titled "The Anna's Hummingbird Chirps with Its Tail," which received wide publicity and helped launch his current research focus. Clark's paper described how Anna's hummingbirds (Calypte anna) make a loud sound during courtship displays with their tail feathers rather than vocally. After completing his doctorate, Clark and his advisor, Richard Prum, were awarded a grant from the National Science Foundation to study the physics of the sounds feathers make. Clark traveled to Latin America, where he recorded the courtship displays of a number of hummingbird species that produce distinctive sounds with their tail feathers, including sheartails and woodstars. He then took his research into the lab, where he used a wind tunnel to reproduce the sounds feathers make when the birds are in flight in the wild, and studied how feathers produce sounds over a range of air speeds. [Research supported by National Science Foundation grant IOS 09-20353.]

Credit: Christopher Clark, Yale University
Neptune is the eighth planet and the farthest planet from the Sun. Neptune also has thirteen moons that are known so far. I decided to make a movie of its biggest moon, Triton, orbiting Neptune. Individual images are centered on Neptune. Notice how stars drift up and to the left (north and east) relative to Neptune over time due to Neptune's orbital motion. Triton circles Neptune in a counterclockwise fashion. By analyzing Triton's movement we can determine the period of the moon's orbit around Neptune along with its angular separation. Using this, together with the distance to Neptune, we can calculate the mass of Neptune from Kepler's Third Law as revised by Newton. Specifically, in the first graph we show the measured orientation of Triton with respect to Neptune as a function of time. Approximating this with a straight line, we found the rate of angular motion to be 61.107 degrees per day, corresponding to an orbital period of 5.89 days. In the second graph we show the separation of Triton from Neptune as a function of time in pixels on the camera. A typical value is 11 pixels, corresponding to a linear separation of 308,000 km (using the known distance to Neptune and the plate scale of 1.31 arcseconds per pixel). This would be a good estimate of Triton's orbital size if the orbit were face-on. In fact it is an underestimate, as the orbit is tilted. The computations below show 1) Kepler's third law revised by Newton, 2) how to calculate the orbital period, 3) how to calculate the orbital separation, and 4) how to calculate Neptune's mass by putting these things together.
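A minimal sketch of that calculation, using the measured values above (an approximation: Triton's mass is neglected and the projected 308,000 km separation is used as the orbital radius, so the result understates Neptune's true mass):

```python
import math

G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2

period_s = 5.89 * 86400            # orbital period: 5.89 days in seconds
separation_m = 308_000 * 1_000     # projected separation: 308,000 km in metres

# Kepler's third law as revised by Newton (Triton's mass neglected):
#   P^2 = 4 * pi^2 * a^3 / (G * M)   =>   M = 4 * pi^2 * a^3 / (G * P^2)
mass_kg = 4 * math.pi ** 2 * separation_m ** 3 / (G * period_s ** 2)

print(f"Estimated mass of Neptune: {mass_kg:.2e} kg")
# Prints roughly 6.7e25 kg, an underestimate of the accepted ~1.02e26 kg
# because the tilted orbit makes the projected separation too small.
```

Repeating the calculation with Triton's true semi-major axis of roughly 355,000 km brings the estimate close to the accepted mass, consistent with the tilt caveat above.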
On the playground

A whole-school approach to preventing bullying behaviour on the playground and other non-classroom environments.

An effective approach requires attention to the whole-school environment. Areas such as the playground, hallways, the canteen and toilets are less structured environments than classrooms and usually have lower levels of teacher supervision. As a result, bullying behaviour is more likely to occur in these areas. The way spaces are organised and used can either encourage or discourage positive student behaviour and interactions. Having clear expectations for student behaviour and planning for areas around the school to be well supervised can also minimise the likelihood of bullying behaviour. It is important to consider whether there are areas in the school where bullying is more likely to occur. Teachers and students working together to identify such areas and planning together to address any issues helps to build a good school culture and a positive school climate. All school community members should have the opportunity to identify bullying issues and provide input into how they can be effectively addressed. More information can be found in the NSW Department of Education's literature review, Anti-bullying interventions in schools - what works? (PDF 4741.34KB).

Steps to create safe environments

Consider the following steps in developing safer non-classroom environments.

Step 1 - Identify what is currently happening

Use data already available in the school to identify patterns of problem behaviours, such as bullying, and the locations where these behaviours occur. This could include incident reports, behaviour referrals, teacher observations and anecdotal accounts. Staff and student surveys can also be used to provide information about perceptions of school climate, safety, bullying behaviours and location 'hot spots'. See Mapping bullying behaviour below to see how students can contribute. In some cases, anonymous surveys may provide more accurate information.

Step 2 - Identify contributing factors

Identify factors that may be contributing to problem behaviours, such as the physical structure of buildings making supervision difficult, a lack of appropriate activities and equipment, staff not actively supervising areas, too many students in one particular area, or unclear boundaries.

Step 3 - Consider improvements

Which physical factors can be modified?

Making changes to the physical environment can make areas safer for students. For example, consider:
- altering the boundaries of areas to improve safety and supervision
- increasing staff on duty in areas identified as problematic
- staff moving around identified 'hot spots' and engaging with students (such as speaking with students, addressing inappropriate behaviour quickly)
- increasing equipment, games or activities available for students
- providing students with more supported activities at recess and lunchtime (such as supervised games, access to the library or organised activities).

Revise the rules and expectations

Collaborate with staff, students and parents to revisit rules and expectations of student behaviour in the playground. Rules and expectations need to be clearly defined and communicated widely.
Students need to be explicitly taught expected behaviours, including routines such as transitioning between different areas of the school, noise levels, lining up before entering classes, and accessing and using playground equipment. Display reminders of expected behaviours in all areas (such as the playground, hallways, canteen, toilets and classroom areas). Actively support and teach students the skills to keep themselves and others safe.
Teach students strategies
Teach students strategies to use, if they feel safe doing so, to try to stop the bullying. Also teach students why and when it is best to walk away and get help. For more information about strategies students can use, visit Bullying. No Way!. Teach students when and how to report bullying. It is important that students are taught how to seek help and what they can expect from the school when they ask for help.
Support individual students
If a student is being bullied or is bullying others, the school should have clear and consistent processes of support such as:
- Provide safe and highly supervised areas during school breaks for children who feel vulnerable (such as the library, specific areas of the playground, organised activities).
- Refer students to the school learning and support team or wellbeing team, where appropriate.
- Use teaching and learning programs to develop students' communication, social, resilience, assertiveness and coping skills.
- Use buddy systems to increase social connections and friendships.
- Increase supervision of students at particular times or in particular places.
- Apply evidence-based, responsive approaches such as direct sanctions, restorative practices, mediation, the support group method and the method of shared concern.
Active supervision strategies in out-of-classroom areas
All school staff have a duty of care to take reasonable measures to protect students from injury or harm. Teachers being observant and responsive in all areas of the school helps to maintain a positive school climate. Students are less likely to engage in inappropriate behaviour if they can see that staff are vigilant in monitoring the physical environment, address inappropriate behaviour, and take action when required. Staff can help organise and supervise games during breaks and praise prosocial, inclusive behaviour. Providing a range of games or activities will help ensure that students with different needs and skills can join in. School leaders should regularly engage staff in conversations about their responsibility to provide active supervision in the playground, monitor 'hot spots' of student bullying and interact with students. This includes:
- Being visible and responsive to situations in the playground.
- Engaging in positive interactions with students.
- Constantly scanning and proactively intervening to avoid potential problems.
- Reminding students about expectations, rules and responsibilities.
- Managing inappropriate behaviour quickly and consistently.
- Ensuring the correct use of playground equipment.
- Encouraging a clean, safe environment.
- Implementing the anti-bullying policy consistently.
Step 4 - Monitor and review
Regularly collect behaviour data, such as the number of reports to teachers, to evaluate the effectiveness of the whole-school organisational procedures for the playground and respond quickly to identified concerns. Also look for opportunities to celebrate good student behaviour.
Mapping bullying behaviour To identify locations where students may be bullied, walk around the playground with the class or a small group of students. Each time students get to an area where bullying may happen, they should say ‘stop’. Students can record the information on a school map as they walk around. Return to the classroom and have students work in groups of four to discuss the findings. Ask students: - What is it about the areas we identified that make us feel unsafe? - What are some ideas on improving these areas? Have students look at their maps of the school and put a circle around the areas they feel the safest and have the most fun. Do they have different places that feel safe? Why? Have students also look at their maps of the school and put a circle around the areas where they feel least safe and tend to avoid. Why do they feel less safe in these areas? Use this information as a basis to consider improvements to school practices, with the input of students. Some of this content has been adapted and reproduced with permission from Bullying. No Way!
Finding The Big Picture to Make A Mark on Their World. Students coupling what they know with their interests to make language a meaningful and reciprocal part of their lives. We are here, in class, right now. We are speaking Spanish. We are writing in Spanish. We are listening to Spanish. We are reading Spanish. Now what? What is the bigger picture? It must be bigger than the Excel vocabulary spreadsheet hidden in binders. It must be bigger than memorizing grammatical structures in order to maintain a GPA. Well, at least I hope the picture is bigger than that. What step do we take next? We are here, now what? Imagine a bullseye; the center is the comfort zone for students. What I have seen in my classroom is students trying to hit the bullseye. Which is great: I am able to see students succeed at something they are comfortable doing. But learning begins to cease. And instead of development, I see students playing it safe. Students stay in their comfort zone. When asked, "What did you do this weekend?" a student's response may be, "I slept." Really, I slept. That is it? Certainly you did more than that. Use your words, I want to hear more! "Have you ever seen a wild animal?" — "Yes, one time a lizard." Come on! Tell me more! What kind of lizard? Where were you? What were you doing? What happened? As a teacher of Spanish 4, I know that my students are capable of so much more. The other area of the bullseye is the outer edge: the danger zone, where students start to wander too far from their comfort zone. When students start to lean on translators and websites, they distance themselves from learning. I see them getting closer and closer to not hitting the target at all and sailing right by. I need to find a place for students to move outside of their comfort zone and stay away from danger. So, here is my question: how do I build student confidence in using their own words and knowledge? In reading Vocabulary Their Way, I found an intriguing quote: "Our students learn vocabulary their way when they are focused on a topic of keen interest, trying to figure out the what, why and how of it all." One word jumped out at me — their. This quote has changed my view on gaining student interest: to move away from the numerous vocabulary lists that I have helped create and grant students some control. In using their words for unit design, students will have increased focus, invested interest, and a desire to figure it out. Right? Why will I do this? Because my students are capable of so much more. They are capable of communication! So why not take a twist? Why not hand over some of the control to them? When we learn English, we certainly don't have lists of words to memorize. Does it really matter which words magically make it to the list? OR is communication, meaningful communication, the bigger picture? Once they are able to have meaningful communication, they will be more apt to venture out into the real world and spread their wings. In searching and searching and racking my brain about vocabulary lists, I have come across a few ideas to help construct what I imagine as the "perfect vocabulary list." Is it out there? Can this really exist? Sometimes I feel that the current list inside of the binder, the one that some students don't even fill out, may only catch glimpses of sunlight. I am on a quest to find this list that is natural and connects to students. Good-bye, meaningless lists of words. Hello, natural. Hello, thriving students wanting to learn and communicate in the real world. Templeton, Shane, et al.
Vocabulary Their Way: Word Study with Middle and Secondary Students. Pearson, 2010.
What Is a Patent? A patent is a right granted to an inventor by the federal government that permits the inventor to exclude others from making, selling or using the invention for a period of time. The patent system is designed to encourage inventions that are unique and useful to society. Congress was given the power to grant patents in the Constitution, and federal statutes and rules govern patents. The U.S. Patent and Trademark Office (USPTO) grants patents for inventions that meet statutory criteria. The following provides a general overview of what a patent is. There are three different kinds of patents: utility patents, design patents and plant patents. - Utility Patents: The most common type of patent, these are granted to new machines, chemicals, and processes. - Design Patents: Granted to protect the unique appearance or design of manufactured objects, such as the surface ornamentation or overall design of the object. - Plant Patents: Granted for the invention and asexual reproduction of new and distinct plant varieties, including hybrids (asexual reproduction means the plant is reproduced by means other than from seeds, such as by grafting or rooting of cuttings). Determining What is Patentable: The Basics For an invention to qualify for a patent, it must be both "novel" and "non-obvious." An invention is novel if it is different from other similar inventions in one or more of its parts. It also must not have been publicly used, sold, or patented by another inventor within a year of the date the patent application was filed. This rule reflects the public policy favoring quick disclosure of technological progress. An invention is non-obvious if someone who is skilled in the field of the invention would consider the invention an unexpected or surprising development. Naturally occurring substances and laws of nature, even if they are newly discovered, cannot be patented. Abstract principles, fundamental truths, calculation methods, and mathematical formulas also are not patentable. A process that uses such a formula or method can be patented, however. For example, a patent has been granted for an industrial process for molding rubber articles that depends upon a mathematical equation and involves the use of a computer program. A patent cannot be obtained for a mere idea or suggestion. The inventor must have figured out the concrete means of implementing his or her ideas in order to get a patent. A patent also will not be granted for an invention with no legal purpose or for an unsafe drug. An inventor applying for a utility patent must prove that the invention is useful. The invention must have some beneficial use and must be operable. A machine that will not operate to perform its intended purpose would not be called useful, and therefore would not be granted a patent. A useful invention may qualify for a utility patent only if it falls into one of five categories: a process, a machine, a manufacture, a composition of matter, or an improvement of one of these. A process is a method of treating material to produce a specific physical change in the character or quality of the material, generally an industrial or technical process. A machine is a device that uses energy to get work done. The term manufacture refers to a process in which an article is made by the art or industry of people. A composition of matter may include a mixture of ingredients or a new chemical compound. An improvement is any addition to or alteration of a known process, machine, manufacture, or composition. 
Examples of Patentable Items These categories include practically everything made by humans and the processes for making the products. Examples of things that are patentable include: - Computer software and hardware; - Chemical formulas and processes; - Genetically engineered bacteria, plants, and animals; - Medical devices; - Furniture design; - Fabrics and fabric design; and - Musical instruments. Applying for Patent Protection Unlike a copyright, a patent does not arise automatically; an inventor must apply for a patent. The inventor must apply within one year of publicly disclosing the invention, such as by publishing a description of the invention or offering it for sale. An inventor, or his or her attorney, generally makes a preliminary patent search before applying for a patent to determine if it is feasible to proceed with the application. The application and a fee are submitted to the U.S. Patent and Trademark Office, where it is reviewed by a patent examiner. If a patent is granted, the inventor must pay another fee, and the government publishes a description of the invention and its use. Only a patent attorney or patent agent may prosecute patents before the PTO. Before a person may be licensed as a patent attorney or patent agent, she must have a degree in certain technical or scientific fields. Utility and plant patents last for 20 years from the application date; design patents last for fourteen years. If the owner of a utility patent does not pay maintenance fees, the patent will expire earlier. After a patent expires, the invention becomes public property and can be used or sold by anyone. For example, after the patent on Tylenol expired, other pharmaceutical companies began producing a generic version of the drug. If an inventor thinks someone has used his or her patented invention without permission, he or she may bring a lawsuit against the infringer. If the court agrees, it may award the patent holder costs, attorney's fees, damages in an amount equal to a reasonable royalty, and an injunction (an order prohibiting another person from infringing the patent). An action for infringement can be time-consuming and costly, so infringement cases often are settled. Patent Law is Complicated: Contact an Attorney If you have an invention that you would like to have protected, it's a good idea to get acquainted with patent law and intellectual property law in general. With a patent, you can license to other companies or go into business yourself; but failure to properly register your patent can end your dreams. Make sure you contact a patent law attorney if you need legal assistance patenting your novel invention. See FindLaw's Patents section for extensive coverage of this topic.
Women's Suffrage Cartoon: a personification of Votes for Women. Credit: 1915 cartoon by Hy Mayer; public domain image, originally published in Puck, February 20, 1915 - courtesy Library of Congress.
The 19th Amendment, granting suffrage to women, was ratified in 1920. It was some thirty years previously, however, that Wyoming had entered the Union as the first state to grant women full voting rights. The next eight states to grant full suffrage to women were also Western states: Colorado (1893); Utah and Idaho (1896); Washington (1910); California (1911); and Oregon, Kansas, and Arizona (1912). Why was the West first? Can students explain with a unified theory why Western states anticipated the rest of the nation by so many years on this issue? Or did "women's suffrage succeed… in the West for reasons as diverse as the people and places of the West itself?" Focused on efforts in support of women's suffrage in Western states, this lesson can be used either as a stand-alone unit or as a more specialized sequel to the EDSITEment lesson, Voting Rights for Women: Pro- and Anti-Suffrage, which covers the suffrage movement in general. The latter lesson also contains activities and resources for learning how the movement to gain the vote for women fits into the larger struggle for women's rights in the nineteenth and early twentieth centuries. Why were the Western states the first in the nation to grant full voting rights to women? After completing the lessons in this unit, students will be able to answer that question. Establish an anticipatory set by sharing with the class the list Suffrage Firsts, available on the EDSITEment-reviewed website Women of the West Museum. If possible, allow students to view the online interactive map Map: Woman Suffrage on Women in American History, an online exhibit of Britannica.com, a link from the EDSITEment resource Internet Public Library. It demonstrates quite dramatically the progress of full voting rights for women. Another option is to share the timeline Voting Rights in America on the EDSITEment-reviewed website Women of the West Museum. Do students have any theories about why women would achieve the vote in the West first? Distribute to students the poll "How the West Was First: Why Did Suffrage Succeed?", on page 1 of the PDF, which lists a number of possible hypotheses. Students can complete the poll independently, or the teacher can lead the class through it. Which reasons were most frequently cited by students as the most likely theories? Share with your students the following texts illustrating connections between suffrage movements in the West and those in the Northeast: Susan B. Anthony campaigned vigorously in the West, giving many lectures. The following excerpt from Albina L. Washburne, "Annual Meeting, American Woman Suffrage Association: Colorado Report," Woman's Journal, 7 (October 7, 1876), pp. 327, 328, available on the EDSITEment-reviewed website Women and Social Movements in the United States, indicates some of the cooperation that took place. The introduction to the excerpt explains: The Colorado Woman Suffrage Association, founded in 1876, gave organizational shape to suffrage sentiment in the state. The state association affiliated with the American Woman Suffrage Association (AWSA), one of two national organizations dedicated to achieving votes for women. The AWSA, which began in November 1869, sought to pass state laws granting women the right to vote, making it the logical affiliation for the Colorado Woman Suffrage Association.
American Woman Suffrage Association (AWSA) was led by Lucy Stone with the aid of her husband Henry Blackwell, Mary Livermore, Julia Ward Howe, Henry Ward Beecher, Antoinette Brown Blackwell, Thomas Wentworth Higginson and others; it endorsed the Fifteenth Amendment while working for woman suffrage as well. You may also wish to share this excerpt from Washburne's 1876 report with students: Previous to last year there had been very little agitation on the subject of Woman Suffrage in Colorado, and a few of us waiting ones were glad to receive a visit from Mrs. Margaret W. Campbell, of Massachusetts, a tried friend and worker in the Suffrage cause, who arrived in Colorado about the middle of November, 1875. Anxious to avail ourselves of her valuable assistance we Suffragists, then scattered and unknown to each other, gave her a warm welcome and proceeded to agitate a little, and feel the public pulse. Mrs. Campbell lectured in nearly all the principle towns of Colorado, finding many interested, and devoting herself untiringly to presenting the claims of Woman to legal rights, to the popular comprehension, when a call was made for a Convention of the friends to be held at Denver, January 10th, 1876, which was responded to by a few from a distance, and others more numerous from the city. Four sessions were held, an organization effected of the Colorado Woman Suffrage Association… How did the movement in Colorado benefit from this contact with an activist from the Northeast? Share with the class the Introduction to the West on the EDSITEment-reviewed PBS website New Perspectives on the West. It introduces the landscape, myth, and history of the West. What attracted people to the American West? What experiences of the West do your students have (from watching Western movies or visiting the Grand Canyon, for example)? What values have become associated with the exploration and settlement of the vast landscapes of the West? Does the myth of the West, as discussed in "Introduction to the West," help explain why women achieved full voting rights there first? To help students explore this question, briefly share some or all of the following images from the EDSITEment resource American Memory that exemplify the myth of the West. As you show the images, ask students to jot down one to three words they associate with each. Discuss each image briefly and allow students to add a word or two: Ask students, now working in small groups, to share their lists and then attempt to come up with a statement describing the myth of the West that uses some of the list words, and especially those that were repeated. Reconvene in a whole-class setting and share descriptions. If desired, choose one group's definition (or use ideas from various groups) to stand as a class statement on the nature of the myth. Are there aspects of this myth that help explain why women got full voting rights in the West first? On the other hand, are there aspects of the myth that seem to contradict the fact that women's suffrage came to the West first? Who came to the West and why? In this activity, students explore the various motivations of those who migrated to the American West. Might the motivations of those who migrated to the West help us to understand the region's early granting of voting rights to women? As you share the following with the class, ask students to invent a character (who, for example, could be in a realistic historical fiction work about the settling of the West) inspired by the materials presented. 
Every character should have a name, age, reason for coming West, a home place, and a brief story to tell about him or herself: When you are finished reviewing the material, have each student—in character—share the basic information about him or herself. Then ask the "character" to state an opinion about voting rights for women. Students should be ready, if asked, to provide evidence--either details from a character's story or reasoning and inference--supporting the likelihood that their character would hold such an opinion. As a class, discuss and collate the results of presentations of individual characters. Discuss the central question of whether women won voting rights in the West because of the nature of those who wanted to migrate to that region. (Be alert not only to details that seem to support the region's openness to women's suffrage, but also to details that make the early granting of the vote to women seem surprising.) Is the answer connected to the nature and experiences and characters of the women who came to the West? In this activity, students—working individually or in pairs—will learn about a pioneer woman and compose a free verse poem that highlights the details of her life. As a model, share with students the poem "Lucinda Matlock" by Edgar Lee Masters, available on the EDSITEment resource The Academy of American Poets. Though not about a pioneer woman of the West, this poem gives the details of an entire life history in a few lines. Read the poem aloud in class. After the first reading, ask students what "jumped out at them" from the poem. Now distribute copies of the poem to students and have a volunteer give a second reading. After the reading, ask students to point out concrete details from the life of Lucinda Matlock. Point out to students that the poem does not rhyme—a key characteristic of free verse. Now assign subjects to students from the following list or other sources: Students should compose poems from what they learn about their subject. When students have finished writing, conduct a classroom reading of the poems. Having heard all of them, students should identify any commonalities that exist among these women. Can their personalities and experiences explain why women in the West were the first to be granted full voting rights? Share the following graphics regarding suffrage movements in three states, available on the EDSITEment resource Women of the West Museum: Do these graphics give the impression that the motivations behind the various suffrage movements were similar or different? Students will explore this question working in small groups to research eight of the first nine suffrage states. Divide the class into eight groups. Students should use the following articles and graphics from the EDSITEment-reviewed website Women of the West Museum, as well as other available classroom and/or library resources, to research eight Western states. (NOTE: Click on the state name for a summary article. Additional links to graphics and biographies are provided in each article.) Once students have completed their research, each group should present its findings to the class. If desired, have students fill in the table "Is It Something Particular for Each State?", on page 2 of the PDF as groups present their information. After all groups have made their presentations, reconsider the poll results from Activity 1. Would students' answers change now? Should other hypotheses be added to the poll? Which hypothesis would be most frequently chosen now as the most likely? 
Is there a unified theory for why the West gave women full voting rights first?
Estimated time: 6-8 class periods
The eastern newt is reported to have the most variable life history of all North American amphibians (9), with most populations having four main life stages: egg, larvae, eft and adult (2) (3) (4) (7) (9). While the adults and larvae are aquatic, the intermediate eft stage is typically terrestrial (2) (3) (4) (9). Efts are both diurnal and nocturnal, and are known to be more active on rainy days or nights when the ground is moist (9). The eastern newt is carnivorous at all stages of its life (9). Feeding at night (9), the larvae of this species feed on whatever is most accessible (3), including snails, beetle larvae, clams, mites and crustaceans (2) (3) (9). Eastern newt larvae have also been reported to occasionally eat algae (3). Adult eastern newts use vision and chemical cues to locate prey that can be swallowed whole (9), and eat small aquatic invertebrates (3) such as molluscs, crustaceans, mayflies, worms and leeches (2) (3) (7) (9). In addition, the eastern newt feeds on the eggs and larvae of other amphibians (2) (3) (7) (9), as well as small fish and fish eggs (9). Eastern newt efts also feed on a variety of invertebrates (2) (3). As a means of avoiding predation, the eastern newt produces toxic secretions from special glands in its skin (2) (3) (6). This toxin is present during all life stages (3), but efts tend to be more toxic than adults (2). Interestingly, the eastern newt carries out a rather spectacular warning display known as the ‘unken reflex’, which involves the amphibian closing its eyes and retracting them inwards before bowing its head and tail upwards so that they almost meet. In this posture, the brightly coloured belly is exposed (3) (9), which warns potential predators of its toxicity (2) (3) (9). Turtles, snakes and large frogs tend to be the main predators of adult eastern newts (2) (3), while raccoons and certain hawk species are known to avoid them (3). Reproduction in the eastern newt is aquatic (9). The timing of breeding in this species varies with location, usually occurring during the winter and spring (2). The male eastern newt approaches a female and performs a short display which involves undulating his body and tail. If the female is receptive, it will nudge the male’s tail with its snout. This encourages the male to deposit a spermatophore, which the female then picks up with her vent (3). If the female is unresponsive, the male may grab her in amplexus, and fan its tail to waft secretions through the water toward the female (2). This may last several hours before the male dismounts and deposits a spermatophore for the female to pick up (2). Egg laying in the eastern newt occurs in the spring in many parts of the species’ range (3) (4) (9), but may start in early winter in more southerly populations and carry on into July in northern populations (9). The female eastern newt deposits each egg individually (2) (3) (9), and lays several eggs per day (2) (9) over a period of a few weeks (3) (9), laying between 200 and 375 eggs in total (9). Each egg measures about 1.5 millimetres in diameter (9), and is attached to aquatic plants or other submerged vegetation (1) (2) (3) (4). The eggs incubate for a period of between three and five weeks (2) (3) (9) depending on the water temperature (3), after which time the larvae hatch out (2) (3) (9). The length of the larval stage of the eastern newt varies across this species’ range, but usually lasts between two and five months, after which time it transforms into an eft (9). 
Efts migrate away from aquatic habitats to live in forested areas (9), and may spend up to seven years in this stage, although in some areas transformation occurs within two years (2). Efts then migrate from their terrestrial habitats back to aquatic habitats where they become sexually mature and breed (4) (9). In some populations, there is no eft stage, and the larvae develop directly into adults. These individuals are known as neotenic adults (5) (9). Most eastern newts are thought to live for between 3 and 8 years, although they may live for up to 15 years (9).
When NASA's next rover, now dubbed Curiosity, arrives at Mars on 6 August, its prime target will be the base of Mount Sharp, the 5-kilometer-high mound of sediments in the middle of Gale crater. Mars researchers have no idea how those sediments got there. But a pair of geologists is now suggesting that at least the top third of Mount Sharp is volcanic ash that fell out of the sky surprisingly early in Mars's history. The so-called Medusae Fossae Formation (left) covers a third of the martian equatorial region, some patches of it being near Gale crater. Orbital imaging suggests it is ash from massive eruptions. It bears a striking resemblance to layered deposits high up Mount Sharp (right), the researchers note online today in Science. By counting accumulated impact craters, the team has also found that the two deposits were laid down at about the same time, 3.8 billion years ago. If they are indeed one and the same deposit, Curiosity could probe beneath the thin coating of dust that obscures all of the deposit and confirm its true nature. That's assuming the rover survives long enough to range far up the mysterious mound.
It is with this quote, and this particular lens on memory, that we began our first major assignment in English class. The quote comes from the short story "Funes the Memorious," by Jorge Luis Borges. Borges, a staple of Argentine culture, is famous for being a pioneer of magical realism, a truly South American genre of literature, and for exploring the themes of memory, dreams, and infinity. The goal of this lesson had two parts. The first was to help students truly understand Borges' complex theme, in which the thought process of a character with a perfect memory is explored through conversation. The character's memory is so perfect that he can create new symbols for every number, learn languages from simply reading a book, and reconstruct entire days perfectly in his mind. However, after much conversation, it becomes apparent that, according to our narrator, "he [was] not very capable of thought. To think is to forget differences, generalize, make abstractions." In true Borges fashion, the perfection, memory, becomes an infinite perfection, leading the reader to ponder the many possibilities and tragedies associated with an infinite memory. In doing so, Borges leads readers to consider how we should remember our pasts, with what type of lens and with what level of perfection. This question is one that we will constantly reconsider as we learn about Argentine culture and its past through literature. "My memory, sir, is like a garbage heap." The second goal was to help students become better readers and writers of complex sentences. Borges used very complex sentence structures throughout his work to emphasize his themes and create tone, and although these sentences make his work difficult to read, they created a perfect learning opportunity for us. To achieve these learning goals we set out to write our own Borges-style stories, stories in which a narrator comes into contact with a character with an ironically tragic perfection. Additionally, to prove their mastery of complex sentence structures, students were required to include examples of sentences using appositive phrases. Lucky for us, the actual café where Borges used to write is only four blocks from our school! How could we pass up an opportunity to write our stories in the same setting as the famous author himself? The café, La Biella, even has a life-size statue of Borges sitting in the café. We began our first day at the café with a conversation on why Borges may have used so many long sentences in his work. Students were able to determine that it must have something to do with his theme. They worked through the first paragraph, noticing that he uses repetition of the word "memory," a word that, he states, he has "no right to utter." Why would he repeat this? After throwing around some ideas and getting some input from me, they were able to conclude that he does it to reveal that the narrator has been affected in some way by his meeting with Funes, the character with perfect memory. The narrator is attempting to capture Funes' style of memory within his own narration of this meeting. Also, the narrator uses long sentences with appositive phrases as a structure to include as many details as possible, similar to the way in which Funes may have remembered events. Additionally, students noted that Borges creates a "dramatic" tone with his constant use of punctuation to break the sentences' flow.
After some practice with appositive phrases, we set to work on our own stories, stories that began, as Borges did, with "I ___ him (I have no right to utter this sacred verb, only one man on earth had that right and he is dead)," and ended by revealing the tragedy of this perfection by stating, "I suspect, though, he was not very capable of ___." Below you will find links to the original, in addition to the student-voted best stories from the 9th and 10th grade classes. In their own way, the students themselves are doing as Borges suggests: by forgetting his original story and molding their own stories to fit his format, they are thinking. They are forgetting the original and making a new meaning from the memory of his work. I hope you like them as much as I did.
For the first time ever, a scientific study has estimated the percentage of species that will go extinct if we disturb all the remaining tropical forests on Earth, and the results are higher than expected — around 40% of species (although it varies by group) will be lost if humans continue to destroy or degrade the Earth's remaining tropical forests. Although there has been a growing awareness among scientists and even among the mainstream media that we are now entering the Earth's 6th Mass Extinction Event, until recently there has been little scientific analysis of how bad this extinction event is going to be or how close we are to implementing measures to avoid it. This study, performed by a scientist at Macquarie University in Sydney, used real-world data on species richness (i.e. the number of species) for 11 groups of organisms (trees and 10 groups of animals, from different insect types to large and small mammals) at 875 locations around the world. He then plotted their species richness (i.e. total number of species) against the percentage of habitat disturbed (e.g. logged, burned or otherwise disrupted). He found that if all remaining habitat is disturbed, we are likely to lose 30% of tree species and 8 to 65% of species in the 10 animal groups. These results are even worse than expected, since disturbed habitats often still contain forest in a damaged state — which the author thought would harbor a higher abundance of life. Above: A graph showing the last 5 Mass Extinction Events (large font) over the past 0.5 billion years from the perspective of marine biodiversity. Extinction intensity refers to the percentage of genera (each genus is a group of species) that were completely wiped out (i.e. not a single species made it through the extinction event). Image licensed under Creative Commons (CC BY-SA 3.0). The author also regards his estimates as conservative, because he did not take into account factors beyond habitat destruction, such as hunting, and he raised the possibility that a mass extinction event could already have happened without our even knowing it: "A mass extinction could have happened right under our noses because we just don't know much about the many rare species that are most vulnerable to extinction," Alroy said. "To figure out whether this is true, a lot more field work needs to be done in the tropics. The time to do it is now." The reality is that we aren't able to assess this possibility, because we still don't have a good grasp of the total biodiversity of tropical forests or of planet Earth (to date, we have identified 1.75 million species out of an estimated 3 to 100 million species on Earth). This is all the more reason to do everything in our power to avoid or greatly minimize the impending 6th mass extinction event, since there is still so much of this world's biodiversity that is unexplored or that we know very little about — and it should be protected both for the sake of maintaining a healthy ecosystem on Earth and for the future benefits it could bring mankind. An increasing number of scientists and activists have been supporting a new plan that has been fashioned to deal directly with the threat of the 6th global extinction.
The plan, known as "Half-Earth" or "Nature Needs Half," involves protecting half the Earth's habitat in order to save upward of 80% of all species, and respecting the rights of indigenous and local communities to effectively manage large areas of natural habitat (which they are already doing, though they often face opposition from governments that don't respect their rights). Recent scientific analysis has also shown that for many parts of Earth's surface this goal is still realistically within reach. Essentially, the world's governments and communities are currently adding protected areas (including community-managed reserves) at a rate of 4% every decade, but we would have to double this pace to reach the 50% protection goal by 2050. We are also gaining increasingly sophisticated technologies (e.g. Global Forest Watch) to monitor the world's forests and oceans. However, achieving protection of the world's habitats is going to require far more than just new technologies, new laws or new protected areas. It is going to require us to step outside of our comfort zones and change our way of thinking about conservation and how we relate to one another — most especially by supporting indigenous and local community management and ownership of natural areas, which is already successfully occurring in some parts of the world but hasn't truly taken off yet. Ultimately, we need to start seeing humans as the solution and not just the cause of the 6th extinction crisis. We are an incredibly influential species, but there is also an incredibly large number of people around the world who are committed to protecting their local environment while making a sustainable livelihood. We need to recognize and support their right to do so—maybe acknowledging and supporting the other half of humanity will help us save half of nature too.
Alroy, J. (2017). Effects of habitat disturbance on tropical forest biodiversity. Proceedings of the National Academy of Sciences of the United States of America. doi:10.1073/pnas.1611855114
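As a rough check on the pace-versus-goal arithmetic above, the sketch below projects protected-area coverage forward linearly. Only the 4%-per-decade pace comes from the post; the starting coverage of about 25% (formal protected areas plus community-managed land) and the reading of "4%" as four percentage points of the Earth's surface per decade are assumptions made purely for illustration.

```python
def projected_coverage(start_pct: float, points_per_decade: float, years: float) -> float:
    """Linear projection of protected-area coverage, in percent of the Earth's surface."""
    return start_pct + points_per_decade * years / 10.0

START_PCT = 25.0      # assumed current coverage (protected areas + community-managed land)
YEARS_TO_2050 = 33    # roughly 2017 to 2050

current_pace = projected_coverage(START_PCT, 4.0, YEARS_TO_2050)   # ~38%
doubled_pace = projected_coverage(START_PCT, 8.0, YEARS_TO_2050)   # ~51%

print(f"At 4 points per decade: {current_pace:.0f}% protected by 2050")
print(f"At 8 points per decade: {doubled_pace:.0f}% protected by 2050 (close to the 50% goal)")
```

Under these assumptions the current pace falls well short of the target while the doubled pace lands close to it; a different baseline shifts the exact numbers, but the qualitative point about doubling the pace stands.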
A blood clot develops when proteins in your blood bind together blood cells, or platelets, that have stuck to one another, forming a solid mass. When cuts or scrapes occur, these clots are beneficial, but clots inside your blood vessels can block your circulation. Those forming in your arteries or heart can halt vital blood flow and cause a heart attack. If a clot clogs your brain's blood vessels, a stroke may follow. Prolonged inactivity, pregnancy, and dehydration can increase your risk of deep vein thrombosis (DVT), in which platelets adhere to damaged blood vessels in a vein deep inside your body. Most often, DVTs impede blood flow in the lower legs and thighs, causing pain, swelling, and reddish, warm skin. An embolism is a clot that breaks away, moves to a different body area, and blocks blood flow to one of your major organs. Severe damage and death are possible. Like over 2 million patients, you may need a daily blood thinner to break up harmful blood clots, stop them from enlarging, or prevent their development. Lifesaving treatments include antiplatelet drugs like oral clopidogrel (the generic form of Plavix). Prefilled Lovenox (enoxaparin) syringes contain an anticoagulant. Many people are concerned about how these maintenance prescriptions, which increase bleeding risks, will impact their lifestyles. The Agency for Healthcare Research and Quality recommends the four BEST ways to make taking your blood thinner a safe and easy daily habit:
Be more careful.
Eat the right foods.
Stick to your medication routine.
Test your blood regularly.
1. Be More Careful: Make Personal Safety a Priority
Extra caution is crucial because various work and household duties, hobbies, and sports may lead to accidents that can cause bleeding. To prevent indoor injuries:
- Handle sharp objects like scissors and knives carefully.
- Replace a razor that has sharp blades with an electric one.
- Use a soft-bristle toothbrush.
- Choose dental floss with a wax coating.
- Avoid toothpicks.
- At home, wear house or street shoes to avoid falls.
- Cut your fingernails and toenails carefully.
Prevent outdoor injuries by:
- Wearing closed shoes.
- Handling sharp tools with gloved hands.
- Using a protective helmet when bike riding.
- Avoiding activities that might cause injuries.
- Wearing protective gloves and sturdy shoes for gardening and yard work.
Continue favorite hobbies if you protect yourself from accidents. Typical safe activities include walking and swimming. Get your doctor's approval before starting any new physical activities. Consider wearing a medical alert necklace or bracelet. It will advise health care professionals that you're on a blood thinner if you sustain an injury that leaves you unable to speak.
2. Eat the Right Foods: Modify Your Diet
Some foods and beverages can reduce or increase the blood-thinning properties of your antiplatelet or anticoagulant drug. Your doctor can advise which dietary choices to cut back on or eliminate based on your specific medication. Reduce or avoid green tea, alcohol, cranberry juice, and cranberries.
Vitamin K: This fat-soluble vitamin supplies proteins that are necessary to form blood clots, so high amounts may counteract your medicine's benefits. The recommended daily vitamin K amount is 65 to 80 mcg for adults; exceeding that quantity may work against your medication. Keep your limited portion size and frequency consistent, for example every Sunday, Tuesday, and Thursday dinner. Cooking may increase vitamin K content.
Reduce your consumption of these foods with medium to high vitamin K levels:
- Brussels sprouts
- Greens, including collard, turnip, and mustard greens
- Green leaf lettuce
- Green onions
Safe low-vitamin-K alternatives: Enjoy healthy vegetables including iceberg lettuce, tomatoes, carrots, cauliflower, peppers, cucumbers, squash, potatoes, and sweet potatoes. The salicylate content in paprika, curry, thyme, rosemary, pickles, ketchup, and mustard thins your blood by blocking vitamin K. Ingesting this chemical with some medications multiplies their blood-thinning power.
Omega-3 fatty acids: Despite the heart-health benefits of fatty, cold-water fish like salmon, halibut, mackerel, and sardines, omega-3s can increase your bleeding risk. Other sources are walnuts, flaxseeds, pumpkin seeds, soybeans, and oils including canola, flaxseed, and soybean oil.
Other safety measures: Consult your doctor before making any significant diet or weight-loss changes. Contact him if you can't eat for multiple days or you contract a fever, infection, or the flu. Also call whenever diarrhea or vomiting lasts beyond 24 hours. Your physician may need to adjust your medication dosage when your health and routine are in flux.
3. Stick to Your Medication Routine
Taking your blood-thinning medicine according to your doctor's directions is vital. Some medications require administration at exactly the same time every day. Avoid skipping or repeating any dose by mistake. If you ever miss one, take it as soon as you remember. If you discover your lapse the following day, ask your physician for instructions; when he isn't available, resume your regular schedule with your next dose. Keep a list of every missed dose. Using a divided pillbox with daily sections and smartphone alarms can help increase your medication compliance.
4. Test Your Blood Regularly
Your doctor will use a special blood test to measure how quickly your blood clots and determine your appropriate medication dosage. Repeating it periodically will allow him to make any necessary adjustments. Too much medication can cause excess bleeding, while not enough increases your blood clot risk, so taking the correct amount is critical.
Ethics is a major branch of philosophy that encompasses proper conduct and good living. It is significantly broader than the common conception of ethics as the analysis of right and wrong. A central aspect of ethics is "the good life", the life worth living or the life that is simply satisfying, which is held by many philosophers to be more important than moral conduct. Morality (from the Latin moralitas, "manner, character, proper behavior") has three principal meanings. In its first, descriptive usage, morality means a code of conduct which is held to be authoritative in matters of right and wrong. Morals are created by, and define, society, philosophy, religion, or individual conscience. An example of the descriptive usage could be "common conceptions of morality have changed significantly over time." In its second, normative and universal sense, morality refers to an ideal code of conduct, one which would be espoused in preference to alternatives by all rational people under specified conditions. In this "prescriptive" sense, as opposed to the descriptive sense described above, moral value judgments such as "murder is immoral" are made. To deny 'morality' in this sense is a position known as moral skepticism, in which the existence of objective moral "truths" is rejected. In its third usage, 'morality' is synonymous with ethics, the systematic philosophical study of the moral domain. Ethics seeks to address questions such as how a moral outcome can be achieved in a specific situation (applied ethics), how moral values should be determined (normative ethics), what morals people actually abide by (descriptive ethics), what the fundamental nature of ethics or morality is, including whether it has any objective justification (meta-ethics), and how moral capacity or moral agency develops and what its nature is (moral psychology). In applied ethics, for example, the prohibition against taking human life is controversial with respect to capital punishment, abortion and wars of invasion. In normative ethics, a typical question might be whether a lie told for the sake of protecting someone from harm is justified. In meta-ethics, a key issue is the meaning of the terms "right" or "wrong". Moral realism would hold that there are true moral statements which report objective moral facts, whereas moral anti-realism would hold that morality is derived from the norms prevalent in a society (cultural relativism) or from the edicts of a god (divine command theory); that it is merely an expression of the speakers' sentiments (emotivism) or an implied imperative (prescriptivism); or that it falsely presupposes that there are objective moral facts (error theory). Some thinkers hold that there is no correct definition of right behavior, and that morality can only be judged with respect to particular situations, within the standards of particular belief systems and socio-historical contexts. This position, known as moral relativism, often cites empirical findings from anthropology to support its claims. The opposite view, that there are universal, eternal moral truths, is known as moral absolutism. Moral absolutists might concede that forces of social conformity significantly shape moral decisions, but deny that cultural norms and customs define morally right behavior.
Geologic Record: Rocks Record Changes
Rocks tell a story of environmental change. Institute scientists study the chemical composition and structure of rocks to understand the conditions in which they formed. Many rocks contain fossils of organisms that were alive when the rock formed. The structure, position, and location of rock can reveal clues about glaciers, floods, seismic events, and continental movements. Some rocks can even be dated.
There are many ways and methods of analyzing a problem and its perspective. The first step is to identify the problem and its nature. Once the problem is identified, it is then analyzed further through various means, namely examining the nature of the problem and how it arises. This step is important as it allows further insight into the problem. Usually in scientific work, the problem is identified before the next steps are carried out. Scientists carry out experiments based on a problem they encounter or on a proposed theory or hypothesis; without such a catalyst, the experiment can't be carried out. Experiments examine the problem that arises or test the hypothesis, and if the hypothesis can't be proven, a new theory comes up; this is how it works in the world of science. Next comes the perspective. This part is not distinct from the problem but actually forms an extension of it. Perspective can be defined as a point of view on the subject at hand. It can also be understood as a vantage point: any angle or view from that point is considered a perspective, whether right or wrong. With this meaning, we can now see the connection between a problem and its perspective. One has to come with the other; they are quite inseparable. The way we did all this was simple. We used whatever available tools we could access, namely the internet, journals, newspapers and magazines of any kind. The internet was a big help because it houses tons and tons of information of various kinds. All we needed to do was narrow down our search and look for the right sources, and the rest was just accumulating, assimilating and presenting the data, information and findings in an organized way. A flowchart of that process is shown below. Flowchart 1: The process, shown in no particular order. For the information part, our main source was the internet. The internet is somewhat like a virtual library storing terabytes and terabytes of information. In recent times, the internet has been an important tool in helping students gather the information they need, and it can be used to help with assignments as well. Most information these days is digitalized, so searching for it can be done on a computer or laptop with an internet connection, with the benefit of staying at home. This also saves a lot of time, stress and unnecessary trouble, such as going out to a public library to search for information. Journals can also be found on the internet, aiding students in their search. One of the setbacks is that some journals can't be accessed: some require membership, payment or enrolment at a particular university. Therefore, our information is mostly based on the websites, journals and books that were available. Picture 10: Shows how the world is connected to the internet, depicting how much of today is digitalized. http://endthelie.com/wp-content/uploads/2012/03/ISP-surveillance.jpg
Methods of gathering data
In this part, we had to thoroughly deliberate on the methods we were going to employ in gathering the data. We set up a meeting to discuss this matter after college hours, and after careful consideration we decided to use survey forms to gather the data. We chose this method because a questionnaire-type survey is easy to hand out and doesn't take much time to complete.
Another reason we chose this method is that in a face-to-face interview most people feel nervous, including us, and that might affect our performance and the way the information is delivered. When nervous, a person might forget to say something or say something totally irrelevant to the topic or question asked. In view of all this, the survey approach to gathering data was chosen with no objections and with everyone's agreement.
Personal reactions and biases
In carrying out the survey, we chose an age group of between 15 and 30. We chose this age group because people in it usually consume the most fast food, such as burgers, compared to other age groups. We found that people aged 30 and above hardly consume burgers or other fast food, mainly because they prefer other sources of food and because they grew up in an era when fast food was practically new and foreign to them. For today's generation, fast food is familiar and readily accepted, making up a higher percentage of their diets than in older generations. If we had surveyed older generations, the data collected would have been insignificant, because these generations hardly consume any fast food, and therefore the data obtained couldn't be analyzed, compared, and discussed. As well, to make it fair, we divided the 50 people surveyed into 25 females and 25 males. This is fair to everyone and to our assignment because there is no gender favouritism or discrimination; the decision was agreed on by all three members of the group. Surveying 25 females and 25 males supplies balance to our data rather than one-sided information that would put down a certain gender. Furthermore, the participants took part willingly and happily after we explained that this was for assignment purposes and that the information given in the survey forms, such as their names and numbers, would not be exploited for selfish or other reasons. The participants felt more relaxed and assured on hearing the explanation; their privacy and comfort were respected. We thanked the participants warmly and with gratitude once they had finished, a gesture that left both the participants and us feeling good. A small portion of the survey was carried out over the internet: survey forms were sent out and returned with answers from the participants. This method also allows for uninhibited answering, as there is no pressure of being watched while completing the survey.
Data obtained from other researchers
1. Dioxin and dioxin-like compounds
Dioxins are a group of chemically related compounds that are persistent environmental pollutants. Dioxins are found throughout the world in the environment, and they accumulate in the food chain, mainly in the fatty tissue of animals. More than 90% of human exposure is through food, mainly meat and dairy products, fish and shellfish. Dioxins are highly toxic and can cause reproductive and developmental problems, damage the immune system, interfere with hormones and also cause cancer. Because burger patties have a high fat content, and because fat retains dioxin and dioxin-like compounds, this poses a threat to consumers worldwide. Not only is burger consumption the norm in fast-developing countries, but burgers are also a frequently craved food all over the world.
Studies have shown that consumption of burgers has more than tripled in the last decade, marking a significant health threat to all. The United States Environmental Protection Agency (EPA) has done a lot of research into these chemical compounds, and it took more than 27 years to complete! On why it took so long for the EPA to publish its findings, the EPA said, "For more than 25 years, different segments of the regulated industry (pulp and paper, chemical, food and agriculture) have pushed back and generated tremendous pressure on EPA not to release this assessment. The science of the reassessment has been very consistent and only strengthened since a draft was released in 1994. The agency has used the latest scientific methods and followed its published guidelines to determine the risks from exposure to dioxin. These guidelines and methods were peer-reviewed and open to public comment prior to being finalized (they are also regularly updated). But none of this matters to the industry that has been well organized and financed. They will never be satisfied with the science of EPA because they do not like the bottom line since it affects their operational costs and profits, not because there is something wrong with EPA's science". Now we can clearly see that if this report had been published a long time ago, it would have affected the fast-food industry and its profit-making methods. The same can be said of the pharmaceutical industry, whose aim is selling drugs and not selling the cure; inadvertently, it creates long-term customers rather than healthy individuals. Nearly all of us are exposed to dioxin by eating meat and dairy products. According to the EPA, over 90% of human exposure to dioxin occurs through our diet. Dioxin is most prevalent in meat, fish, dairy, and other fatty foods. Our exposure begins as crops are contaminated by airborne dioxins settling onto plants, which cows and other animals eat. The exposures are compounded when animals are given fat-laden feed contaminated with dioxin. At each step, dioxin accumulates in the fat portion of the animal. We then ingest dioxin by eating meat and dairy. This is startling information revealed by the EPA, because the majority of the world's population consumes meat and dairy products, partly because of their availability in the market. In the long run, this scenario is creating a growing number of diseased individuals. The research done by the EPA shows that dioxin can cause a wide range of non-cancer effects including reproductive, developmental, immunological, and endocrine effects in both animals and humans. Animal studies show that dioxin exposure is associated with endometriosis, decreased fertility, the inability to carry pregnancies to term, lowered testosterone levels, decreased sperm counts, birth defects, and learning disabilities. In children, dioxin exposure has been associated with IQ deficits, delays in psychomotor development and neurodevelopment, and altered behavior including hyperactivity. Studies in workers have found lowered testosterone levels and decreased testis size, and birth defects have been found in the offspring of Vietnam veterans exposed to Agent Orange. Dioxin is also a human carcinogen. Suffice it to say, not only is consuming burger patties with a high fat content dangerous; consuming other fatty meat products and dairy products is also dangerous.
Picture 11: Logo of the EPA.
2. Acrylamide in burger buns/bread
This compound has been found in burger buns or breads that have been cooked at a high temperature. Nowadays it is common to find burger buns cooked at high temperatures on hot pans to increase the crispiness and flavor of the buns. This is usually done with or without adding butter or margarine to the hot pan. The bun is cooked until a light golden or light brown color is obtained, and removed before a darker color develops, which signifies that the bun is over-cooked and is generally undesired by customers. Unknowingly, many consumers have been ingesting a chemical known as acrylamide from this process, unaware of its consequences. Acrylamide is usually formed by using high heat to cook starchy foods such as potato chips, french fries and cereal products, which have among the highest levels of acrylamide; burger buns are no exception. Acrylamide is known to cause cancer in animals, and certain doses of acrylamide are toxic to the nervous system of both animals and humans. In April 2002 the Swedish National Food Authority reported the presence of elevated levels of acrylamide in certain types of food processed at high temperatures. Since then, acrylamide has been found in a range of cooked and heat-processed foods in other countries, including The Netherlands, Norway, Switzerland, the United Kingdom and the United States. Research by the World Health Organization found that the levels of acrylamide in some foods are much higher than the levels recommended for drinking-water, or the levels expected to occur as a result of contact between food and food packaging (from paper) or the use of cosmetics. This indicates that drinking water is much safer in terms of acrylamide intake; however, untreated water contains other chemicals that are also detrimental to our health. Scientists are still uncertain exactly how acrylamide is formed in food, but some suggest that asparagine plays a part in its formation. Asparagine is an amino acid that is found in many vegetables, with higher concentrations in some varieties of potatoes. When heated to high temperatures in the presence of certain sugars, asparagine can form acrylamide. High-temperature cooking methods, such as frying, baking, or broiling, have been found to produce acrylamide. This causes neurological damage in the long run.
Picture 12: 2-D skeletal structure of acrylamide.
3. Polycyclic aromatic hydrocarbons (PAHs)
PAHs are naturally found in biofuel and coal products, but they are also prevalent in meat cooked at high heat, such as the grilling and barbecuing used in making burgers. This poses a threat to human health, as PAHs are known carcinogens. The main carcinogen is benzo[a]pyrene, though other polycyclic aromatic hydrocarbons (PAHs) and heterocyclic amines (HCAs) are present and can cause cancer too. PAHs are found in smoke from incomplete combustion. A key factor in PAH toxicity is the formation of reactive metabolites. Not all PAHs are of the same toxicity because of differences in structure that affect metabolism. Another factor to consider is the biologically effective dose, or the amount of toxics that actually reaches the cells or target sites where interaction and adverse effects can occur.
CYP1A1, the primary cytochrome P-450 isoenzyme that biologically activates benzo(a)pyrene, may be induced by other substances (Kemena et al. 1988; Robinson et al. 1975). The mechanism of PAH-induced carcinogenesis is believed to be via the binding of PAH metabolites to deoxyribonucleic acid (DNA). Some parent PAHs are weak carcinogens that require metabolism to become more potent carcinogens. Diol epoxides (PAH intermediate metabolites) are mutagenic and affect normal cell replication when they react with DNA to form adducts. A theory to explain the variability in the potency of different diol epoxides, the "bay theory," predicts that an epoxide will be highly reactive and mutagenic if it is in the "bay" region of the PAH molecule (Jerina et al. 1976 and 1980; Weis 1998). The bay region is the space between the aromatic rings of the PAH molecule. PAH-induced carcinogenesis can result when a PAH-DNA adduct forms at a site critical to the regulation of cell differentiation or growth. A mutation occurs during cell replication if the aberration remains unrepaired. Cells affected most significantly by acute PAH exposure appear to be those with rapid replicative turnover, such as those in bone marrow, skin, and lung tissue. Tissues with slower turnover rates, such as liver tissue, are less susceptible. Benzo(a)pyrene diol epoxide adducts bind covalently to several guanine positions of the bronchial epithelial cell DNA p53 gene, where cancer mutations are known to occur. Processing of food (such as drying and smoking) and cooking of foods at high temperatures (grilling, roasting, frying) are major sources generating PAH (Guillen et al., 1997; Phillips, 1999). Levels as high as 200 µg/kg of food have been found for individual PAH in smoked fish and meat. In barbecued meat, 130 µg/kg has been reported, whereas the average background values are usually in the range of 0.01-1 µg/kg in uncooked foods. Besides cooking, foods can also be affected by outside factors, namely PAH present in air (by deposition), soil (by transfer) or water (deposition and transfer). Examples include:
- Stubble burning (Ramdahl and Moller, 1983) and spreading of contaminated sewage sludge on agricultural fields (Hembrock-Heger and Konig, 1990; cited by IPCS, 1998).
- Exhausts from mobile sources (motor vehicles and aircraft). Close to an emission source such as a motorway, very high concentrations of PAH were detected in the surface layer, but soil at a depth of 4-8 cm was two times less contaminated (Butler et al., 1984; cited by IPCS, 1998). Close to highways, concentrations of PAH in the soil in the range of 2-5 mg/kg can be found, whereas in unpolluted areas the levels are in the range of 5-100 µg/kg. The distribution and concentration of PAH in soil, leaf litter, and soil fauna depend broadly on the distance from the roadside.
- Industrial plants (e.g. aluminum foundries, incinerators).
- Domestic heating with open fireplaces. Levels of PAH in the atmosphere appear to be higher in the winter than in the summer period.
- Burning of automobile tires or of creosote-treated wood, which releases considerable amounts of PAH.
- Forest fires and volcanic eruptions (Hites et al., 1980; cited in IPCS, 1998).
As we can see from the examples given above, there are numerous ways PAH can enter the food chain, not only through cooking meat at high heat.
Without the consumer even realizing it, he or she is consuming levels of PAH far greater than anticipated, and the onset of cancer through the carcinogenic effects of PAH is brought on even faster. PAH are lipophilic and generally have very poor aqueous solubility. PAH accumulate in the lipid tissue of plants and animals. This spells bad news for us, as it makes PAH harder to excrete from the human body: they accumulate in the fatty regions. Unless we have a way of removing that fat, we can be sure PAH will accumulate in the body and exert their harmful effects. PAH formation during charcoal grilling was shown to be dependent upon the fat content of the meat, the time of cooking and the temperature. For example, a heavily barbecued lamb sausage contained 14 µg/kg of carcinogenic PAH (Mottier et al., 2000).
4. Oxidized fats/saturated fats
Most of us have heard of saturated fats and of oxidized fats, often discussed alongside trans fats. We are readily exposed to these terms as they circulate widely in the market today; we see products which claim to be fat-free or to contain no trans fat at all. Most fats from animals are saturated fats, while unsaturated fats usually come from plant or oil sources. In addition, when saturated fats are cooked at a high temperature they become oxidized fats, because oxygen molecules attack their molecular structure and change it entirely. A high intake of these fats has been shown to increase the risk of cardiovascular disease, such as heart attacks. Trans fat is usually formed during the processing of the burger patty. The reason trans fats are used in the food industry is that they are more stable than other fats, which are susceptible to damage from high heat and UV light, and to rancidity. Trans fatty acids are formed through the process of hydrogenation, during which the bonds in a fatty acid chain become bonded in a trans configuration, with the hydrogen atoms bonding on opposite sides of the chain. Because the hydrogen atoms are on opposite sides of the chain, the fats cannot bend, meaning they are hard at room temperature. These trans fatty acids are more stable, and less likely to become damaged, at room temperature than unsaturated fats, and for this reason food manufacturers prefer to use them in foods. Trans fatty acids are found in many foods, including margarines, pizzas, cakes, biscuits, breakfast cereals and a variety of other processed foods. Trans fatty acids have, however, been found to be harmful to health in many respects. Studies by Mozaffarian et al. (2006) and Clarke and Lewington (2006) have linked trans fatty acids to coronary heart disease. As shown by Mensink et al. (2003), trans fatty acids are related to changes in the ratio of total:HDL cholesterol, increasing the risk of cardiovascular problems and coronary heart disease. Brouwer et al. (2010) support this finding, showing that trans fatty acids raise the ratio of LDL cholesterol to HDL cholesterol, increasing the risk of cardiovascular problems. Other studies, such as those by Lopez-Garcia et al. (2005) and Mozaffarian (2004), have linked trans fatty acids to inflammation and to adverse effects on endothelial function. Morris et al.
(2003) has linked trans fatty acids with the onset of Alzheimer's disease, showing that whilst a high intake of unsaturated, unhydrogenated fats is thought to be protective against Alzheimer's disease, an intake of saturated fats or trans fatty acids may increase the risk of developing it. The negative effects of trans fatty acids on health are thus many and varied.
Picture 13: Blockage in the right coronary artery. http://graphics8.nytimes.com/images/2007/08/01/health/adam/9377.jpg
The intake of all these fats has also been linked to high cholesterol, bringing many heart complications with it. There are no obvious general symptoms, but high cholesterol can be detected through regular blood tests. High cholesterol can increase the chance of developing atherosclerosis and stroke. There are a few symptoms of atherosclerosis, such as angina, which is a pain in the chest due to the narrowing of the arteries (Robson, D. 2005). Others can experience pain in the legs during exercise such as walking and running (Robson, D. 2005). Another sign, in some cases, is xanthoma, a yellow patch made of cholesterol that normally forms on the skin around the eyelids and can also occur on other parts of the skin (Robson, D. 2005). Plaques can rupture, allowing platelets to form a blood clot. This can occur in one of the arteries that supply blood to the heart and can lead to a heart attack; this condition is known as thrombosis. A stroke is a sudden event that gives the person no warning and cuts the blood supply to the brain. Mini-strokes can occur due to blood clots and ruptured blood vessels.
Impact and significance of data obtained
This section reviews the survey questions and discusses the answers given by the participants, who consisted of 25 males and 25 females aged 15-30.
Question 1: Do you like burgers?
Table 1: The answers to Question 1.
As shown in the table above, the majority, 45 participants out of 50, like burgers, and only 5 of them do not. This indicates that burgers have penetrated the market widely, providing customers with considerable satisfaction with the product.
Question 2: Do you like the taste of burgers?
Table 2: The answers to Question 2.
In Question 2, participants were asked whether they liked the taste of burgers: 47 of them answered yes while the remaining 3 answered no. This data is important as it shows that burger sellers and fast-food restaurants have marketed a product whose taste is accepted by the majority of people. This sensory factor is important as it provides a continual stream of customers coming back for more. As we know, sensory factors play an important role in determining one's choice of food: if you like the taste of a food, consumption of and preference for that food increase significantly compared with undesirable food products.
Question 3: How often do you consume burgers? (Options: once a week, once a month, other.)
Table 3: The answers to Question 3.
In this question, the data reveals that most participants consume burgers either once a week (38%) or once a month (44%). The other 18% comprises people who consume burgers once a year or according to their mood. Comparing males and females, the majority of females consume burgers once a month. This can be easily deduced, as girls are more concerned about their physical appearance, such as their weight and body figure. Most girls consider burgers a fatty food which, once consumed, is hard to burn off.
The media has portrayed women who do not fit the socially accepted body figure or measurements as fat, undesirable and even repulsive. This helps explain why women are so concerned with their physical appearance. By contrast, in the male category, most consume their burgers once a week. Men generally care less about their physical appearance than women do, so less emphasis is given to how often they consume burgers. However, this trend is beginning to change, as the number of men consuming burgers only once a month is rising, as shown by the data above: 32% of men consume burgers once a month while 52% consume them once a week, and the once-a-month share is growing. One explanation is that health awareness has taken hold in the population; we live in a world where information is easy to obtain and highly accessible. The population is more educated now and more health conscious than older generations. Nowadays the media also places emphasis on certain male body types deemed attractive and desirable, such as rock-hard six-packs or a lean, muscular body low in fat. Many men aim for these body types and eliminate foods high in fat. This gives a boost in confidence, as women find them more attractive; the same scenario can be applied to women as well. This topic also touches on psychology and how human or peer attention affects the emotional well-being of an individual. The human need for attention is huge, often feeding the ego, which also helps to shape the human personality to a certain extent. Attention also serves as a motivational factor for many people, often leading them to work harder and harder just to maintain that level of attention or even increase it.
Question 4: According to your knowledge, is eating a burger a balanced nutritional option?
Table 4: The answers to Question 4.
In this question, participants were asked whether they think that eating a burger can provide the balance of nutrition required by the body: 78% of them answered no while the remaining 22% answered yes. This goes to show that most people do not believe a burger could provide all the necessary nutrients in a single-serving meal. This also implies that after eating a burger, most people would buy something else to eat to make up for the lack of nutrients, or just buy another burger. In fact, studies have suggested that the bad effects of eating a burger outweigh its benefits, albeit only slightly; eating a burger does more harm than good.
Question 5: Even if it offers balanced nutrition, do you think a burger is a healthy food option?
Table 5: The answers to Question 5.
For females, it is not surprising that 88% of them consider it unhealthy. This shows that most women are quite well informed about their food. As for the men, 48% of them consider it healthy food whereas 52% do not. This is a close call, but it nonetheless shows the mindset of the men. One group of men consider it a healthy option because it contains all three macronutrients, carbohydrates from the bun and fats and proteins from the meat patty, with extra nutrients coming from vegetables like cabbage and tomatoes. Another group of men consider it unhealthy because of the way it is cooked or how the meat patty is prepared.
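As a quick cross-check of the percentage splits quoted above, the short sketch below shows how such figures can be computed from raw answer tallies. Note that the tallies used here are assumptions back-calculated from the reported percentages (25 respondents per gender); they are not the original survey responses.

```python
# Sketch only: reproducing the Question 3 percentage splits from assumed raw
# tallies (back-calculated from the reported percentages, 25 per gender).

def percentages(counts):
    """Convert answer tallies into whole-number percentages of the total."""
    total = sum(counts.values())
    return {answer: round(100 * n / total) for answer, n in counts.items()}

male_q3 = {"once a week": 13, "once a month": 8, "other": 4}     # assumed counts
female_q3 = {"once a week": 6, "once a month": 14, "other": 5}   # assumed counts

print("Males:  ", percentages(male_q3))    # {'once a week': 52, 'once a month': 32, 'other': 16}
print("Females:", percentages(female_q3))  # {'once a week': 24, 'once a month': 56, 'other': 20}

# Combining both genders gives the overall split reported for all 50 participants.
combined = {k: male_q3[k] + female_q3[k] for k in male_q3}
print("All:    ", percentages(combined))   # {'once a week': 38, 'once a month': 44, 'other': 18}
```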
In this Information Age it is easy to obtain information on almost anything, and it has been revealed by the media that much patty production includes animal leftovers or unwanted animal products such as skin, bones, organs and even feces! This helps the producing company save costs, as fewer animals need to be farmed or raised and easily obtained leftovers provide a cheaper way of producing patties, saving the company a lot of money. It is a win-win situation for them: cut costs and rake in profits.
Question 6: Would you still consume one, even after knowing it contains ground and mixed leftovers like animal skin, bones, organs, added plastic and the like?
Table 6: The answers to Question 6.
For women, 72% answered no, the obvious choice, while the remaining answered yes. It is easy to see why: after finding out the ingredients of the meat patty, most of them felt disgusted and repulsed. As for the men, 56% answered no while the other 44% answered yes. This is startling, as the 44% who answered yes is almost half. This can be explained by men being more open to, and generally more accepting of, the ingredients of the patty compared with women, or they simply do not care and just love their burgers, no matter what the ingredients or health warnings.
Question 7: Do you think the habit of eating burgers has become a prevalent or prominent culture throughout the world's population?
Table 7: The answers to Question 7.
The majority answered yes when asked whether the habit of eating burgers has become a prominent culture in today's world. This shows that they are aware of today's trends, as fast-food industries are growing and can be seen practically everywhere in today's market. It is important to note the trends of today's culture, as doing so also serves as a doorway to rectifying the problem and keeping it to a minimum. How can one change something without first knowing that the problem exists? As such, it is important for people to observe and know today's eating trends and cultures and to analyse whether they are good or bad. Question 8 asked participants to write out their thoughts after taking the survey. For this question, the majority of them, above 90%, stated that eating burgers is unhealthy and that they would consider reducing their intake. This awareness may have developed while going through the survey questions, particularly Question 6, which lists the patty ingredients, or the participants may already have been exposed to such information before taking the survey. Most of them stated that although eating burgers is unavoidable in their lives, they try to reduce their intake. Most of them still enjoy a good burger even while knowing its bad effects. This is an interesting human behavior: even after learning the bad effects of something, one does not totally avoid it. It is something of a paradox, and definitely interesting.
In summary of findings
In summary, the EPA and WHO have documented the harmful effects of chemicals such as dioxin and acrylamide, which are found in burgers, and their implications for human health. This is an alarming warning to all of us as consumers, who were previously oblivious to these chemicals. Thanks to the EPA's and WHO's efforts and research, a better understanding of these chemicals has been delivered, and consumers are now more aware of and educated on the subject. The formation of reactive metabolites and the biologically effective dose are key to PAH toxicity.
Diol epoxides (PAH intermediate metabolites) are mutagenic and affect normal cell replication when they react with DNA to form adducts. The location of epoxides in the bay region of a PAH predicts reactivity and mutagenicity. DNA adducts, as markers of exposure used in research, can be measured in various biologic media. The ability of CYP1A1 to biologically activate PAHs may be heritable and thus point to genetically susceptible populations at risk of PAH carcinogenesis. The most significant endpoint of PAH toxicity is cancer. Animal studies show that certain PAHs affect the hematopoietic, immune, reproductive, and neurologic systems and cause developmental effects. Continued research regarding the mutagenic and carcinogenic effects of chronic exposure to PAHs and their metabolites is needed. The following summarizes the carcinogenic classifications of selected PAHs by specific agencies:
- U.S. Department of Health and Human Services (HHS): known animal carcinogens.
- International Agency for Research on Cancer (IARC): probably carcinogenic to humans; possibly carcinogenic to humans; not classifiable as to their carcinogenicity to humans.
- U.S. Environmental Protection Agency (EPA): probable human carcinogens; not classifiable as to human carcinogenicity (e.g. phenanthrene and pyrene).
Table 9: Carcinogenic classifications of selected PAHs by specific agencies.
Concerning trans fat in the United Kingdom, in response to health concerns over trans fatty acids, as reported by the BBC (2007), major UK retailers - including Tesco, ASDA, Boots, Sainsbury's, the Co-op and Marks & Spencer - decided to stop using trans fatty acids in their own-brand products in January 2007. The BBC (2007) reported that this would affect around 5000 products sold by these supermarkets and that it represented a "scale and pace of change way beyond anything retailers or manufacturers are doing anywhere else in Europe". As the BBC (2007) reports, many feel, however, that despite this move foods should be clearly labeled so that people can make their own choices about what they eat. As Tickell (2006) suggests, given the many demonstrated links between trans fatty acids and health problems, the Government should be doing more than simply labeling foods as containing trans fatty acids, particularly as many people do not read food labels and because many foods that people eat (such as unpackaged foods in restaurants) contain trans fatty acids yet are not subject to any labeling requirements. In addition, the use of synonyms for trans fatty acids on food labels is confusing for consumers, meaning that consumers could unwittingly be buying foods containing trans fatty acids even if they are trying to avoid consuming them. Trans fatty acids, in other words trans fats, are made by a hydrogenation process: liquid vegetable oils are heated in the presence of hydrogen so that the liquid becomes more saturated and turns solid, making it more stable. This makes transportation easier, hydrogenated oils keep longer, and they are ideal for the frying processes used in making foods. Trans fats are even worse than saturated fats because they not only increase LDL but also lower the level of HDL in the blood. The consumption of these saturated fats and, most importantly, trans fatty acids causes problems in the cardiovascular system, contributing to heart disease, stroke, diabetes, obesity and other related chronic conditions.
What is worse is that in some places, mostly in developing countries, people use partially hydrogenated oils because they are cheaper. These oils are quite different from ordinary cooking oils; they are trans-rich partially hydrogenated oils, and they are very harmful and a main cause of heart disease. Saturated fats boost cholesterol levels by increasing the harmful LDL, though they also increase the protective HDL. (Unsaturated fats are much preferable in this respect, because they lower the harmful LDL and increase the protective, good HDL.) LDL stands for low-density lipoproteins. They are responsible for carrying cholesterol from the liver to the rest of the body. As LDL particles pass through the body, cells may attach to them and extract fat and cholesterol. That is why they are referred to as bad and harmful lipoproteins, detrimental to the human system overall.
Implications of findings and future suggestions
Because dioxin compounds are found in the fatty portions of the patty, trimming fat from meat may decrease exposure to dioxin compounds. Also, a balanced diet (including adequate amounts of fruits, vegetables and cereals) will help to avoid excessive exposure from a single source. This is a long-term strategy to reduce body burdens and is probably most relevant for girls and young women, to reduce exposure of the developing fetus and of breastfed infants later in life. As aware consumers we have the power to reduce our intake of dioxin, and we should exercise it. As a side note, here are six steps to avoid dioxin in your food:
- Eat less animal fat - buy lean meats and poultry - and cut off the fat before cooking.
- Eat fat-free dairy products - or as low-fat as you can - such as milk, cheese, and yogurt.
- Fish is a healthy food choice, but fish are also affected, so avoid fatty fish (such as salmon) and cut the fat off before cooking and eating.
- Purchase food products from animals that have been grain or grass fed. Farm animals fed feed that includes other animals' fat ingest more dioxin, which increases the amount of dioxin in the consumer meat product.
- Eat more fruits and vegetables.
- Breastfeed your babies - breast milk is still the healthiest food for your baby.
More generally, the FDA does not recommend that you avoid particular foods because of dioxins. The EPA's 2003 draft dioxin reassessment indicates that following the science-based advice in the Dietary Guidelines for Americans will also likely help individuals lower their risk of exposure to dioxins. These guidelines include the recommendations to choose a variety of meat and dairy products that are lean, low fat, or fat free and to increase consumption of fruits, vegetables, and whole grain products. Meat, milk, and fish are important sources of nutrients for the American public and an appropriate part of a balanced diet; each of these foods provides high-quality protein. Lean meat includes meats that are naturally lower in fat and meat where visible fat has been trimmed. For fish and poultry you can reduce fat by removing the skin. Reducing the amount of butter or lard used in the preparation of foods, and using cooking methods that reduce fat (such as oven broiling), may also lower the risk of exposure to dioxin. As for acrylamide, it is suggested that burger buns which are not cooked on a hot pan be preferred over pan-cooked buns.
This reduces the intake of acrylamide from burgers in the diet. Likewise, consumption of high-heat-cooked starchy foods like French fries, potato chips and cereals should be reduced, as these foods contain high levels of acrylamide. Acrylamide ingestion comes mostly from food and cigarettes; water is an unlikely source. This is because polyacrylamide is used as one of a variety of water-treatment agents, binding with solid material and making it easier to filter unwanted substances out of the water. According to the WHO Guidelines for Drinking-water Quality, the guideline value (the concentration representing the tolerable risk to the health of the consumer over a lifetime of consumption) is 0.5 micrograms per litre of drinking-water. Concentrations in drinking-water can be controlled by product and dose specification. The European Union's legal limit for drinking-water is 0.1 micrograms per litre of water. Some countries, like the United States and Japan, have regulations on treatment techniques rather than a water quality standard value for acrylamide. As a suggestion for future research: although studies in rodent models suggest that acrylamide is a potential carcinogen, additional epidemiological cohort studies are needed to help determine any effects of dietary acrylamide intake on human cancer risk. It is also important to determine how acrylamide is formed during the cooking process and whether acrylamide is present in foods other than those already tested. This information will enable more accurate and comprehensive estimates of dietary exposure. Biospecimen collections in cohort studies will provide an opportunity to avoid the limitations of interview-based dietary assessments by examining biomarkers of exposure to acrylamide and its metabolites in relation to the subsequent risk of cancer. In the case of PAH from barbecuing, the presence of PAH was studied in several samples of meat and fish that were grilled on two geometrically different gas barbecues. In contrast to a horizontal barbecue, the vertical barbecue prevented fat from dripping onto the heat source, and the PAH levels were very low, 10-30 times lower than with the horizontal system (Saint-Aubert et al., 1992). This information could serve as a method to reduce PAH levels from barbecuing. The type of wood used can also determine the levels of PAH: regarding the generation of liquid smoke flavorings, it has been shown that poplar wood generated the highest number and concentration of both total and carcinogenic PAH, while oak, cherry and beech samples generated similarly lower levels. Hardwoods instead of softwoods have also been recommended; indeed, dry woods generate more PAH because of their higher smoke generation temperature (Guillen et al., 2000). All this information shows that certain conditions play a very big role in determining PAH levels. Simple practices are known to result in significantly reduced contamination of foods by PAH (Lijinsky and Ross, 1967; Lijinsky, 1991; Knize et al., 1999), as well as by other undesirable contaminants. These include preferentially selecting lean meat and fish, avoiding contact of foods with flames when barbecuing, using less fat for grilling, and, in general, cooking at a lower temperature for a longer time. Broiling (heat source above) instead of grilling can significantly reduce the levels of PAH. The fat should not be allowed to drip down onto an open flame, sending up a column of smoke that coats the food with PAH.
The use of medium to low heat, and placement of the meat further from the heat source, can greatly reduce the formation of PAH. The intensity of flavor is not necessarily associated with the depth of the brown color of grilled foods, so it is needless to overcook the food to get the flavor. However, cooking must always remain effective at inactivating any contaminating bacteria or endogenous toxins.
Background Information on Self-Determination
NLTS2 Fact Sheet – research findings on self-determination for youth with disabilities.
NCSET Research to Practice Brief – about self-determination and tips for promoting it.
Whose Life Is It Anyway? – different perspectives on self-determination for a transitioning youth.
Opening Doors to Self-Determination – a guide for teachers, students, and families.
Practice Information and Tips
DCDT Fact Sheet – a document about goal-setting for youth with disabilities, along with a list of additional resources.
Foundations – a toolkit about fostering self-determination for educators.
Lesson Plan Examples – on various topics that can be used or adapted when working on self-determination.
Fostering Self-Determination – a set of activities and lesson plans to build self-determination skills.
Self-Determination Ideas for Paraprofessionals – a set of specific ways to promote self-determination compiled from a survey of paraprofessionals from across WI, along with a list of resources.
Resources Geared Toward Students
The Speak-Up Guide – a resource book students can use on its own or with additional accompanying materials.
The 3R's of Self-Determination – a student practice guide about rights, responsibilities, and resources for increasing self-determination.
Leadership Tips for Youth – a list of ideas for youth interested in gaining self-determination skills through leadership training.
Resources Geared Toward Parents
Fostering Self-Determination – a parent-to-parent guide for providing opportunities for children and youth to build self-determination skills, along with a list of additional resources for parents and teams.
Multiplication is often described as repeated addition. For example, the product 3 × 4 is equal to the sum of three 4s: 4 + 4 + 4. In talking about multiplication, several terms are used. In the expression 3 × 4, the entire expression, whether it is written as 3 × 4 or as 12, is called the product. In other words, the answer to a multiplication problem is the product. In the original expression, the numbers 3 and 4 are each called multipliers, factors, or terms. At one time, the words multiplicand and multiplier were used to indicate which number got multiplied (the multiplicand) and which number did the multiplying (the multiplier). That terminology has now fallen into disuse. Now the term multiplier applies to either number. Multiplication is symbolized in three ways: with an ×, as in 3 × 4; with a centered dot, as in 3 · 4; and by writing the numbers next to each other, as in 3(4), (3)(4), 5x, or (x + y)(x − y).
Common fractions. The numerator of the product is the product of the numerators; the denominator of the product is the product of the denominators. For example, 2/3 × 4/5 = (2 × 4)/(3 × 5) = 8/15.
Decimals. Multiply the decimal fractions as if they were natural numbers. Place the decimal point in the product so that the number of places in the product is the sum of the number of places in the multipliers. For example, 3.07 × 5.2 = 15.964.
Signed numbers. Multiply the numbers as if they had no signs. If the two factors both have the same sign, give the product a positive sign or omit the sign entirely. If the two factors have different signs, give the product a negative sign. For example, (3x)(−2y) = −6xy; (−5)(−4) = +20.
Factor: A number used as a multiplier in a product. Multiplier: One of two or more numbers combined by multiplication to form a product. Product: The result of multiplying two or more numbers.
Powers of the same base. To multiply two powers of the same base, add the exponents. For example, 10^2 × 10^3 = 10^5 and x^5 × x^(−2) = x^3.
Monomials. To multiply two monomials, find the product of the numerical and literal parts of the factors separately. For example, (3x^2y)(5xyz) = 15x^3y^2z.
Polynomials. To multiply two polynomials, multiply each term of one by each term of the other, combining like terms. For example, (x + y)(x − y) = x^2 − xy + xy − y^2 = x^2 − y^2.
Multiplication is used in almost every aspect of our daily lives. Suppose you want to buy three cartons of eggs, each containing a dozen eggs, at 79 cents per carton. You can find the total number of eggs purchased (3 cartons times 12 eggs per carton = 36 eggs) and the cost of the purchase (3 cartons at 79 cents per carton = $2.37). Specialized professions use multiplication in an endless variety of ways. For example, calculating the speed with which the Space Shuttle will lift off its launch pad involves untold numbers of multiplication calculations.
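To tie these rules together, here is a minimal sketch in Python that checks the worked examples above using only the standard library; the fraction 2/3 × 4/5 is simply an illustrative choice, since the article's original fraction example did not survive.

```python
# Quick checks of the multiplication rules above, using Python's standard
# library so the arithmetic stays exact.
from fractions import Fraction
from decimal import Decimal

# Common fractions: multiply the numerators and the denominators separately.
print(Fraction(2, 3) * Fraction(4, 5))      # 8/15

# Decimals: the product has as many decimal places as the factors combined.
print(Decimal("3.07") * Decimal("5.2"))     # 15.964  (2 places + 1 place = 3 places)

# Signed numbers: like signs give a positive product, unlike signs a negative one.
print(3 * -2, (-5) * (-4))                  # -6 20

# Powers of the same base: the exponents add.
print(10**2 * 10**3 == 10**5)               # True
```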
From prehistoric times, forests and fire have remained inseparable. The temperate world's forest ecosystems have been regenerated and rejuvenated with the active help of forest fires. Forest fires have nevertheless become a major cause of concern because they threaten human habitats and deprive humans of access to forest resources. The full benefits of forest resources can be obtained only if timber (wood) is protected from fire, diseases and insect pests. Forest fires can be classified into three categories:
- Natural or controlled forest fires, for example those started by lightning striking dry trees.
- Forest fires caused by heat generated in the litter and other biomass in the summer and dry seasons.
- Fires caused by human negligence, for example by carelessly dropping lighted matchsticks or cigarette stubs.
Effects of Forest Fire
Fires are a major cause of forest degradation and have wide-ranging adverse ecological, economic and social impacts:
- Loss of valuable timber resources and biodiversity, and extinction of plants and animals
- Loss of natural vegetation and reduction in forest cover
- Degradation of catchment areas
- Other environmental impacts such as global warming and changes in the microclimate of the area, creating unhealthy living conditions
- Soil erosion affecting the productivity of soils, and depletion of the ozone layer
Forest fires are also responsible for loss of livelihood for tribal people and other rural poor.
Preventive Measures and Management
Damage caused by a forest fire can be controlled by the following means:
- Have dry litter (such as dying twigs and leaves) removed during the summer season.
- Call the fire brigade, and try to put out the fire by spraying water or digging around the fire zone.
- Move farm animals and movable goods to a safe place.
- Do not throw smoldering cigarettes or leave burning wood sticks around.
- Do not enter a forest if it is on fire.
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills. This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
Infective endocarditis occurs when bacteria enter the bloodstream and travel to and attach to previously injured heart valves. Acute bacterial endocarditis usually begins suddenly with a high fever, fast heart rate, fatigue, and rapid and extensive heart valve damage. Subacute bacterial endocarditis gradually causes such symptoms as fatigue, mild fever, a moderately fast heart rate, weight loss, sweating, and a low red blood cell count. Echocardiography is used to detect the damaged heart valves, and blood cultures are used to identify the microorganism causing infective endocarditis. People with artificial heart valves or certain birth defects of the heart need to take antibiotics to prevent endocarditis before they undergo certain dental or surgical procedures. High doses of antibiotics are given intravenously, but sometimes surgery is needed to repair or replace damaged heart valves. Infective endocarditis affects twice as many men as women at all ages. It has become more common among older people. More than one fourth of all cases occur in people older than 60. Infective endocarditis refers specifically to infection of the lining of the heart, but the infection usually also affects the heart valves, and any areas with abnormal connections between the chambers of the heart or its blood vessels (birth defects of the heart). There are two forms of infective endocarditis: Acute infective endocarditis develops suddenly and may become life threatening within days. Subacute infective endocarditis (also called subacute bacterial endocarditis) develops gradually and subtly over a period of weeks to several months but also can be life threatening. Prosthetic valve endocarditis is acute infective endocarditis in a heart valve that has been replaced (prosthetic valve). Endocarditis can also be noninfective. In noninfective endocarditis, blood clots that do not contain microorganisms form on heart valves and adjacent endocardium. Noninfective endocarditis sometimes leads to infective endocarditis because microorganisms can attach to and grow within the fibrous blood clots. In both infective and noninfective endocarditis, accumulations of blood clots (and bacteria in infectious endocarditis) can break free from the heart wall (becoming emboli), travel through the bloodstream, and block an artery. This may cause a stroke or damage to the spleen, kidneys, or other organs.
Causes of Infective Endocarditis
Bacteria (or, less often, fungi) that are introduced into the bloodstream can sometimes lodge on heart valves and infect the endocardium.
Abnormal, damaged, or replacement (prosthetic) valves are more susceptible to infection than normal valves. The bacteria that cause subacute bacterial endocarditis nearly always infect abnormal, damaged, or replacement valves. However, normal valves can be infected by some aggressive bacteria, especially if many bacteria are present. Although bacteria are not normally found in the blood, an injury to the skin, lining of the mouth, or gums (even an injury from a normal activity such as chewing or brushing the teeth) can allow a small number of bacteria to enter the bloodstream. Gingivitis (inflammation of the gums) with infection, minor skin infections, and infections elsewhere in the body may introduce bacteria into the bloodstream. Certain surgical, dental, and medical procedures may also introduce bacteria into the bloodstream. Rarely, bacteria are introduced into the heart during open-heart surgery or heart valve replacement surgery. In people with normal heart valves, usually no harm is done, and the body's white blood cells and immune responses rapidly destroy these bacteria. However, damaged heart valves may trap the bacteria, which can then lodge on the endocardium and start to multiply. Sepsis, a severe blood infection, introduces a large number of bacteria into the bloodstream. When the number of bacteria in the bloodstream is large enough, endocarditis can develop, even in people who have normal heart valves. If the cause of infective endocarditis is injection of illicit drugs or prolonged use of intravenous lines (sometimes used by doctors to deliver long-term intravenous therapies for people who have serious medical conditions), the tricuspid valve (which opens from the right atrium into the right ventricle) is most often infected. In most other cases of endocarditis, the mitral valve or the aortic valve is infected. An Inside View of Infective Endocarditis: this cross-sectional view shows vegetations (accumulations of bacteria and blood clots) on the four valves of the heart.
Risk Factors of Infective Endocarditis
The highest risk of endocarditis is in people who:
- Inject illicit drugs
- Have a weakened immune system
- Have a prosthetic (artificial) heart valve, pacemaker, or defibrillator
People who inject illicit drugs are at high risk of endocarditis because they are likely to inject bacteria directly into their bloodstream through dirty needles, syringes, or drug solutions. People who have a replacement heart valve are also at high risk. For them, the risk of infective endocarditis is greatest during the first year after heart valve surgery. After the first year, the risk decreases but remains slightly higher than normal. For unknown reasons, the risk is always greater with a replacement aortic valve than with a replacement mitral valve and with a mechanical valve rather than with a valve made from an animal. Other risk factors for infective endocarditis are:
- Birth defects of the heart (including defects of the heart valves) or major blood vessels, particularly a defect that allows blood to leak from one part of the heart to another
- Degeneration of the heart valves that occurs with aging
Birth defects are risk factors for children and young adults. Damage to the heart by rheumatic fever during childhood (rheumatic heart disease) is also a risk factor. Rheumatic fever has become a less common risk factor in countries where antibiotics have become widely available. In such countries, rheumatic fever is a risk factor for people who did not have the benefit of antibiotics during their childhood (such as immigrants). One risk factor for older people is degeneration of the heart valves, such as a floppy mitral valve (which opens from the left atrium into the left ventricle) or calcium deposits on the aortic valve (which opens from the left ventricle into the aorta).
Symptoms of Infective Endocarditis
Acute bacterial endocarditis usually begins suddenly with a high fever (102° to 104°F [38.9° to 40°C]), fast heart rate (> 100 beats per minute), fatigue, and rapid and extensive heart valve damage causing symptoms of heart failure. Subacute bacterial endocarditis may cause such symptoms as fatigue, mild fever (99° to 101°F [37.2° to 38.3°C]), a moderately fast heart rate, weight loss, sweating, and a low red blood cell count (anemia). These symptoms can be subtle and may occur for months before endocarditis results in blockage of an artery or damages heart valves and thus makes the diagnosis clear to doctors. In both acute and subacute bacterial endocarditis, arteries may become blocked if accumulations of bacteria and blood clots on the valves (called vegetations) break loose (becoming emboli), travel through the bloodstream to other parts of the body, and lodge in an artery, blocking it. Sometimes blockage can have serious consequences. Blockage of an artery to the brain can cause a stroke, and blockage of an artery to the heart can cause a heart attack. Emboli can also cause an infection in the area in which they lodge and/or block small blood vessels and damage organs. Organs that are often affected include the lungs, kidneys, spleen, and brain. Emboli also often travel to the skin and back of the eye (retina). Collections of pus (abscesses) may develop at the base of infected heart valves or wherever infected emboli settle. Heart valves may become perforated and may start to leak (causing regurgitation) within a few days. Some people go into shock, and their kidneys and other organs stop functioning (a condition called septic shock). Infections in arteries can weaken artery walls, causing them to bulge or rupture. A rupture can be fatal, particularly if it occurs in the brain or near the heart.
Other symptoms of acute and subacute bacterial endocarditis may include painful nodules under the skin. Tiny reddish spots that resemble freckles may appear on the skin and in the whites of the eyes. Small streaks of red (called splinter hemorrhages) may appear under the fingernails. These spots and streaks are caused by tiny emboli that have broken off the heart valves. Larger emboli may cause stomach pain, blood in the urine, or pain or numbness in an arm or a leg, as well as a heart attack or a stroke. Heart murmurs may develop, or preexisting ones may change. The spleen may enlarge. Prosthetic valve endocarditis may be an acute or subacute infection. Compared with infection of a natural valve, infection of a replacement valve is more likely to spread to the heart muscle at the base of the valve and can loosen the attachment of the valve to the heart. Alternatively, the heart's electrical conduction system may be interrupted, resulting in slowing of the heartbeat, which may lead to a sudden loss of consciousness or even death.
Diagnosis of Infective Endocarditis
Because many of the symptoms are vague and general, doctors may have difficulty making a diagnosis. Usually, people suspected of having acute or subacute infective endocarditis are hospitalized promptly for diagnosis as well as treatment. Doctors may suspect endocarditis in people with a fever and no obvious source of infection, especially if they have:
- Characteristic symptoms such as reddish spots on fingers or the whites of the eyes
- A heart valve disorder
- A replacement heart valve
- Recently had certain surgical, dental, or medical procedures
- Injected illicit drugs
Development of a heart murmur or a change in a preexisting heart murmur further supports the diagnosis. To help make the diagnosis, doctors usually do echocardiography and obtain blood samples to test for the presence of bacteria. Usually, three or more blood samples are taken at different times on the same day. These blood tests (blood cultures) may identify the specific disease-causing bacteria and the best antibiotics to use against them. In people with heart abnormalities, doctors test their blood for bacteria before giving them antibiotics.
Echocardiography, which uses ultrasound waves, can produce images showing heart valve vegetations and damage to the heart. Typically, transthoracic echocardiography (a procedure in which the ultrasound probe is placed on the chest) is done. If this procedure doesn't provide enough information, the person may undergo transesophageal echocardiography (a procedure in which the ultrasound probe is passed down the throat into the esophagus just behind the heart). Transesophageal echocardiography is more accurate and detects smaller bacterial deposits, but it is invasive and more costly. Computed tomography (CT) is used occasionally when transesophageal echocardiography does not provide enough information. Positron emission tomography (PET) is being used more often for the diagnosis of infective endocarditis of prosthetic heart valves and other devices placed in the heart. Sometimes bacteria cannot be cultured from blood samples. Special techniques may be needed to grow the particular bacteria, or the person may have taken antibiotics that did not cure the infection but did reduce the number of bacteria enough to be undetectable. Another possible explanation is that the person does not have endocarditis but has a disorder, such as a heart tumor, that causes symptoms very similar to those of endocarditis.
Prognosis of Infective Endocarditis
If untreated, infective endocarditis is always fatal. When treatment is given, the risk of death depends on factors such as the person's age, the duration of the infection, the presence of a replacement heart valve, the type of infecting organism, and the amount of damage done to the heart valves. Nonetheless, with aggressive antibiotic treatment, most people survive.
Prevention of Infective Endocarditis
As a preventive measure, people at high risk of infective endocarditis are given antibiotics before certain surgical, dental, and medical procedures. People at high risk include those with:
- Some birth defects of the heart
- A transplanted heart that has an abnormal valve
- A previous episode of infective endocarditis
Consequently, surgeons, dentists, and other health care practitioners need to know if a person has such risk factors. People who simply have an abnormal heart valve alone do not require antibiotics.
Treatment of Infective Endocarditis
- Antibiotics given by vein (intravenously)
- Sometimes heart surgery
Treatment usually consists of at least 2 weeks and often up to 8 weeks of antibiotics given by vein (intravenously) in high doses. Antibiotic therapy is almost always started in the hospital but may be finished at home with the help of a home nurse. Some people with certain types of infection may be able to switch to antibiotics taken by mouth after a period of intravenous treatment. Antibiotics alone do not always cure an infection, particularly if the valve is one that has been replaced. One reason is that the bacteria that cause endocarditis in a person with a replacement valve are often resistant to antibiotics.
Because antibiotics are given before heart valve replacement surgery to prevent infection, any bacteria that survive this treatment to cause infection are probably resistant. Another reason is that it is generally harder to cure infection on artificial, implanted material than in human tissue. Heart surgery may be needed to repair or replace damaged valves, remove vegetations, or drain abscesses if antibiotics do not work, if a valve leaks significantly, or if a birth defect connects one chamber to another. Dental treatment to eliminate any sources of infection due to mouth or gum disease is usually needed. Doctors usually also remove any devices (such as catheters) that may be a source of infection. Doctors may use a series of echocardiography examinations to ensure that the infected area is decreasing. They may also do echocardiography at the end of treatment to have a record of the appearance of the heart valves, because infective endocarditis may recur. Because of the risk of recurrence, ongoing dental care and good skin hygiene (to prevent any bacteria from entering the body through sores or wounds) are needed.
With only simple guidance and instruction, students will discover facts about their teacher through their own observations. Students will identify the principle of student discovery and learning.
Materials: student journals and pen/pencil.
The teacher has the students line up in pairs or threes at the back of the classroom near the teacher's personal office. The teacher instructs the students to go into his office in pairs/threes and look at it carefully for about 15-20 seconds; "Learn as much as you can about me in those 15-20 seconds, just from the things you observe. Then return to your seat and make notes about your observations in your study journal". After each group has made its observations and notes, spend the next five minutes discussing and answering the following two questions (make a list of the students' answers on the board as you go):
- What did you learn about me?
- How did you know that?
After the discussion you should have a surprising number of facts about the teacher listed on the board. Now, help the students realize what has just happened by discussing the following questions:
- How much of what you have learned about me did I tell you?
- How much did you discover on your own, and how were you able to do it?
Have the students reflect on their experience today in class and ask them to define as a class what they think the role of the teacher and the role of the student are. (List their definitions on the board and refer to them throughout the year.) Finish with a discussion of this topic:
- Why is learning most effective through student discovery and teacher guidance?
This module addresses the problem of setting up a name server on a UNIX computer. The UNIX name server software (and most other implementations of name server software) derives from the package known as Berkeley Internet Name Daemon (BIND). The BIND software is free, and the latest version is always available on the Internet. See the course Resources page for more information on where you can get BIND.
By the end of this module, you will be able to:
Zone files use several record types, including:
- SOA (Start of Authority)
- NS (Name Server)
- MX (Mail eXchanger, which identifies a mail server in the domain)
- A (host name to Address mapping)
- CNAME (Canonical Name, which defines an alias for a hostname in an A record)
- PTR (Pointer, which maps addresses to names)
It is not necessary to try to memorize or understand these record types at this point; a brief sample zone file appears at the end of this section, and you will have ample opportunity to use these records as we dig deeper into this subject.
Name resolution systems provide the translation between alphanumeric names and numerical addresses, alleviating the need for users and administrators to memorize long strings of numbers. There are two common methods for implementing name resolution:
- A static file on each host on the network, containing all the name-to-address translations (examples include the HOSTS and LMHOSTS files).
- A centralized server that all hosts on the network connect to for name resolution.
The two most common name resolution systems are the Domain Name System (DNS) and the Windows Internet Name Service (WINS). WINS was used in Microsoft networks to translate NetBIOS names to IP addresses, and is mostly deprecated. DNS is heavily utilized on the Internet and in systems such as Active Directory.
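To give a feel for how the record types listed above fit together, here is a minimal, hypothetical zone file sketch for the reserved documentation domain example.com; the host names and the 192.0.2.x addresses are placeholders for illustration only and are not part of this module's exercises:

$TTL 86400
@        IN  SOA   ns1.example.com. admin.example.com. (
                   2024010101   ; serial number
                   3600         ; refresh interval
                   900          ; retry interval
                   604800       ; expire time
                   86400 )      ; minimum TTL
         IN  NS    ns1.example.com.
         IN  MX    10 mail.example.com.
ns1      IN  A     192.0.2.1
mail     IN  A     192.0.2.2
www      IN  CNAME example.com.
; PTR records live in a separate "reverse" zone that maps addresses back to names, for example:
; 1.2.0.192.in-addr.arpa.  IN  PTR  ns1.example.com.

The trailing dots mark fully qualified names, the SOA record names the primary server and administrative contact for the zone, and lines beginning with a semicolon are comments.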
What is a time connective?
Time connectives are words that join phrases or sentences together to help us understand when something is happening. Words such as before, after, next, just then, shortly, afterwards, last, eventually, firstly, secondly, and thirdly are all time connectives.
What is a connective in a speech?
Connectives are words or phrases that join the thoughts of a speech together and indicate the relationship between them. Each speech should contain the following four connectives: transitions, internal previews, internal summaries, and signposts.
What type of connective is moreover?
Moreover is a sentence connective. It links two sentences. Moreover is not used to make a link inside a sentence.
What are connective devices?
Connectives are devices used to create a clear flow between ideas and points within the body of your speech–they serve to tie the speech together.
What is the difference between a connective and a conjunction?
Connectives join two separate ideas in two sentences or paragraphs. They usually come at the start of a sentence. Conjunctions, by contrast, join two ideas in the same sentence.
Is even though a connective?
Although/though are subordinating conjunctions used to connect a subordinate clause to a main clause, like after, as, before, if, since, that, even though, even if.
Do you put a comma before a connective?
When a conjunction joins two standalone "sentences" (i.e., independent clauses), a comma is required before the conjunction (in this example, the conjunction is "but").
Is often a time connective?
Often, time connectives are used in conjunction in long sentences to explain a series of events, for example: Time connectives are also used in this way in instructions to help us know what order something needs to be done in.
What is it called?
'It's called' is a contraction of 'it is called', which is passive voice. The subject is being called by someone else. This is often used to give the name of someone or something. 'It called' means the subject is doing the calling, whether literally or figuratively. That's active voice.
What is this & symbol called?
The & symbol is called an ampersand.
How do you text a typo?
Type the correct spelling of the word immediately after the asterisk. For example, if you entered "I cleaned the besement," you would notice that you misspelled "basement." On the next line, enter an asterisk and correct the spelling by entering "*basement."
Where do you place an asterisk?
As for the explanation at the bottom of the page (e.g., an author's, editor's or translator's note), place the asterisk immediately before the explanation. Notes referenced by an asterisk or other symbol should come before any numbered footnotes in the list at the bottom of the page.
Does an asterisk mean multiply?
In mathematics, the asterisk symbol * refers to multiplication. For example, consider the following expression: 7 * 6.
How do you text an asterisk?
The asterisk is made on your keyboard by holding the SHIFT key and pressing the 8 on the top number line. We use the asterisk in English writing to show that a footnote, reference or comment has been added to the original text.
What does the asterisk sign mean in Python?
The asterisk "*" is used in Python to define a variable number of arguments. The asterisk character has to precede a variable identifier in the parameter list. If a function defined with a parameter such as *x is called without any extra arguments, the value of x is an empty tuple.
What is a * in Python?
The asterisk (star) operator is used in Python with more than one meaning attached to it. For numeric data types, * is used as the multiplication operator:
>>> a = 10; b = 20
>>> a * b
200
>>> a = 1.5; b = 2.5
>>> a * b
3.75
>>> a = 2 + 3j; b = 3 + 2j
>>> a * b
13j
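The other meaning mentioned above, a parameter preceded by an asterisk that collects a variable number of arguments, can be sketched as follows (the function name demo is just an illustrative placeholder):
>>> def demo(*args):
...     # all extra positional arguments are collected into the tuple 'args'
...     return args
...
>>> demo()
()
>>> demo(1, 2, 3)
(1, 2, 3)
The same star can also be used at a call site, for example demo(*[1, 2, 3]), to unpack a sequence into separate positional arguments.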
The following are some strategies, principles, and proven tips that can make the student-centered learning environment a reality, and a success, in your classroom.
Turn your classroom into a community
In a traditional classroom, the teacher speaks, the students listen. In a student-centered classroom, the students speak, the teacher listens, interjects and facilitates conversation when needed, and then thanks the students for their participation. By involving students directly in the education process, and by enabling them to interact with one another, students begin to feel a sense of community. More importantly, they are shown that what they feel, what they value, and what they think are what matter most. In the student-centered classroom, the teacher acts not only as educator, but as both facilitator and activator.
Develop trust and communication
A student-centered classroom or learning environment cannot exist without trust and open communication. Trust and open communication are achieved by always being fair with students, listening to them, and allowing them to speak. Seem like a tall order? Well, it is. And it may not happen overnight. However, it's much easier to develop a student-centered classroom if you get started right away at the beginning of the year. Getting started at the beginning of the year sets the tone and lets students know what's expected of them for the rest of the year. At the beginning of each new school year, ask your students to discuss how they'd like their classroom experience to be. How should it sound, feel and function during the year? Are there any rules that should be put in place to ensure the classroom experience meets their expectations? Give the students 15 minutes to discuss among themselves and then write their suggestions on the whiteboard. You'll be surprised how many rules students will come up with. As you fill up your whiteboard with their ideas and suggestions, you'll find some common themes start to appear–your students want to be heard, seen, valued, and respected. This exercise, and similar exercises that can be performed throughout the year, communicate to students that what they say matters, and that you trust and value their input.
Find ways to integrate technology
Developing a student-centered classroom is all about engagement. The better you're able to engage students in any activity or project, the more involved they'll become in the learning process. In today's world, technology is one of the most effective tools for engaging students. Technology is not the future, it's the present. Everything kids do these days revolves around technology–specifically mobile technology. Allow and invite students to use free web tools to present, curate, and share information. When students are given the opportunity to integrate exciting web tools and technology into the learning process, they become eager, enthusiastic participants in just about any learning activity.
Create an environment where mutual respect and a quest for knowledge guide behavior–not rules
A classroom without rules? Seems a little far-fetched, doesn't it? Well, it may be if you plan on having a teacher-centered classroom where students spend half their time learning, and the other half trying to keep from being bored out of their skulls. So what's the key to the "no rules" approach? Engagement! If you keep activities engaging, behavior will rarely be an issue.
Having an engaging classroom environment, with engaging projects, engaging activities and engaging discussions, will foster mutual respect and encourage a pursuit of learning that leaves little time for disruptions.
Replace homework with engaging project-based learning activities
The jury is still out on the effectiveness of homework as it relates to improved grades and test scores. Some studies indicate there is a positive correlation between homework and improved grades and test scores, while other studies suggest little correlation. However, the entire premise for these studies is based on the assumption that grades and test scores are an accurate barometer for academic achievement and learning. In the teacher-centered classroom, in-class learning and student productivity are lower, making homework more necessary and regular testing essential for measuring learning and performance. In the student-centered classroom, where activities and projects are engaging, students become much more eager to learn, and in-class productivity is much higher. Where students complete schoolwork outside of the classroom in a student-centered learning environment, it's typically because they want to complete projects they're working on inside the classroom. Many teachers are now using engaging project-based learning (PBL) to teach math standards, sciences, technology and other core subjects to their students and to increase student productivity and the effectiveness of learning in the classroom. So what exactly is project-based learning? In short, it's learning through identifying real-world problems and developing real-world solutions. Not only is project-based learning extremely engaging when implemented correctly, but students learn as they journey through the entire project. Project-based learning also relies heavily on technology, where projects are driven by interactive web tools and solutions are presented using a multimedia approach. When implemented effectively, project-based learning can replace the need for out-of-class homework, and in-class learning becomes more productive.
Develop ongoing projects
One of the keys to developing a student-centered classroom and learning environment is to create ongoing projects for students. Ongoing projects promote mastery of the subject matter being taught and learned. Learning objectives and standards, for just about any subject matter, can be met through well-designed projects and activities. And providing students with various project choices allows them to demonstrate what they're learning.
Allow students to share in decision making
Creating a student-centered classroom requires collaboration. It requires placing students at the center of their own learning environment by allowing them to be involved in deciding why, what, and how their learning experience will take shape. Before students will be willing to invest the mental, emotional and physical effort real learning requires, they need to know why what they're learning is relevant to their lives, wants and needs. Explaining to students that they need to study a subject "because it's required for their grade level," or because "they need to know it to get into college," does not establish why in terms of relevance from the students' perspective. Such explanations result in lackluster performance, low motivation and poor learning. Students should determine, or guide, the selection of content matter used to teach skills and concepts.
What is taught and learned in a student-centered classroom becomes a function of students' interests and involves students' input and teacher-student collaboration. For example, when learning about American history, students might decide a class play, where each student acts the role of a key historical figure, would be preferable to writing a traditional report or biography. In this example, not only do students take ownership of the learning process, but all students benefit from the decisions of other students. The how in a student-centered learning environment is just as important as the why and the what. Students process information, understand and learn in different ways. Offering students the option of how they'll learn will allow each student to adopt the method of learning that will be most comfortable and effective for them. It also allows students to feel more invested in the learning process. Teachers should consider offering students various performance-based learning options that meet academic requirements.
Give students the opportunity to lead
Providing students the opportunity to lead in the classroom is a great way to develop a student-centered learning environment that fosters engagement and growth and empowers students to take ownership of the learning experience. Each day, consider allowing a few students to each take charge of an individual activity, even if the activity requires content skills beyond the level of the students. Then rotate students between leadership roles so each student gets the opportunity to lead an activity. You may even consider introducing the leadership role, or the activity they'll be leading, to each student the day before so they'll have time to prepare and really take ownership of their activity.
Get students involved in their performance evaluation
In a traditional classroom, performance evaluation and learning assessment are reduced to a series of numbers, percentages, and letter grades presented periodically on report cards, through activities and via standardized testing. These measures say little about what a student is learning and provide little in the way of useful feedback to the student so that they can improve their performance and achieve mastery. The student-centered learning environment is based on a form of narrative feedback that encourages students to continue learning until they demonstrate they've achieved mastery of a subject. This form of learning, feedback and evaluation encourages students to resubmit assignments and work on projects until mastery is achieved.
Gender inclusion is a concept that transcends mere equality. It's the notion that all services, opportunities, and establishments are open to all people and that male and female stereotypes do not define societal roles and expectations. While the United States has made dramatic strides in narrowing the gap between the sexes, statistics show that prevalent challenges exist, and we must address and eradicate them before our society can achieve true gender inclusion. Promoting gender inclusion through the creation of demonstrative examples of the concept in action is of the utmost importance as we strive toward this goal.
The Need for Gender Inclusion
Even as prominent sections of the population strive for a more inclusive society, the lack of gender inclusion persists as a major issue. According to a 2017 United Nations survey, various structural roles in the home, school, and workplace are prime movers behind the ongoing lack of inclusion.
Gender Gaps in the Workforce
The discrepancy in workplace wages is one of the biggest indicators that a lack of gender inclusion still exists. While studies indicate that the gender gap in pay has decreased since 1980, there is still a noticeable difference between what women earn and what men earn. The following resources demonstrate how this gap looks in modern society.
By Fatma Katr
In a fast-paced world where rising human relations issues are often misrepresented or misinterpreted, higher education institutions are moving forward to diversify teaching methods when introducing these issues. One of those methods involves the use of virtual reality (VR) to evoke emotion and empathy among students. Like any learning technology, VR can be integrated into university curriculums in useful ways, in the sense that it is able to tap into student emotions and their concepts of certain issues, to challenge stereotypes and bias. Despite the scarcity of studies on the use of VR in classrooms, evidence examined so far suggests that VR is a powerful tool for provoking empathy and self-efficacy, according to an article by The Hechinger Report. This article will delve into the true experience of VR and its impacts on provoking empathy among students:
- Why empathy needs to be taught more in schools.
- The process of using VR to provoke empathy.
- The challenges that may arise out of such an experience.
- Pathways to a successful VR experience.
- What teachers using VR need to know to enhance the experience for their students.
Why empathy needs to be taught more in schools
Using the power of VR to develop empathy has become the focus of several educational institutions, including Stanford University's Virtual Human Interaction Lab (VHIL). The lab explores the idea of virtual physical embodiment by allowing participants to inhabit avatars of a different age, race, or social standing. Building on Dr. Cogburn's research and previous VHIL studies that have examined how virtual reality can induce empathy for people different from oneself, the project continues to examine the effects of this immersive virtual experience on changes in psychological processes, including empathy/social perspective taking, racial bias, and decision making. For example, the lab created "1000 Cut Journey," an immersive virtual reality experience that allows you to walk in the shoes of Michael Sterling, a Black male, and encounter racism first-hand, as a young child, an adolescent, and a young adult. Understanding the social realities of racism is critical to promoting effective and collective social action. Participants who inhabited avatars of a different race in a virtual world later scored lower in tests of subconscious racial bias, and young people who "wore" an elderly avatar were more inclined to save for retirement. Another project of VHIL is called "Becoming Homeless: A Human Experience," which explores the Fundamental Attribution Error. Coined by Stanford psychologists, this concept describes how we blame others when bad things happen to them, but blame external situations when bad things happen to us. When it comes to homelessness, there is a misconception that losing one's home is due to who you are and the choices you make. Becoming Homeless seeks to counter this irrational tendency through an immersive VR experience that allows a participant to spend days in the life of someone who can no longer afford a home, interacting with their environment as they attempt to save their home and protect themselves and their belongings. It allows the participant to virtually walk in another's shoes and face the adversity of living with diminishing resources. VHIL is not the only institution taking VR to another human level in evoking empathy. The International Red Cross produced VR films to counter "compassion fatigue" and boost donations.
A VR-based curriculum is also applied by the NGO Global Nomads Group in "One World, Many Stories," a series of 360-degree biographical videos portrayed from the perspective of a boy in eastern Kentucky, a young black woman in New York City, and a young man in Amman, Jordan. Students are asked what ideas they had about the characters and are then asked to add their own scenes to those videos that would strengthen the storylines of these characters and show how they could be better.
The process of using VR to provoke empathy
VR allows a seamless process for students to step into someone else's shoes, like in the scenarios above, through what the research paper "Learning Empathy through Virtual Reality" described as the "body ownership illusion." The research pointed out that physical presence in VR is not necessarily about having a body, but rather about having the convinced feeling of actually "being there" in the experience. While the users or college students are not physically present in a certain scenario relevant to human issues, they feel that they are in that set of events, and thus start adopting behaviors to inhabit their virtual environments. In simply defining the process that a student undergoes when learning about human issues, the process is described as "swapping bodies with another person" — in most cases, a digital avatar. Because students inhabit an invisible self in such an experience, they may be driven to present subjective anxieties. The research mentioned an example of a participant's real hand feeling stiffer and heavier in response to sound feedback of a hammer hitting the user's virtual hand. Sound manipulation techniques have proven to enhance VR experiences in ways that increase self-identification and self-location in relation to the participant's virtual body. Therefore, according to the research, this shifts the participant's perception of touch toward the virtual body. A similar idea was portrayed by Philippe Bertrand, one of the researchers behind "Learning Empathy through Virtual Reality" and a research assistant at Université Paris Descartes, in his collection called the Library of Ourselves. It is an archive of real stories pre-recorded in first-person perspective, through immersive 360-degree cameras positioned on the subjects' heads. Bertrand mentioned that the characters included a Syrian refugee in France; users in the VR experience are invited to interact with the refugee's story and with pictures of his Syrian family on his mobile phone. They are also invited to march in a protest with a Syrian flag and interact with his friends at the university in France. Bertrand is now working with the Ministry of National Education in France to tap into different intergroup empathy-related issues, including presenting the perspective of students with dyslexia, who face cognitive and social-emotional challenges. He is also involved in another project where 13-year-old students are encouraged to produce VR films relevant to bullying at school, as part of a broader discussion on bullying. "Students tend to demonstrate self-reported empathy towards the subjects, provided they are reminded of important details of the experience, and engage actively in a discussion about the content and about the concepts of empathy phenomena themselves applied to their everyday lives," Bertrand said.
Potential challenges that may arise with these uses
While the challenges of using VR in empathy-related curriculums are still not fully examined, Marientina Gotsis, a research assistant professor at the University of Southern California, believes that one could fall victim to novelty and not fully emotionally process what is experienced. "It is challenging for storytellers to produce a VR experience that does justice to the actual lived experience of others," she added. Gotsis said that a lot of work still needs to be done in using VR to relate to empathy, specifically that the current work is technical and relatively "naïve." Another aspect to be considered when expanding the use of VR in human-related issues, according to Gotsis, is that unlike empathy, compassion is harder to provoke because it requires a moral and ethical compass and a thoughtful interaction design. There is also a chance that the VR experience could be misused in class. Bertrand explained that although intergroup interactions may reduce bias, for example, interventions along these lines must consider the participant's motivations. In other words, forcing students to go to an intervention related to empathy and obliging them to behave in a certain way may backfire and fail to develop empathic capacities.
One of the pathways to a successful VR experience
Based on past experience and research, VR has so far proved to be an effective machine for promoting empathy; however, researchers like Bertrand argue that it should never be used alone. In enhancing an experience, VR should be integrated with follow-ups conveyed through pedagogies of empathy in which students would present, explain, discuss, and debrief about their experiences. VR clearly has immense potential in education, but researchers agree that there is still a long road to go until it can be fully and effectively integrated into empathy-related curricula.
What teachers using VR need to know to enhance the experience for their students
There are many opportunities to enhance the VR experience in the curriculum to best evoke empathy; however, Bertrand argues that VR should not be considered an empathy machine in itself. Teachers need to debate with users about the content presented after their VR experience, and provide psychological support in case the experience leaves a strong emotional effect. Educators could also encourage their students to express their fear, anger, and/or anxiety about a certain topic through the VR experience instead of in the real world. That form of expression is practice that will assist them in tackling struggles in real life.
About the author
Fatma is a multimedia journalist who has reported on different beats including politics, business, education, gender issues, human rights and foreign policy. Her reporting is focused on the Middle East region, where she majored in print and electronic journalism.
Los Angeles, California - A new study by scientists at the UCLA Henry Samueli School of Engineering and Applied Science and Harvard Medical School offers surprising new insight into the genetic ancestry of modern humans. The research, published today in the journal Current Biology, also rewrites the timeline of when ancient humans interbred with other hominids by thousands of years. Scientists have long known that most of the world's population, outside of Africa, has a little bit of Neanderthal DNA in its genetic makeup, meaning that humans and Neanderthals interbred at some point. But the new study suggests that many people might actually have a little bit of DNA that can be traced back to Denisovans — a population of ancient extinct hominids who lived alongside humans and Neanderthals until tens of thousands of years ago. And the research shows that humans interbred with Denisovans even more recently than they did with Neanderthals — perhaps as long as 100 generations later. The researchers used a library of genomic data for more than 250 modern human populations around the world and compared it to the DNA found in the Denisovan fossils. Then, using sophisticated modeling techniques, the scientists — a UCLA computational biologist and geneticists from Harvard — found that people living today in India, Nepal, Bhutan, Tibet and other parts of South Asia carry more Denisovan DNA than existing genomic models had suggested. Denisovans were first described in 2010 through DNA extracted from a tooth and a finger bone fragment found in a Siberian cave in 2008. Genetically distinct from humans and Neanderthals, Denisovans diverged from the human family tree about 500,000 years ago. Previous studies showed that as much as 5 percent of the DNA of people who are native to Australia, Papua New Guinea and other parts of Oceania descends from Denisovans. "'Who are we?' and 'Where did we come from?' have been among the most essential questions in the human story," said Sriram Sankararaman, the study's co-corresponding author and a UCLA assistant professor of computer science. "We did not even know about this important group until just a few years ago, and our study yields some insights on where Denisovans fit into this story. This also shows some new paths of interest that computational biology can explore." Researchers applied several genomic and statistical techniques to a rich dataset that included 257 genomes from 120 non-African populations. The study found that Denisovans and humans mated as recently as 44,000 to 54,000 years ago. Neanderthals had previously been found to have interbred with humans approximately 50,000 to 60,000 years ago. The researchers also discovered that both Denisovan and Neanderthal ancestry has been deleted from male X chromosomes, as well as from genes expressed in male testes. The paper suggests this has contributed to reduced fertility in modern men, which it notes is common in hybrids of two divergent populations. The paper's other corresponding author was David Reich, a professor of genetics at Harvard Medical School. Reich is also affiliated with the Broad Institute of MIT and Harvard and the Howard Hughes Medical Institute. Other authors were Swapan Mallick and Nick Patterson, both of Harvard. Sankararaman also holds a faculty appointment in human genetics at the David Geffen School of Medicine at UCLA. He started the study while a postdoctoral scholar at Harvard. The research was funded by the National Institutes of Health.
Humanism and Human Rights
Who or what is the 'human' of human rights and the 'humanity' of humanitarianism? The question sounds naïve, silly even. Yet, important philosophical and ontological questions are involved. If rights are given to beings on account of their humanity, 'human' nature with its needs, characteristics and desires is the normative source of rights. The definition of the human will determine the substance and scope of rights. Even if we knew who the 'human' is, when do its existence and the associated rights begin and when do they end? Are foetuses, designer babies, clones, those in a permanent vegetative state fully human? What about animals? The animal rights movement, from deep ecology and anti-vivisection militancy to its gentler green versions, has placed the legal differentiation between human and animal firmly on the political agenda and has drafted a number of bills of animal entitlements. This essay examines the ideology of humanism in its various transformations and permutations. It starts with the history of the concepts of humanity and human nature.
The concept of humanity is an invention of modernity. Both Athens and Rome had citizens but not 'men', in the sense of members of the human species. Free men were Athenians or Spartans, Romans or Carthaginians, but not persons; they were Greeks or barbarians but not humans. The word humanitas appeared in the Roman Republic. It was a translation of paideia, the Greek word for culture and education, and was defined as eruditio et institutio in bonas artes.1 The Romans inherited the idea of humanity from Hellenistic philosophy, in particular Stoicism, and used it to distinguish between the homo humanus, the educated Roman, and the homo barbarus. The 'human man' was regulated by the jus civile, had some knowledge of Greek culture and philosophy and spoke in a cultivated language — he was like a graduate who read Greats at Oxford and speaks with a slightly posh accent. The homo barbarus was subjected to the jus gentium, lacked the sophistication of the real man and lived in the periphery of the empire. The first humanism was the result of the encounter between Greek and Roman civilisation and was used by the Romans to impress their superiority upon the world. Similarly, the early modern humanism of the Italian Renaissance retained a nostalgia for a lost past and the exclusion of those who are not equal to that Edenic period. It was presented as a return to Greek and Roman prototypes and targeted the barbarism of medieval scholasticism and the gothic north.
A different conception of humanitas emerged in Christian theology, superbly captured in the Pauline statement that there is no Greek or Jew, free man or slave. All men are equally part of spiritual humanity, which is juxtaposed to the deity and the inanimate world of nature. They can all be saved through God's plan of salvation. Universal equality — albeit of a spiritual character — a concept unknown to the classics, entered the world stage. But the religious grounding of humanity was undermined by the liberal political philosophies of the 18th century. The foundation of humanity was transferred from God to (human) nature, initially perceived in a deistic and today a scientific manner. By the end of the 18th century, the concept of 'man' came into existence and soon became the absolute and inalienable value around which the whole world revolved.
Humanity, man as species existence, entered the historical stage as the peculiar combination of classical and Christian metaphysics. For humanism, there is a universal essence of man and this essence is the attribute of each individual who is the real subject.2 Michael Ignatieff is typical when he writes that 'our species is one, and each of the individuals who compose it is entitled to equal moral consideration.'3 As species existence, man appears without differentiation or distinction in his nakedness and simplicity, united with all others in an empty nature deprived of substantive characteristics except for his free will, reason and soul — the universal elements of human essence. This is the man of the rights of man, someone without history, desires or needs, an abstraction that has as little humanity as possible, since he has jettisoned all those traits and qualities that build human identity. If, according to Heidegger, subjectivity is the metaphysical principle of modernity, it is legal personality, the 'man' of the rights of man, the subject of rights, who exemplifies and drives the new epoch. A minimum of humanity is what allows man to claim autonomy, moral responsibility and legal subjectivity.
The idea that the essence of humanity is to be found in a human cipher lacking the characteristics which make each person a unique being is bizarre. It is still the dominant ideology of liberalism. Francis Fukuyama recently repeated the 18th century orthodoxies in the context of genetic engineering. '[W]hen we strip all of a person's contingent and accidental characteristics away, there remains some essential human quality underneath that is worthy of a certain minimal level of respect — call it Factor X. Skin color, looks, social class and wealth, gender, cultural background, and even one's natural talents are all accidents of birth relegated to the class of nonessential characteristics. . . But in the political realm we are required to respect people equally on the basis of their possession of Factor X.'4 For Fukuyama, the differences that create our identity are superficial and accidental, contingent characteristics of no major importance. In this, he repeats Rawls's claim that the principles of justice can only be agreed by people who have no knowledge of their specific talents, needs and desires, which are concealed under a veil of ignorance.5 But unlike Rawls and Habermas, who discover the elusive factor defining the essence of humanity in transcendental characteristics and species ethics, Fukuyama seeks it in our genetic inheritance. We may all be different, but behind the accidental idiosyncrasies a universal equivalence lurks, a certain je ne sais quoi which endows us with our human dignity.
Yet, if we look at the empirical person who enjoys the 'rights of man', he is and remains a 'man all too man' — a well-off citizen, a heterosexual, white, urban male. This man of rights condenses in his identity the abstract dignity of humanity and the real prerogatives of belonging to the community of the powerful. In other words, the accidental surface differences of race, colour, gender and ethnicity have been consistently defined as inequalities supporting the domination of some and subjection of others, despite the common underlying factor X. One could write the history of human rights as the ongoing and always failing struggle to close the gap between the abstract man and the concrete citizen; to add flesh, blood and sex to the pale outline of the 'human'.
The persistence throughout history of barbarians, inhuman humans, the 'vermin', 'dogs' and 'cockroaches' of our older and more recent concentration camps, such as Guantanamo Bay and Abu Ghraib, the potential of world annihilation by humanity's creations, as well as recent developments in genetic technology and robotics, indicate that no definition of humanity is definite or conclusive. Humanity's mastery, like God's omnipotence, includes the ability to redefine who or what counts as human and even to destroy itself. From Aristotle's slaves to designer babies, clones and cyborgs, the boundaries of humanity have been shifting. What history has taught us is that there is nothing sacred about any definition of humanity and nothing eternal about its scope. No common 'factor X' exists.
The meaning of humanity, as the ground normative source, is fought over today by the universalists and relativists, the two more prominent expressions of postmodern humanism. The universalist claims that cultural values and moral norms should pass a test of universal applicability and logical consistency and often concludes that, if there is one moral truth but many errors, it is incumbent upon its agents to impose it on others. The relativists and the communitarians (since relativism is a meta-ethical position) start from the obvious observation that values are context-bound and try to impose them on those who disagree with the oppressiveness of tradition. In Kosovo, Serbs massacred in the name of threatened community (the Serb nation should keep Kosovo, its 'cradle', in perpetuity and oppress the Albanians who lived there in a large majority). The allies bombed in the name of threatened humanity and in support of universal rights, even though the link between the rights of Kosovar Albanians and the bombing of civilians in Belgrade is not immediately apparent. Both positions, when they define the meaning and value of humanity fully and without remainder, find everything that resists them expendable. They exemplify, perhaps in different ways, the contemporary metaphysical urge: they have made an axiomatic decision as to what constitutes the essence of humanity and follow it with a stubborn disregard for opposing arguments.
The individualism of universal principles forgets that every person is a world and comes into existence in common with others, that we are all in community. Being in common is an integral part of being self: self is exposed to the other, it is posed in exteriority, the other is part of the intimacy of self. Before me comes the (m)other. I am I because the other and language have called me 'you', 'Costas'. My face is always exposed to others, always turned toward an other and faced by him or her, never facing myself. On the other hand, being in community with others is the opposite of the communitarian common being or belonging to an essential community. Most communitarians define community through the commonality of tradition, history and culture, the various past crystallisations whose inescapable weight determines present possibilities. The essence of the communitarian community is often to compel or 'allow' people to find their 'essence', common 'humanity' now defined as the spirit of tradition, or the nation, religion, the people, the leader. We have to follow traditional values and exclude what is alien and other.
Community as communion accepts human rights only to the extent that they help submerge the I into the We, all the way till death, the point of 'absolute communion' with dead tradition.6
If we abandon the essentialism of humanity, human rights appear as highly artificial constructs, a historical accident of European intellectual and political history. The concept of rights belongs to the symbolic order of language and law, which determines their scope and reach with scant regard for ontologically solid categories, like those of man, human nature or dignity. The 'human' of rights or the 'humanity' of humanitarianism can be called a 'floating signifier'. As a signifier, it is just a word, a discursive element, neither automatically nor necessarily linked to any particular signified or meaning. On the contrary, the word 'human' is empty of all meaning and can be attached to an infinite number of signifieds. As a result, it cannot be fully and finally pinned down to any particular conception because it transcends and overdetermines them all.7 But the 'humanity' of human rights is not just an empty signifier; it carries an enormous symbolic capital, a surplus of value and dignity endowed by the revolutions and the declarations and augmented by every new struggle that adopts the rhetoric of human rights. This symbolic excess turns the 'human' into a floating signifier, into something that combatants in political, social and legal struggles want to co-opt to their cause, and explains its importance for political campaigns.
From a semiotic perspective, rights do not refer to things or other material entities in the world but are pure combinations of legal and linguistic signs, words and images, symbols and fantasies. No person, thing or relation is in principle closed to the logic of rights. Any entity open to semiotic substitution can become the subject or object of rights; any right can be extended to new areas and persons, or, conversely, withdrawn from existing ones. Civil and political rights have been extended to social and economic rights, and then to rights in culture and the environment. Individual rights have been supplemented by group, national or animal rights. The Spanish MP Francisco Garrido recently moved a resolution to create human rights for great apes, the animals genetically closest to humans.8 The right to free speech or to annual holidays can be accompanied by a right to love, to party or to have back episodes of Star Trek shown daily. Or, as a British minister put it, we all have a human right to properly functioning kitchen appliances. If something can be put into language, it may acquire rights and can certainly become the object of rights. The only limits to the ceaseless expansion or contraction of rights are conventional: the effectiveness of political struggles and the limited and limiting logic of the law.
Human rights struggles are symbolic and political: their immediate battleground is the meaning of words, such as 'difference' and 'similarity' or 'equality' and 'otherness', but if successful, they have ontological consequences — they radically change the constitution of the legal subject and affect people's lives. A refugee whose claim to enter the recipient country has been constructed in human rights terms is a more privileged subject — more 'human' — than someone else, whose claim is seen as simply economic, turning him into a 'bogus' subject.
Similarly, the claim of gays and lesbians to be admitted to the army has a greater chance of success if presented as a rights-claim about discrimination than if it attacks the irrationality of the exclusion on administrative law grounds.9 Its success has wider repercussions than the protection of army employment. The claimants' position changes as a result; their identity becomes fuller and more nuanced through the official recognition of their sexuality. If we accept the psychoanalytic insight that people have no essential identities outside of those constructed in symbolic discourses and practices,10 a key aim of politics and of law is to fix meanings and to close identities by making the contingent, historical links between signifiers and signifieds permanent and necessary. But such attempts can succeed only partially because the work of desire never stops. If human rights are the cause and effect of desire, they do not belong to humans; human rights construct humans.11
We can conclude that 'humanity' cannot act as the a priori normative source and is mute in the matter of legal and moral rules. Humanity is not a property shared, it has no foundation and no ends, it is the definition of groundlessness. It is discernible in the incessant surprising of the human condition and its exposure to an undecided open future. Its function lies not in a philosophical essence but in its non-essence, in the endless process of redefinition and the continuous but impossible attempt to escape fate and external determination. In this ontology, what links me to the other is not common membership of humanity, common ethnicity or even common citizenship. Each one is a unique world, the point of knotting of singular memories, desires, fantasies, needs, planned and random encounters. This infinite and ever changing set of events, people and thoughts is unrepeated and unrepeatable, unique for each of us like our face, unexpected and surprising like a coup de foudre. Each one is unique but this uniqueness is always created with others; the other is part of me and I am part of the other. But my being — always a being together — is on the move, created and recreated in the infinite number of encounters with the unique worlds of other singular beings. This is the ontology of the cosmopolitanism to come.
Humanity has no intrinsic normative value. It is continuously mobilised, however, in political, military and, recently, humanitarian campaigns. Humanitarianism started its career as a limited regulation of war but has now expanded and affects all aspects of culture and politics. The next part examines the military humanitarianism of our recent wars while the last will explore the effects of humanitarianism on the citizens of the Western world.
The humanitarian movement started in the 19th century. According to received opinion, the key event was the foundation of the International Committee of the Red Cross by Jean-Henri Dunant after he witnessed the widespread slaughter of combatants at the 1859 battle of Solferino between France and Austria. Dunant spearheaded the adoption of the Geneva Convention of 1864, under which governments agreed to allow access to battlefields for neutral field hospitals, ambulances and medical staff. By WWI, the Red Cross had established itself as the largest humanitarian organisation responsible for monitoring the Geneva Conventions, which codified the laws of war and established rules for the humane treatment of prisoners of war.
Traditional humanitarian law is the body of international law which attempts to regulate the use of force during armed conflict, the modern version of the jus in bello. Its core principles have developed from just war theory and are rather basic and broad: the use of force must be a last resort; a distinction must be maintained during hostilities between military personnel and civilians; all efforts must be made to minimise non-combatant casualties; finally, the use of force must be proportional to its objective. A less technical use of the term humanitarianism refers to the efforts by organisations and governments to alleviate mass suffering after major natural catastrophes and to aid populations caught in war or civil strife. Combining both types of humanitarianism and enjoying the strongest reputation, the Red Cross adopted, in 1965, seven fundamental principles which became the rule-book of humanitarian action: humanity, impartiality, neutrality, independence, voluntary service, unity and universality.
The main characteristic of the Red Cross, and of humanitarianism more generally, was supposed to be, as these principles indicate, its non-political character and its neutrality towards the protagonists of wars and natural disasters. Other charities and Non-Governmental Organisations (NGOs) such as Oxfam, Save the Children and Christian Aid adopted the same non-political posture. Amnesty International, for example, campaigned for prisoners of conscience without regard for their political views. Early humanitarianism did not make distinctions between good and bad wars, just and unjust causes or, even, between aggressors and innocents. It was committed to the direct and immediate reduction of human suffering through the protection of prisoners of war and civilians involved in conflict or through famine relief and medical aid. As interest in development and human rights grew in the 1970s and 1980s, NGOs adopted these concerns and promoted policies of popular appeal. A high point of NGO humanitarianism was the Live Aid campaign in 1984-5 to raise funds for relief of the Ethiopian famine. Carried out in the face of governmental indifference, humanitarian aid had few political conditions attached and avoided association with western foreign or defence objectives. Indeed, up to 1989, the division between state-led development aid with strategic ends and ideological priorities and politically neutral, needs-based humanitarianism was clear.
But this clear distinction has been blurred after the end of the Cold War. The roots of the new humanitarianism lie in the growing western involvement in the internal affairs of the developing world and the use of economic sanctions and force for humanitarian purposes. The move beyond the aims of saving lives and reducing suffering to the more muscular recent humanitarianism has two strands. The first grew out of conflict situations. It extended involvement from the provision of immediate assistance to victims to a commitment to solidarity and advocacy and a concern for the long-term protection and security of groups at risk. The second strand, which deals with natural catastrophes such as famines, droughts or the recent tsunami, expressed an interest in the long-term development of poor countries beyond the failing aid policies of governments. This broader and deeper humanitarianism was obliged to make strategic choices about aims to be prioritised and groups to be assisted.
Once the neutrality principle was broken, the road was opened, in the 1990s, for various NGOs to advocate Western military intervention for humanitarian purposes. This politicisation of aid work is in conflict with the apolitical profile on which the public appreciation for NGOs depends. As a result, NGOs have become extremely concerned to re-assert their traditional neutrality and non-political reputation. One way of reconciling conflicting priorities and justifying policy choices was to present them in the language of morality and ethics instead of that of politics. Human rights have become the preferred vocabulary of this new type of humanitarianism and are often used to disguise complex and contentious decisions. In some conflicts, the justice of the cause is clear; in most, it is not.
The blurring of the line dividing human rights and humanitarianism has led to disturbing consequences. Some policies and regulatory regimes have been translated into the language of rights, others have not. The treatment of war prisoners, for example, has been largely displaced from the international law language of regulation and limits on state action into that of prisoners' rights. The effects of this change are evident in the American assertion that the Guantanamo Bay prisoners have no rights because they are evil murderers and a threat to western security. This is a clear violation of the Geneva Conventions but can be justified in the language of human rights. Human rights, with their principles and counter-principles and their concern to create an equilibrium of entitlements, are much easier to manipulate than clear proscriptions of state action. The emphasis placed by the British government on the protection of the rights of the majority from terrorism, after the July 2005 London bombings, is consistent with human rights legislation. Most substantive rights under the European Convention on Human Rights can be limited or restricted in the interests of national security or for the protection of the rights of others. When national security becomes human security, when 'the others' are defined as anyone who may be affected by a terrorist act (potentially everyone), there is very little these overbroad qualifications disallow. In this sense, the annoyance of the British government with judges, who found detention without trial and the control orders imposed on terrorist suspects in violation of human rights, was justified. As the scope of the human rights language expands and most political and social claims and counter-claims are expressed in it, the protection afforded by clearly formulated prohibitions of international law becomes weakened. When everything becomes actually or potentially a right, nothing attracts the full or special protection of a superior or absolute right.
These developments have led to the convergence between humanitarian work and governmental rhetoric and policies. As David Kennedy, an influential Harvard international lawyer, has recently argued, contemporary humanitarianism is no longer the cry of dissidents, campaigners and protesters but a common vocabulary that brings together the government, the army and erstwhile radicals and human rights activists.12 The dissidents have stopped marching and protesting. Instead they have become bit players in governmental policy-making and even in military planning. Kennedy approves of this development and reserves his strongest criticisms for the remaining radicals, idealists and activists.
The indictment is long: radical humanitarians believe in abstract generalisations, they do not accept responsibility for the long-term consequences of their actions and are happy to criticise governments from the margins; unlike governments and policy-makers, they do not carry out cost-benefit analyses of their activities; their commitment to broad principles of improving humanity, to be carried out through constitutional reform, legal measures and institution-building, blinds them both to the inadequacy of the tools and to the adverse effects of their activities; they see themselves as outsiders and avert their eyes from power generally and their own power specifically.13 Kennedy concludes that humanitarians believe hubristically that history will progress through the adoption of their principles and recipes. These 'do-gooder' relics of a previous era judge power extrinsically 'from religious conviction, natural right, positive law' and pathetically try to preserve their 'ethical vision'. But this has been changing. Since at least the end of the Cold War 'many humanitarian voices have become more comfortable speaking about the completion of their realist project.'14 People who have spent a lifetime feeling marginal to power often find it difficult to imagine that they could inherit the earth in quite this way. They have been admitted into the corridors and back rooms of power and this unnatural coupling paves the way for the future.
This development may be shocking news to Amnesty International members stuffing envelopes to support political prisoners. There is ample evidence to support it however. Colin Powell stated before the Afghanistan war that 'NGOs are such a force multiplier for us, such an important part of our combat team . . . [We are] all committed to the same, singular purpose to help humankind . . . We share the same values and objectives so let us combine forces' on the side of civilisation.15 Before the Iraq war, aid organisations were offered grants by the American government to join the coalition. They had to show attachment to American moral values and concern for civilians. The Red Cross and Oxfam argued against that war, rightly anticipating a humanitarian catastrophe, while Médecins Sans Frontières, an organisation that campaigned actively for the Kosovo war, remained neutral. Bernard Kouchner, its founder, has been credited with coining the term droit d'ingérence humanitaire and became the UN-appointed viceroy of Kosovo. Most NGOs however accepted government funding and joined the war effort. They became subcontractors competing with private companies for market share. As the USAID director put it, NGOs under US contracts 'are an arm of the US government and should do a better job highlighting their ties to the Bush admin if they want to continue receiving money.'16 The head of programmes for the US Agency of International Development in Afghanistan agreed: 'We're not here because of the drought and the famine and the condition of women. We're here because of 9/11. We're here because of Osama bin Laden.'17 Aid NGOs now work with the military in post-conflict zones, assuming responsibility as public service subcontractors for the provision of health and education.
Humanitarian governance is 'imperial because it requires imperial means: garrison of troops and foreign civilian administrators, and because it serves imperial interests.'18 As a result of the perception that NGOs are no longer impartial, aid officers have been under continuous attack in Afghanistan, where 'the humanitarian emblems designed to protect them now identify them as legitimate targets'19 while international NGOs have largely pulled out of Iraq after lethal attacks on the UN compound, the Red Cross headquarters and NGO offices. Michael Hardt and Antonio Negri compare NGOs with the Dominicans and the Jesuits of colonialism, arguing that they act 'as the charitable campaigns and mendicant orders of Empire'.20 It is not wrong to say that the media campaigns of NGOs have prepared public opinion for 'humanitarian wars' and are, willingly or inadvertently, integral parts of the new order, supporting and promoting its moral claims. According to David Kennedy, humanitarian policy-makers working for governments, international institutions and international NGOs have adapted much better than their activist counterparts to the needs of 'ruleship'. The humanitarians dealing with the use of force in close collaboration with the army are a prime example. The military has given up its exclusive claim to power and the radicals their traditional attraction to pacifism in order to participate fully in military policy-making and post-conflict governance. Humanitarian lawyers and NGO officers are fully involved in the planning and conduct of wars. Like their newly-found military comrades, they see force as a tool towards ends and they balance legal and moral rules in instrumental terms. The common language unites humanitarians and military in balancing acts, trade-offs and calculations of consequences. The vocabulary has 'drifted free of legal roots and has become the mark of civilisation and participation in a shared ethical and professional common sense community'. This pragmatic merger of military and humanitarian roles has allegedly led the military to 'best practice' and has 'civilised warfare'. In the lead-up to the Iraq war, we are told, humanitarians and the military spoke exactly the same language, with the reformed former radicals apparently interpreting legal limitations on the conduct of war more permissively than the military.21 The military, for its part, realising the cachet of humanitarianism, has adopted a not dissimilar rhetoric. A few examples can illustrate the point. According to Michael Ignatieff, the Kosovo air raids were decided in the NATO Brussels headquarters with military planners and lawyers peering over screens, the lawyers advising on the legalities before a bombing raid was ordered.22 While this elaborate procedure did not limit civilian casualties, it meets the definition of a 'humane war'.23 Colonel Tim Collins, the commander of the Irish Guards during the Iraq war, was an exemplary humanitarian soldier when he told his troops before crossing into Iraq to join the campaign: 'We are going to Iraq to liberate and not to conquer. We will not fly our flags in their country . . . The only flag that will be flown in that ancient land is their own . . . Iraq is steeped in history; it is the site of the Garden of Eden, of the Great Flood and the birthplace of Abraham. Tread lightly there.'24 Collins soon realised that occupation lite is not an option and changed his views.
Another telling example was the practice of American aircraft to drop aid packages in Afghanistan in between bombing raids. 'Cruise missiles and corned beef' could be the motto of military humanitarianism. David Kennedy concludes after a visit to an aircraft carrier that humanitarian norms have been 'metabolised into the routines of the US Navy.'25 The military is the world's 'largest human rights training institution' and the vocabulary of humanitarianism is nowhere 'as effective as it seemed to be aboard the USS Independence.'26 As Michael Walzer, another reformed radical, puts it, 'I am inclined to say that justice has become, in all Western countries, one of the tests that any proposed military strategy or action has to meet . . . moral theory has been incorporated into war-making as a real constraint on when and how wars are fought.'27 But we should take such bravura statements with a pinch of salt. General Wesley Clark, the commander of the Kosovo operation, complained that Europe's 'legal issues' were 'obstacles to properly planning and preparing' the war and adversely affected its operational effectiveness. 'We never want to do this again,' he concluded, and Iraq confirmed his prediction. Only lip service was paid to the legal concerns.28 Even if we discount the exaggerations and excessive missionary zeal of the military-humanitarian complex, it looks as if an imperial officer corps and bureaucracy is emerging. The unnatural coupling of ultimate power and its erstwhile critics appears to be well under way. Disciplines, professions and tasks have been cross-pollinated, creating a new professional class, the 'humanitarians' or 'internationals'. The term applies to 'people who aspire to make the world more just, to the projects they have launched over the past century in pursuit of that goal, and to the professional vocabularies which have sprung up to defend and elaborate those projects.'29 The group includes the usual suspects: human rights activists, lawyers, international civil servants, NGO operators and assorted do-gooders, and extends to politicians, military strategists, ordinary soldiers and all those whose task is to spread the principles of the new world order, if necessary by force. Whatever the ideology, humanitarianism has become a job opportunity. Ignatieff concludes that the 'internationals' 'run everything' in Kosovo. 'Pristina's streets are clogged with the tell-tale white Land Cruisers of the international administrators, and all the fashionable, hillside villas have been snapped up by the Western aid agencies. The earnest aid workers, with their laptops, modems, sneakers and T-shirts, all preach the mantra of "building local capacity", while the only discernible capacity being created is the scores of young people who serve as drivers, translators and fixers for the international community.'30 It looks as if the most discernible effect of 'nation-building' is the creation of a body of colonial administrators. 'Kabul . . . is one of the few places where a bright spark just out of college can end up in a job that comes with a servant and a driver.'31 It is not surprising; most of the states following the Americans in their wars and occupations are former imperial powers, well-versed in the job of running colonial outposts. The earlier 'naïve' humanitarians of the Vietnam war judged the actions of power from an external perspective, such as religion or natural or positive human rights law, and claimed to speak 'truth or virtue to power'.
Their descendants have realised that if they want to restrain power they must adopt its aims and mindset, become full participants in power's games and try to influence it from the inside. In more prosaic terms, humanitarians have understood that responsibility involves engagement with power and have abandoned the infantile appeal of pacifism, 'the radicalism of people who do not expect to exercise power or use force, ever, and who are not prepared to make the judgments that this exercise and use require.'32 They have become part of the leading elite, the priests and missionaries of the new world order. For the pragmatist ideologist, the task now is to consolidate and generalise this project of osmosis between humanitarians, the military and politicians and turn it into a world ideology. 'We must promote the vocabulary among civilian populations, or we must strengthen the legitimacy of professional humanitarians as the voice of a universal ethics . . . harmonic convergence between the military and humanitarian sensibility will only be achieved once the humanitarian vocabulary becomes a dominant global ideology of legitimacy.'33 This is an amazing claim. The purpose of natural law, human rights and humanitarianism has been, from their inception, to resist public and private domination and oppression. When Kennedy deplores radical humanitarians who speak 'truth to power' from a position of religious conviction, natural right or positive law, he acknowledges some of the main formalisations of dissent and opposition. For those who have nothing else to fall back upon, human rights become a kind of imaginary or exceptional law.34 Human rights work in the gap between ideal nature and law, or between real people and universal abstractions. The perspective of the future does not belong to governments, accountants and lawyers. It certainly does not belong to international organisations, diplomats and professional humanitarians. Governments were the enemy against whom human rights were invented. The 'universal ethics' of professional humanitarians, on the other hand, is a misnomer. Its universalism turns the priorities of the American elite into a global principle; its ethics upgrades the deontology of a small coterie into a moral code. To claim that human rights are today a main weapon for generating governmental legitimacy is to turn the poacher into the gamekeeper. At this point, human rights lose their purpose and their role comes to an end. We must therefore defend the radical do-gooders, the marginal pacifists, the anti-war and anti-globalisation protesters and all those who, Bartleby-like, would prefer not to become scriveners for the elites and accountants of power. They represent the most important European moral and political legacy, while military humanitarians represent the abandonment of politics by the liberal nomenklatura for a few slivers of power. One could call this the postmodern trahison des clercs. Hilary Charlesworth, in a hilarious retort to Kennedy, doubts that many principled radicals are left in the humanitarian community anyway: 'The international human rights movement already largely operates in the pragmatic mode.'35 She may be right, in which case the principle of hope human rights feebly represent today will have been extinguished in the quest for government grants and a junior-partner role in military campaigns. Professionalism will have won by abolishing the raison d'être of humanitarianism.
Following Alex de Waal, we can call this enterprise and its officers 'Global Ethics Inc'.36 We should insist, however, against realists, pragmatists and the ideologues of power, that the energy necessary for the protection, horizontal proliferation and vertical expansion of human rights comes from below, from those whose lives have been blighted by oppression or exploitation and who have not been offered or have not accepted the blandishments and rewards of political apathy. Human rights professionals, whether radical or pragmatic, are at best ancillary to this task, which cannot be delegated. This question of delegation and substitution is crucial for the politics of humanitarianism within the Western world, to which we now turn.

The Stakes of Humanitarianism

'Thanks for coming to support the greatest thing in the history of the world,' Chris Martin, the lead singer of the pop band Coldplay, told the crowd at the Live8 concert in Hyde Park, London, in July 2005. 'We are not looking for charity, we are looking for justice' was how U2 lead singer and event co-organiser Bono expressed the purpose of the series of concerts organised to coincide with the meeting of the G8 leaders in Scotland. In repeated appeals to the leaders of the eight richest nations of the world, Live8 demanded that African debt should be written off and aid levels substantially increased. Human rights should be put at the centre of the agenda of the Western leaders. There is no doubt that the many hundreds of thousands who followed the eight concerts around the world agreed with these sentiments. Tears and sympathy for African suffering and pain dominated the acres of space dedicated to the concert in the British newspapers. The crowds had a great time listening to Madonna, Pink Floyd and Paul McCartney, participating in the 'biggest thing ever organised' and protesting against African poverty and disease. Justice 'was the simplest and most pervasive theme. . . Everyone is, suddenly, globally, politicised'.37 As a combination of hedonism and good conscience, Live8 will not be easily overtaken in size or hyperbole. This was partying as politics, drinking and dancing as moral calling. Public protest involves an element of publicity acknowledged in the law of public order. Marches, demonstrations, rallies, picketing and sit-ins have always involved some violence or at least inconvenience for protesters and the public at large. Marches and demonstrations take place in public; they also bring people together and create out of isolated monads a public concerned with issues that transcend limited self-interest. The classical agora and forum were re-enacted metaphorically in the public sphere of newspapers and debating societies of early capitalism and, physically, in the streets, squares and other public places of modernity. But publicity, sharing ideas or actions, marching together is scarcely the point of the politics of this type of humanitarianism. In the global politics of protest, inconvenience has been replaced by partying, publicity by TV campaigns, empathy by private donations. Indeed, to the extent that the main tactic of humanitarian campaigns is to have people donate money while watching celebrity-filled shows on TV, the public character has been lost. We participate in human rights struggles from our front room not as polites, public-minded citizens, but as idiotes, private persons, committed to personal interest.
No wonder the G8 leaders, the targets of Live8, stated, according to Chancellor Gordon Brown, that they would be happy to participate in the 'action' against them. Humanitarianism has turned into the ultimate political ideology, bringing together the well-being of the West with the hardships of the global South. But what does it mean for politics to become TV campaigns? What type of humanity does humanitarianism project? The idea of humanity that Band Aid, Live8 and Amnesty International letter-writing campaigns propose and promote dominates our imagination and our institutions and determines the way we see ourselves and others. In theory, humanity brings together and transcends regional characteristics such as nationality, citizenship, class, gender, race or sexuality. Michael Ignatieff is on sure ground when he claims that human rights embody the idea that 'our species is one'.38 We should be able to recognise the same human person, despite empirical differences, all over the world, in the City of London and the slums of Bombay, in the country houses of Berkshire and the town houses of Baghdad. This is the ideology of humanitarianism: the human has the same needs, desires and traits everywhere, and these (ought to) determine the rights we have. Rights follow our nature. As natural, they are evident and agreed by everyone; there is no person of good faith who does not accept their universality or political efficacy. They are the entitlements of common humanity; they belong to us on account of our membership of the human species rather than of narrower categories. But then doubts start creeping in. We would not need legal enforcement of these 'obvious' entitlements if they were that obvious. Their institutional proclamation and protection indicate that humanity is not one, that human nature is not common to all, that nature cannot protect its own. Live8 is part of the sad recognition that, despite the claims of humanism, humanity is split, that the 'human' breaks up into distinct parts. One part is the humanity that suffers, the human as victim; the other is the humanity that saves, the human as rescuer. Humanity's goodness depends on its suffering, but without goodness suffering would not be recognised. The two parts call each other into existence as the two sides of the same coin. You cannot have a rescuer without a victim and there is no victim unless a rescuer recognises him as such. But there is a second split. Humanity suffers because parts of it are evil, degenerate, cruel and inflict indescribable horrors upon the rest. There can be no redemption without sin, no gift without deprivation, no Band Aid without famine. This second separation is officially acknowledged in the important concept of 'crimes against humanity'. The Nuremberg trial, which first introduced this legal novelty, is seen as a symbolic moment in the creation of the human rights movement. Human rights emerged when humanity acknowledged that one of its parts commits despicable atrocities against another, while a third, the saviour and redeemer, uses law, reason and occasionally force to punish the perpetrators and remedy pain and harm. Humanity suffers as a result of evil and crime, or through the effects of avoidable human error or unavoidable bad luck. If humanity suffers because of its own evil and must be rescued, evil and its consequences, vulnerability, suffering and pain, are its universal characteristics. Religious traditions and political ideologies attribute suffering to evil.
For Christian, particularly Protestant, theology, suffering is a permanent existential characteristic, the unavoidable effect of original sin. Suffering and pain are the result of transgression, of lack or deprivation of goodness, but also the sinner's opportunity for salvation by imitating Christ's passion. Indeed, the word pain derives from the Latin poena, punishment. The human rights movement agrees. It aims to put cruelty first, to stop 'unmerited suffering and gross physical cruelty'.39 In the dialectic of good and evil, evil comes first; the good is defined negatively as steresis kakou, as the removal, remedy or absence of evil. Human rights and humanitarianism bring the different parts of humanity together; they try to suture a common human essence out of the deeply cut body. Let us examine briefly the three masks of the human: the suffering victim, the atrocious evildoer and the moral rescuer. First, man as victim. The victim is someone whose dignity and worth has been violated. Powerless, helpless and innocent, she has had her basic nature and needs denied. But there is more: victims are part of an indistinct mass or horde of despairing, dispirited people. They are faceless and nameless, the massacred Tutsis, the trafficked refugees, the gassed Kurds, the raped Bosnians. Victims are kept in camps, they are incarcerated in prisons, banished into exitless territories en masse. Losing humanity, becoming less than human; losing individuality, becoming part of a horde, crowd or mob; losing self-determination, becoming enslaved; these are the results of evil, otherwise known as human rights violations. Indeed, here we may have the best example of what Giorgio Agamben calls 'bare or sacred life'40 or Bernard Ogilvie the 'one-use human':41 biological life abandoned by the juridical and political order of the nation-state, valueless life that can be killed with impunity. The publicity campaigns with the 'imploring eyes' of dying kids and mourning mothers are 'the most telling contemporary cipher of the bare life that humanitarian organisations, in perfect symmetry with state power, need.'42 The target of our charity is an amorphous mass of people. It populates our TV screens, newspapers and NGO fund-raising campaigns. The victims are paraded exhausted, tortured, starving, but always nameless, a crowd, a mob that inhabits the exotic parts of the world. As a former president of Médecins Sans Frontières put it, 'he to whom humanitarian action is addressed is not defined by his skills or potential, but above all, by his deficiencies and disempowerment. It is his fundamental vulnerability and dependency, rather than his agency and ability to surmount difficulty, that is foregrounded by humanitarianism.'43 The victim is only one side of the Other. The reverse side represents the evil abroad in those scary parts of the world. This second half, the cause of the fall and the suffering, the Mr Jekyll or the wolfman, is absolute evil. Its names are legion: the African dictator, the Slav torturer, the Balkan rapist, the Moslem butcher, the corrupt bureaucrat, the Levantine conman, the monstrous sacrificer. The beast of Baghdad, the butcher of Belgrade, the warlord, the rogue and the bandit are the single cause and inescapable companion of suffering. As Jacques Derrida puts it, 'the beast is not simply an animal but the very incarnation of evil, of the satanic, the diabolical, the demonic — a beast of the Apocalypse.'44 The victims are victimised by their own, and to that extent their suffering is not undeserved.
Famine, malnutrition, disease and lack of medicines result from the intrinsic corruption of the evil Other, or are signs of divine punishment or of appropriate fate in the form of acts of God or force majeure. The Other of the West combines the suffering mass and the radical evil-doer, the subhuman and the inhuman rolled into one. In this moral universe, the claim that there is a single essence to humanity to be discovered in evil, suffering and its relief, for which debt relief stands as a metaphor, is foundational. Whoever is below the standard is not fully up to the status of human. Indeed, every human rights campaign or humanitarian intervention presupposes an element of contempt for the situation and the victims. Human rights are part of an attitude of the post-colonial world in which the 'misery' of Africa is the result of its failings and corruption, its traditional attitudes and lack of modernisation, its nepotism and inefficiency, in a word, of its sub-humanity. We can feel great pity for the victims of human rights abuses; but pity is tinged with a little contempt for their fickleness and passivity and with huge aversion towards the bestiality of their compatriots and tormentors. We do not like these others, but we love pitying them. They, the savages/victims, make us civilised. This brings us to the rescuer. The human rights campaigner, the western philanthropist and the humanitarian party-goer are there to save the victims. Participation in and contributions to the humanitarian movement may result in 'collateral benefit'. There is a kernel of nobility in joining letter-writing campaigns or giving money to 'good causes' to alleviate suffering. Such campaigns have given help to political prisoners and to victims of torture, civil war and natural catastrophe. But a strange paradox accompanies increased humanitarian activism. Our era has witnessed more violations of human rights than any previous, less 'enlightened' one. Ours is the epoch of massacre, genocide, ethnic cleansing, the age of the Holocaust. At no point in human history has there been a greater gap between the north and the south, between the poor and the rich in the developed world or between the 'seduced' and the excluded globally. The results of massive humanitarian campaigns are rather meagre. In 2006, an audit of G8 promises made to Live8 a year earlier found that rich countries were failing badly to meet the targets they themselves had set.45 No degree of progress allows us to ignore that never before, in absolute figures, have so many men, women and children been subjugated, starved or exterminated on earth. The triumph of humanitarianism is drowned in human disaster. The 'best' and the 'worst' come together, prompting and feeding off each other. But if we approach the rescue missions of humanitarianism as part of a wider project of intervention both in the South and in the North, some of the apparent contradictions start disappearing. Liberal theory understands rights as an expression and protection of individual desire, albeit indirectly. Amidst the proliferation of theorists of human rights, few have argued that human suffering is their common foundation or theme. One is Klaus Gunther, for whom all major European institutional innovations and protections, from the Magna Carta to the French Declaration of the Rights of Man, the various Bills of Rights across the continent and the European Convention on Human Rights, have been reactions to different types of atrocity.
European history is replete with wars, oppression and annihilation of others and, as a result, the history of human rights is written in blood. In Gunther's analysis, negative historical experiences and the development of the human rights movement are closely linked. 'If you want to know what is meant by "human dignity" or "equal concern and respect" for every human being, you can either look at various kinds of legal definitions, or you can think of the German Gestapo torturing a political opponent or the Holocaust of the European Jews.'46 For Gunther, Europeans share memories of injustice and fear, a resource that should be used to promote a human rights culture. We must listen to our past pain and wrongs; everyone who has a story to tell must be heard. Gunther concludes that 'the most important effect of human rights . . . is the recognition of every individual as an equal participant in the political process which leads to a decision on primary rules. . . One has the power and ability to criticise and amend the rules of justice.'47 Gunther offers a postmodern theoretical foundation for human rights that goes well beyond Rorty's pragmatism and meek attempts at 'sentimental education'. According to Rorty, this means educating people to listen to strangers and understand their ways of life. By bringing out similarities in our respective ways of life, the feeling that strangers are 'people like us' will be strengthened and the sense of moral community widened. The second strategy for spreading human rights and democracy is to narrate stories of pain, suffering and humiliation happening all over the world.48 This pedagogy of pity will put people 'in the shoes of those despised and oppressed', make them more empathetic and less prone to killing and torching others.49 The assumed premise of Rorty's argument is that 'our' culture, society and politics are the ideal others (should) aspire to achieve. The pragmatist's emphasis on efficiency and results means that a standard of civilisation must be set as the blueprint and aim. For Rorty, this is American liberal culture. In a postmodern repetition of the methods of early social anthropology, Rorty believes that we must understand the ways and travails of others in order to help them efficiently become like us. Gunther's variation is more honest. Sentimental education must emphasise our own suffering. Past European woes and humiliations should be used to raise public awareness. Because we suffered in the past and may do so again in the future, we should refrain from visiting similar woes on others and try to ameliorate their pain. La noblesse oblige in our post-aristocratic world has become la souffrance oblige. The liberal tradition therefore distinguishes between human rights and the moral obligation to rescue. Rescue is based on a feeling of superiority and the principle of substitution. I am duty-bound to help the suffering other because I am well-off, lucky, unaffected by the atrocities I read about in my newspapers and see on TV screens.50 But I could have been born in one of those hard places, or life may still reduce me to the victim's predicament. We should act morally towards suffering others because we could imagine being in their position. As Michael Ignatieff puts it, 'the ground we share may actually be . . .
not much more than the basic intuition that what is pain and humiliation for you is bound to be pain and humiliation for me.'51 Charity is part of a risk-aversion strategy, an insurance policy against bad luck or an offering to the gods for our great fortune. But as Richard Rorty has convincingly argued, in his deconstructive mood, neo-Kantian philosophy's obsessions with epistemology and metaphysics reduce the sense of solidarity and weaken the ability to listen to strangers and respond to their suffering.52 Gunther's theory is a variation of the morality of substitution. Our past suffering becomes the foundation of our moral action. It is because we Europeans have been there, because we have been beastly to each other and suffered as a result, that we should now promote human rights. The memory of 'collective trauma' should be recovered and put to good effect. Morality moves back to where the liberals place it: the self, the ego and its mishaps. Human rights have been constructed as defences of the self against the incursions of powerful others, initially the state, increasingly now other people. Gunther tries to make them more attuned to the pity the public is made to feel in humanitarian campaigns. But is the best way of doing this to try to link human rights with European atrocities against Europeans? Europeans suffered in the past at the hands of other Europeans as parts of European humanity. But our greatest atrocities then and now are committed against 'aliens' considered less than human. The treatment of the Jews in the Holocaust and of the Muslims in Bosnia are recent examples. Slaves, Indians, aboriginals and indigenous people, on the other hand, have been consistently placed in the non-human part of humanity. Some 10 million Congolese died in the early 20th century as a result of Belgian forced labour and mass murder. Millions died of avoidable famines in India under colonial rule. Up to one million Algerians died during their war of independence. These were crimes by humanity but not against humanity. We shed tears for these out of a sense of superiority and charity rather than out of shared history, community or humanity. If we have a shared history, humanitarianism, in its celebration of our goodness, erases it. European campaigns of extermination, slavery, colonial subjugation, capitalist exploitation and imperial domination are forgotten or glorified, as shown in recent revisionist celebrations of the British Empire. These atrocities are what psychoanalysis calls the real or traumatic kernel of the West, the cause and effect of economic affluence and personal enjoyment. The horrors visited by the West on its 'others' are conveniently forgotten and displaced. Horrible, atrocious acts are committed only by the evil inhuman other. Indeed, the human rights movement came to life late, after the Second World War. Humanity started committing crimes against itself in the 1930s, when the Germans, this philosophical embodiment of humanity, acted atrociously against their own. The German crimes were appropriately called crimes against humanity because the West is endowed with full humanity and can become the proper victim of atrocity. Humanity offends against herself in the West and against sub-humans in the South. During the recent wars in Bosnia and Kosovo, commentators were shocked that atrocities could take place right in the 'heart of Europe'. We Europeans had supposedly learned the lesson after our rare, exceptional misdeeds, and it was inconceivable that we could become criminals again.
To be sure, the Balkans are approached as peripheral parts of the civilised world, placed in Europe by accident of geography rather than achievement of history or culture. The Balkan wars confirmed again the principle that we, the Europeans, are the chosen people, the essence of humanity in its three facets. Gunther's proposal cannot be implemented for precisely the reasons that have turned the pain of others into a powerful ideology and suffering into the main characteristic of humanity. The premise and appeal of humanitarianism are distance and alienation. We must participate in campaigns and fine-tune our morality because we, western liberals, have not suffered in the past, because we cannot share the torments of those unfortunate and exotic parts of the world now. Because we have always been human, we must now extend our generosity to those less than human. This is confirmed by Gunther's understanding of the principal achievement of human rights culture and the main recipe against their violation, namely participation in democratic procedures and legislation. It is not very different from the claim that the aim of our recent wars was to spread formal democracy and neoliberal capitalism to backward parts of the world. Both are inescapably part of the egocentric and ethnocentric approach to the suffering of others. Gunther's claim that democratic participation is the greatest achievement of human rights is a rather extreme and sad case of Eurocentrism, refuted by the growing political apathy around the world. Indeed, the historical trajectories of civil liberties, human rights and democracy diverged wildly from the start and often came into conflict.53 Furthermore, as Michael Mann has recently shown, the idea that democracies do not commit genocide is utterly wrong.54 Giving money to alleviate the suffering of others is both an insurance policy against the risks of life and the ultimate moral duty. Live8 interspersed images of starving kids and of AIDS sufferers at the end of their lives with those of beautiful, healthy superstars and fans and the wonderful costumes of dancers and accompanying choirs. On the side of the victims, the haggard animal on TV screens; on the other side, good conscience and the imperative to intervene. It is a short step from that to define violations of human rights as the supreme form of suffering and to portray the human rights movement as the redemptive practice of our age. A simple equation has taken hold of our political imagination. Human rights are entitlements to be free from evil. As the preamble of the Universal Declaration of Human Rights puts it, it is disregard and contempt for human rights that have led to barbarous acts. Pity and a sense of superiority unite the humanitarians. The massive pity engineered by humanitarian campaigns supports western superiority, increases distantiation from its targets and breeds disdain. Pity is addressed by a superior to an inferior; it is the patronising emotion of looking down at the person pitied. The human rights campaigner as rescuer can become deeply egotistical: he is the one who keeps the world together and, as a bonus, he receives full recognition for his goodness from others near and afar. Individual pity is not sympathy. Syn connotes being with, being together with others; pathos means feeling, emotion and, in another sense, suffering. The Greek verb sym-pascho and noun sym-patheia mean to suffer with others, to feel with and for others, to be affected by the same thing and to link emotions in public.
For the human rights world, however, feelings towards the suffering are the result of the absence of togetherness. Because we do not suffer, because there is no possible link between us and the victims, our good luck turns into a modicum of guilt, shame and a few pound, dollar . . . coins. If political and historical events can be measured according to the amount of pain they produce, if indeed this is the only calculus through which we can judge history, humanity is one after all: it is united through inevitable suffering and the pity it generates. Let me open here a historical parenthesis. Contemporary humanitarianism repeats and exaggerates many aspects of the humanitarian campaigns and reforms of the 18th and 19th centuries. Humanitarian reformers of that period detailed the pain and suffering endured by people in slavery, or caught up in the criminal justice system, in crammed and unsafe workplaces, in cruel and impoverished domestic conditions, etc. The brutalities of life in England were depicted through explicit imagery as well as graphic novels and journalism. This strategy, part of the epoch's concern to raise sensibility and launch the bourgeois civilising process, aimed at turning public opinion against brutal practices and improving the life of the poor. Images of suffering of the distant poor and oppressed form the core strategy of contemporary humanitarian campaigns too, alongside public relations, advertising, film and video. The young man before the Tiananmen Square tanks, the Amnesty International candle surrounded by barbed wire, the burned girl running away from the fire-bombed Vietnamese village have iconic status and represent human rights much more than a thousand speeches, learned articles and books. As a sympathetic commentator puts it, human rights politics is 'a politics of images spun from one side of the globe to the other, typically with little local history or context.'55 The search for images of victims, especially children, and for a 'good story' dominated the media coverage of the Yugoslav wars. According to one relief agency worker, 'almost every journalist who came to see her in Kosovo asked one thing: could she give them a rape victim to interview.'56 Yet while our culture is saturated with imagery and theories of visuality, very little has been written about the visual politics of humanitarianism. In contrast, the visual nature of sympathy and its side-effects were fully discussed in the 18th and 19th centuries. Following the tenets of the Scottish moral enlightenment, Adam Smith argued that ethics is a matter of sentiments aroused by sympathy. Sympathy, in turn, is the result of seeing the suffering of others. 'By the imagination we place ourselves in his situation, we conceive ourselves enduring all the same torments, we enter as it were into his body, and become in some measure the same person with him.'57 But Smith was also prepared to acknowledge the limitations of sympathy. An earthquake destroying China, he admitted, would not match for real disturbance the 'most frivolous disaster that could befall [a man of humanity in Europe]'.
Losing a little finger is more important than the 'ruin of a hundred millions of his brethren.'58 Edmund Burke agreed: immediately felt pain or danger is terrible, but 'at certain distances, and with certain modifications, they may be, and they are delightful.'59 The proliferating attempts at arousing humanitarian sensibility, evident in sentimental, sensationalist and gothic fiction and journalism, were subjected to relentless criticism. John Keats and William Hazlitt accused sentimental poetry of exploring 'not the feelings of the imagined sufferer but the feelings of the spectator watching that sufferer' and of being 'geared to demonstrating the spectator's/reader's own exquisite sensibility.'60 The troublesome aspects of humanitarianism were fully discussed in the earlier period. The critics understood that the practice of arousing sympathy through the display of the suffering of others in scenes of execution, torture, public punishment and humiliation could go terribly wrong. It could blunt the moral fibre of the viewer and turn him into a savage by aligning him with the cruelty of the perpetrator rather than the pain of the victim. The humanitarian '"civilised" virtue requires a shocked spectatorial sympathy in response to pain scenarios both real and wilfully imagined . . . the cult of sensibility had proclaimed pain unacceptable but simultaneously discovered it to be alluring, "delicious"'.61 Images and tales of suffering have great voyeuristic and pornographic potential. Suffering was often eroticised in humanitarian campaigns. Overt sexual references to the 'sexual coercion and rape of slave women, the rape of war victims, and to the genital mutilation and torture of both male and female slaves' were accompanied, more commonly, by the indirect humanitarian eroticisation of pain through 'the illicit excitement generated by the infliction of pain.'62 Sigmund Freud reported that Uncle Tom's Cabin, a book celebrated by Richard Rorty for spreading sympathy for slaves amongst white Americans in the 19th century, was mentioned by many of his patients as the original stimulus of the common fantasy that a child is being beaten.63 The historical record causes a nauseous feeling of déjà vu. The examples of extreme suffering of the earlier period are very close to our own imagery of cruelty. If anything, the images of pain and suffering are more horrible today. They have permeated all aspects of contemporary culture and define music, life-style, fashion, the media and many areas of art, alongside politics and humanitarian campaigns. But their voyeuristic or pornographic side was not discussed until the Abu Ghraib torture photographs emerged, and even then in an embarrassed and apologetic way that did not address the politics of humanitarian imagery. It may be that we are more aware of human cruelty, that we have become more humanitarian than our ancestors. But we appear to know less about the causes of cruelty and atrocity and to understand very little about the way that images of suffering work on our emotional and psychological life.

The Politics of Humanitarianism

The effects of humanitarianism on politics are profound. If evil and suffering lie at the foundation of humanity, if an inescapable original sin determines its fate, ethics becomes a barrier against beastliness and the main aim of politics is to restrain evil and relieve suffering. In this ethics, the idea of freedom is primarily negative: it is a defence against the various malevolent interventions of public power.
Politics adopts ethical posturing as a result. Its judgments become moral diagnoses about the evil of others; its action takes the form of rescuing people. As Wendy Brown puts it, human rights activism becomes an 'antipolitics — a pure defence of the innocent and the powerless against power, a pure defence of the individual against immense and potentially cruel or despotic machineries…'64 At the liberal end of the political spectrum, Michael Ignatieff agrees with the conclusion but not the analysis: 'Human rights activism likes to portray itself as an anti-politics, in defence of universal moral claims designed to delegitimise "political" (i.e., ideological or sectarian) justifications for the abuse of human beings. In practice, impartiality and neutrality are just as impossible as universal and equal concern for everyone's human rights.'65 The specific political situation that led to the abuses, the colonial history and the conflicts that matured into civil war, the economics that allowed the famine to develop, all these are irrelevant from the perspective of the moralist. For the Kantian deontologist, the moral attitude should not be contaminated by the specifics of the situation. The moral action is a disinterested response to the demands of the law; moral duty is addressed first and foremost towards the actor and his rational commitment to morality and only secondarily towards the other, the target of its action. But as Alasdair MacIntyre objected, acting morally is not acting, as Kant thought, 'against inclination; it is to act from inclination formed by the cultivation of virtues. Moral education is an "education sentimentale"', which, however, unlike Rorty's, respects local communities and discovers in them the sources of virtue.66 Human rights moralism, on the other hand, has it both ways. Following Kantian absolutism, it claims that acts are right or wrong: no grey zones exist and there are yes and no answers to every ethical dilemma. Paying too much attention to past events, to local politics, and to cultural sensitivities risks conceding principle to calculation and compromise. At the same time, pragmatic humanitarians follow the most extreme form of utilitarian calculation. Humanitarianism's inescapable contradiction allows its proponents to attack perceived evil in the most uncompromising moral terms while doing deals with the Devil. Secondly, since our campaigns are moral in essence, doubting the rightness or appropriateness of the solution cannot be done in good faith. People may be mobilised in a common cause, but the solutions to the problem are given and unchallenged. 'Eight men in a room can change the world' was the main slogan of Live8. The millions of people participating in the event around the world were presented as a lobby group addressing the eight heads of state. There was no mention, however, of a simple and undoubted fact: these states are the main cause, through colonialism, imperialism and exported neo-liberal capitalism, of the huge disparities between the North and the South. A similar logic applies to human rights. We in the West have developed rights as a response to the unavoidable failures of human nature, its propensity to sin. Because we have understood the centrality of suffering and sin and have built defences against them, we have the obligation to send those defences to the less fortunate. Because we produce abundantly and have so many rights in the West, we must find markets to export them.
In the same way that we give our second-hand clothes to Oxfam to be sent to Africa, we also send human rights and democracy. If, however, the less civilised do not accept our charity, we will have to impose it on them with fighter-bombers and tanks. The global humanitarian sees victims of misfortune everywhere. Undifferentiated pain and suffering have become the universal currency of the South and pity the global response of the North. Pity is misanthropic. It is the closest we get today to the Hegelian master and slave dialectic; the slave's recognition of the master in his position of mastery is not reciprocated; the relationship remains one-directional. The identity of both remains defective because it lacks the mutuality of full recognition. If subjectivity is the outcome of inter-subjectivity mediated by objectivity,67 the gift is the object that guarantees the (superiority of the) identity of the giver by turning the recipient, who is unable to reciprocate, into the passive support of the Westerner's self. In this sense, donations have a malevolent aspect: they bestow identity on some at the expense of others who, by receiving material goods without consideration, become the effective givers of recognition without return. Individual empathy in the face of suffering may be a noble characteristic. The good Samaritan, the person who gives himself to the other in a non-calculating act, is a great moral example. In extreme situations, helping the other becomes an act of heroism and even of martyrdom. The good Samaritan was a rich government functionary. His role is now performed by the humanitarian militarist and the ethical capitalist. There are many business opportunities in suffering and increased profit margins in promoting human rights. Advice about 'ethical' investment options and 'ethical consumerism' is routinely published in most serious newspapers in Britain and the United States. It usually includes references to the human rights record of the country or company involved. A few examples indicate the close relationship between the 'best' and the 'worst'. George Soros, the financial speculator and venture capitalist, was almost single-handedly responsible for the collapse of the British currency in 1992. This led to thousands of small businesses going bankrupt and people losing their homes. The Soros foundation, largely funded by the gains of such parasitical if not piratical activities, nevertheless promotes democracy and human rights in Eastern Europe and the Balkans. Bill Gates, having monopolised the computing industry through Microsoft, is generously giving millions away to good causes around the world. The oil giant Shell does not have a reputation for human rights campaigns. Indeed, in 1995, Shell was involved in the execution of nine Ogoni activists, including the renowned author Ken Saro-Wiwa, who fought for the land rights of their people, rights brutally violated by the Nigerian government with the connivance of Shell. However, after protests against its activities, Shell now proclaims its commitment to human rights. Its website has an introduction to Nigerian literature, in which Saro-Wiwa is presented as a martyr. Similarly, the Chinese government, never slow in realising a business opportunity, allows a few high-profile dissidents to emigrate to the West as a sop to human rights campaigns while continuing its repression.
In this way, it sets itself up 'as a business enterprise that deals in politicised human persons as precious commodities.'68 As Joseph Slaughter puts it, the human rights movement has now become a large corporation and should be renamed 'Human Rights Inc.'69 The great modern philosophies of history promised progress through reason. Napoleon, the first modern emperor, was the 'spirit [that is, freedom] on horseback' for Hegel. The communists preached 'soviets and electricity'; humanity would be united in future equality through the marvels of technology and common ownership of the means of production. The Nazis tried to purify humanity by eliminating the Jews and the gypsies as inferior races, the Stalinists by purging those who disagreed with or obstructed the ideology of violently accelerating the historical process. All great ideologies of the last century ended in violence, atrocities and disaster. These great rationalisms justified their atrocities against race, class, ideology or ethnicity with the argument that a few million dead were the necessary price to pay for the future unity of humanity. Ideologies are systems of thought, ways of understanding and explaining the world drawn from a particular perspective, that of class, nation or religion. Today we have abandoned both ideology and the attempt to understand the world. Post-communist humanitarianism, scared by the atrocities of 20th-century ideology, prefers a suffering humanity and replaces the grand narratives of history with the misfortune of the species. This accords fully with the neo-liberal claim that history has ended, that all history-moving political conflict has been resolved and that ideology no longer has any value. The young people who join NGOs would have joined left-wing groups and campaigns a few years earlier. The quest for justice, the great motivating force of politics, has become anti-political. Care for the victims, defence of rights, promotion of free choices is the indisputable ideology of our post-political world. Humanity has been united not through the plans of revolutionaries, but through universal pain, pity and the market. Political events are not analysed concretely or examined for their historical roots; they are judged by the amount of suffering they generate. It is a comforting vision. We are guided exclusively by moral feelings. United in our pity, we call for soothing interventions and care little for the pre- or post-intervention situation as long as the interventions reduce the amount of pain. As a result, the complexity of history, the thick political context and the plurality of possible responses to each new 'humanitarian tragedy' are lost. Ideologies sacrificed individuals for the future of humanity; for humanitarians, individuals count only as ciphers for suffering humanity. The uniqueness of every person and situation is replaced by a grey, monolithic humanity, the very opposite of the infinite diversity of human experience. According to Alain Finkielkraut, 'the humanitarian generation does not like men — they are too disconcerting — but enjoys taking care of them. Free men scare it. Eager to express tenderness fully while making sure that men do not get away, it prefers handicapped people.'70 Moreover, as the value of pity and of the resulting intervention is determined in a virtual stock exchange of suffering, the 'price' of calamities is endlessly pushed upwards.
The Holocaust has become the universal standard of comparison, and the evil of each new real or imagined atrocity, each Rwanda, Bosnia or Kosovo, is measured against it. As Paul Ricoeur put it, 'the victims of Auschwitz are the representatives, par excellence, in our memory of all history's victims. Victimisation is the other side of history that no trick of reason can ever justify.'71 Pity has replaced politics, morality reason, suffering progress. The universal exchange of suffering and market capitalism have finally become global currency. Religion is inherently a discourse of truth. It must proclaim the superiority of its doctrines. Universal morality follows the same route. It is impossible to claim the universality of a moral code or principle and accept that others may legitimately disagree with it. If there are many views but one right answer, it is incumbent upon the person, the state or the alliance that has it to pass it on and eventually impose it on others. Morality, like religion, arranges people in a hierarchy of superiority. The 'globalisation of human rights fits a historical pattern in which all high morality comes from the west as a civilising agent against lower forms of civilisation in the rest of the world.'72 Despite differences in content, colonialism and the human rights movement form a continuum, episodes in the same drama, which started with the great discoveries of the new world and is now carried out in the streets of Iraq: bringing civilisation to the barbarians. The claim to spread Reason and Christianity gave the western empires their sense of superiority and their universalising impetus. The urge is still there; the ideas have been redefined, but the belief in the universality of our worldview remains as strong as that of the colonialists. Human rights 'are secularising the Last Judgment', admits Ulrich Beck.73 There is little difference between imposing reason and good governance, on the one hand, and proselytising for Christianity and human rights, on the other. They are both part of the cultural package of the West, aggressive and redemptive at the same time. As Immanuel Wallerstein put it, 'the intervenors, when challenged, always resort to a moral justification — natural law and Christianity in the sixteenth century, the civilising mission in the nineteenth century, and human rights and democracy in the late twentieth and twenty-first centuries.'74 The westerner used to carry the white man's burden, the obligation to spread civilisation, reason, religion and law to the barbaric part of the world. If the colonial prototypes were the missionary and the colonial administrator, the post-colonial prototypes are the human rights campaigner and the NGO operative.75 Humanity has replaced civilisation. 'The humanitarian empire is the new face of an old figure', one of its supporters admits. 'It is held together by common elements of rhetoric and self-belief: the idea, if not the practice, of democracy; the idea, if not the practice, of human rights; the idea, if not the practice, of equality before the law.'76 The postmodern philanthropist, on the other hand, does not need to go to far-flung places to build clinics and missions. Globalisation has ensured that he can do that from his front room, watching TV images of desolation and atrocity and paying with his credit card.
As Upendra Baxi puts it, 'human rights movements organise themselves in the image of markets', turning 'human suffering and human rights' into commodities.77 But despite the structural differences between victim and rescuer, the vision of politics projected in human rights campaigns is common to both. The donor is as much a passive recipient of messages and solutions as the victim and aid-recipient. His contribution is restricted to accepting the alternatives offered by governments and the media. If the victim is the witless plaything of powers beyond his control, the donor equally accepts that this part of the world is beyond redemption and philanthropy is a transient palliative. Unlike the missionary, the humanitarian does not need to believe in any particular religion or ideology, except the global ideology that people suffer and that we have an obligation to relieve their woes. Pain and suffering have replaced ideology, and moral sentiments have replaced politics, as Richard Rorty advised. But this type of humanitarian activism ends as an anti-politics, as the defence of 'innocents' without any understanding of the operations of power and without the slightest interest in the collective action that would change the causes of poverty, disease or war.

The 'Other' of Humanitarianism

The massive character of humanitarian campaigns, despite their relatively meagre returns, indicates that the stakes go beyond the immediate action. On the surface, the characteristics of the victims stand in stark contrast to those of their saviours. By joining the humanitarian drive, we create our own selves. Standing against the faceless mass, the saviour is individualised. Standing against the evil, the donor becomes virtuous. Standing against inhumanity, the campaigner is elevated to full humanity. And as human rights do not lend themselves easily to community building and political collaboration, the main sentiment connecting donors and letter-writers is their relief that they do not find themselves in the position of the recipients of their generosity. Human rights campaigns construct the post-political western subjectivity: they promise the development of a non-traumatised self (and society) supported by our reflection in our suffering mirror-images and by the displacement of the evil in our midst onto their barbaric inhumanity. Using psychoanalytical terms, we can distinguish three types of otherness that support our selfhood and identity: the imaginary, the symbolic and the real. When defined as victim, as the extreme example of universal suffering, the Other is seen as an inferior I, someone who aspires (or should aspire) to reach the same level of civilisation or governance we have. Their inferiority turns them into our imaginary Other in reverse, our narcissistic mirror-image and potential double. These unfortunates are the infants of humanity, ourselves in a state of nascency. In their dark skins and incomprehensible languages, in their colourful and 'lazy' lives, in their suffering and perseverance, we see the beautiful people we are. They must be helped to grow up, to develop and become like us. Because the victim is our likeness in reverse, we know his interests and impose them 'for his own good'. The cures we offer to this imaginary other follow our own desires and recipes. The humanitarian movement is full of these priority cures: liberalisation of trade and the opening of local markets are more important than guaranteeing minimum standards of living; democracy is more important than survival.
Lack of voting rights in one-party states, censorship of the press or lack of judicial guarantees in China or Zimbabwe are the prime examples of beastliness; death from hunger or debilitating disease, high infant mortality or low life expectancy are not equally important. In the 1980s, the European Community built wine lakes and butter mountains and preferred to stock uselessly and even destroy the produce to avoid flooding the marketplace and driving prices down. Similarly today democracy and good governance, our greatest exports must be sold at the right price: they must follow our rules and should not be used against our interests. As an American official put it complaining about Venezuelan policies challenging American hegemony and redistributing the oil wealth of the country, ‘the government’s actions and frequent statements contribute to regional instability . . . despite being democratically elected, the government of President Hugo Chavez has systematically undermined democratic institutions.’78 The second type of otherness is symbolic. We enter the world through our introduction to the symbolic order, as speaking beings subjected to the law.79 The others, the unfortunate victims of dictators and tsunamis, have not learned as yet to speak (our) language and accept (our) laws, they are non-proper speakers or infants. Consumption of western goods and civil and political rights are signs of progress. If the Chinese have Big Macs and Hollywood movies, democracy and freedom will eventually follow. Learning the importance of consumerism and human rights may take some time as all education and socialisation does. But it takes precedence over economic re-distribution and cultural recognition. Our legal culture promotes equality and dignity by turning concrete people to abstract persons, bearers of formal rights. According to Zen Bankowski, ‘it is as legal persons, the abstract bearers of rights and duties under the law, that we treat concrete people equally. Thus the real human person becomes an abstraction — a point at which is located a bundle of rights and duties. Other concrete facts about them are irrelevant to the law. . . You do not help a person but give them their rights.’80 This is the West’s considered answer: give these unfortunates human rights and second-hand clothes and they will, in time, attain full humanity. Finally, we have the evil inhuman, the irrational, cruel, brutal, disgusting Other. This is the other of the unconscious. As Slavoj Zizek puts it, ‘there is a kind of passive exposure to an overwhelming Otherness, which is the very basis of being human . . . [the inhuman] is marked by a terrifying excess which, although it negates what we understand as ‘humanity’ is inherent to being human.’81 We have called this abysmal other that lurks in the psyche and unsettles the ego various names: God or Satan, barbarian or foreigner, in psychoanalysis death drive or the Real. Individually and socially we are hostages to this irreducible untameable otherness. Becoming human is possible only against this impenetrable inhuman background. Split into two, according to a simple moral calculus, this Other has both a tormenting and a tormented part, both radical evil and radical passivity. He represents our narcissistic self in its infancy (civilisation as potentia, possibility or risk), civilisation in its cradle; but also what is most frightening and horrific in us, the death drive, the evil persona that lurks in our midst. 
We present the Other as radically different, precisely because he is what we both love and hate about ourselves, the childhood and the beast of humanity. The racial connotations of this hierarchy are not far from the surface. As Makau Mutua has argued, ‘Savages and victims are generally non-white and non-Western, while the saviours are white. This old truism has found new life in the metaphor of human rights.’82 A similar residue, a ‘nonlinked thing’83 beyond control and constitutive faultline haunts community and its law. It is analogous to an ‘unconscious affect’, encountered in the ‘sharp and vague feeling that the civilians are not civilised and that something is ill-disposed towards civility’ that ‘betrays the recurrence of the shameful sickness within what passes for health and betrays the “presence” of the unmanageable’.84 The original separation from other people and societies, the break that lies at the foundation of the modern nation-state cannot be fully represented or managed but keeps coming back as social sickness and personal malady. The unnameable other returns in xenophobia and racism, in hatred and discrimination and remains intractable to politics. Politics becomes a ‘politics of forgetting’, a forgetting of past injustices and current symptoms, a considered strategy which tries to ban what questions the legitimacy of institutions by turning the threatening imponderable powers into memory and myth or into celebration of fictitious unity. Psychoanalysis reminds us that lack and desire leads to symptoms, often violent and repetitive, the cause of which is forgotten because it never entered consciousness. One could claim that the perennial and perennially failing quest for justice is the result of these symptoms, a trace that signifies a past trauma or a future union, always deferred and different. Justice is the name of social desire for unity and wholeness and the series of symptoms created by the lack of this foundational and unattainable condition. Injustice, on the other hand, is the way through which people construct this sense of lack, incompleteness or disorder, the name given to the symptoms of social exclusion, domination or oppression.85 This approach could help us understand the psychic and social investment in human rights campaigns. The absolute and inhuman otherness that lurks in us leads to repression, cruelty and returns in symptoms. We call evil the effects of what we are unable to control in our psychic or social selves, the uncanny fears and symptoms the inhuman part of humanity causes. Absolute evil begins with the attempt to tame this untameable, to dismiss the inhuman in the human in order to master humanity completely.86 We try to silence the terror of the inhuman thing within us by turning it into a question of morality, into evil and obscenity and displacing it into the savage and suffering others. The victims we try to rescue are stand-ins for our own malady. We hope to become whole, to integrate our conscious, rational self and domesticate our unconscious, traumatic, affective part by projecting it into those others upon whom we export our pathetic and atrocious traits. To become fully human, to become whole, our inhuman part is wholly projected onto the other. The internal divide becomes a symmetrical external separation as humanity is neatly split into two, barbarian and kinsman, victim and rescuer, the (evil) inhuman and the (moral) human. The legal category of crimes against humanity expresses well this split. 
It is humanity that commits atrocities against itself, it is humanity that acts inhumanely, in denial of its dependency on the inhuman other that lurks within us. As Jean-Francois Lyotard put it, the Holocaust was the completion of the dream to exterminate those people (the Jews, the gypsies) who in their otherness bear witness to the absolute other. The rights of the other are about speaking new, the immemorial power of the other and our inability to announce it.87 The stakes of humanitarian campaigns are high. Positing the victim and/or savage other of humanitarianism we create humanity. The perpetrator/victim is a reminder and revenant from our disavowed past. He is the West’s imaginary double, someone who carries our own characteristics and fears albeit in a reversed impoverished sense. Once the moral universe revolves around the recognition of evil, every project to combine people in the name of the good is itself condemned as evil. Willing and pursuing the good inevitably turns into the nightmare of totalitarianism. This is the reason why the price of human rights politics is conservatism. The moralist conception both makes impossible and bars positive political visions and possibilities. Human rights ethics legitimises what the West already possesses; evil is what we do not possess or enjoy. But as Alain Badiou puts it, while the human is partly inhuman, she is also more than human. There is a ‘superhuman or immortal dimension in the human.’ We become human to the extent that we attest to a nature that, while fully mortal, is not expendable and does not conform to the rules of the game. The status of victim, on the other hand, ‘of suffering beast, of emaciated dying individual, reduces man to his animal substructure, to his pure and simple identity as dying . . . neither mortality nor cruelty can define the singularity of the human within the world of the living.’88 We should reverse our ethical approach: it is not suffering and evil which define the good as the defence humanity puts up against its bad part. It is our positive ability to do good, our welcoming of the potential to act and change the world that comes first and must denounce evil as the toleration or promotion of the existent, not the other way around. In this sense, human rights are not what protects from suffering and inhumanity. Radical humanitarianism aims to confront the existent with a transcendence found in history, to make the human, constantly told that suffering is humanity’s inescapable destiny, more than human. We may need to sidestep rights in favour of right. 1 Erudition and training in morals and the arts. 2 Louis Althusser, For Marx. Trans and Ed. Ben Brewster (London, Verso, 1969), 228: ‘If the essence of man is to be a universal attribute, it is essential that concrete subjects exist as absolute givens; this implies an empiricism of the subject. If these empirical individuals are to be men, it is essential that each carries in himself the whole human essence, if not in fact, at least in principle; this implies an idealism of the essence. So empiricism of the subject implies idealism of the essence and vice versa’ (emphasis in original). 5 At the other end of the liberal spectrum, Jurgen Habermas in The Future of Human Nature (Cambridge, Polity, 2003) detects the ‘X factor’ in the ‘integrity of human nature’. Integrity is the basis of rationality and, in turn, of the universal ethics of human species, upon which human rights are based. 
The universal morality of human rights and the principles of freedom and equality are part of the ‘species ethics’. Genetic intervention and custom-made designer babies are unacceptable because they violate this integrity and our moral self-understanding. Moral agency, Habermas argues, builds on a distinction between the ‘man-made’ and the ‘grown’ of human bodies given to us by nature. This distinction has allowed the development of autonomous morality and democracy, the highest achievements of universal rationality but is now threatened by genetic intervention. While cultures differ, moral self-recognition is the result of the ‘vision different cultures have of “man” who — in his anthropological universality — is everywhere the same’, at 39. Since for Habermas, this self-understanding is not culturally determined it must be an anthropological given. The liberal conceit is evident. Western moral humanism, the most local of traditions, is declared a universal anthropological category. Fukuyama’s ‘factor X’, by avoiding to give content to the anthropological constant looks more credible than the ‘species ethics’ of Habermas. 7 For a use of the psychoanalytic concept of ‘overdetermination’ in political theory, see Ernesto Laclau and Chantal Mouffe, Hegemony and Socialist Strategy: Towards a Radical Democratic Politics (Winston Moore and Paul Cammack trans.) (London, Verso 1985), 97-105. 8 Peter Singer, ‘Great Apes Deserve Life, Liberty and the Prohibition of Torture’, The Guardian, May 27, 2006, 32. 9 Compare R. v. Ministry of Defence, ex parte Smith 1 All ER 257 CA with Smith v. Grady v. UK, ECHR Application Number 33985 and 33986/96, Judgment of 27 September 1999. The English courts found that discharge from the army was not unreasonable but the European Court of Human Rights ruled that it amounted to a violation of the right to privacy in Article 8 of the Convention. 10 The seminal text is Jaques Lacan, ‘The Mirror Stage As Formative of the Function of the I As Revealed in Psychoanalytic Experience’ in Jaques Lacan, Ecrits: A Selection (Alan Sheridan trans.) (London, Routledge, 2001). 11 See chapter 2 of my Human Rights and Empire: The Political Philosophy of Cosmopolitanism (Routledge early 2007). 13 ibid. 327-9. 14 ibid. 277. 16 ibid. 284. In a bizarre story that exemplifies how Western governments exploit the work of NGOs, the Russians exposed, in 2006, two spies working for British intelligence who used fake rocks to conceal receivers for gathering and passing on secret data. The spies contacts and aid recipients were various human rights Russian NGOs. 18 ibid. 59. Ignatieff refers to Kosovo but his statement is even more applicable to Afghanistan or Iraq. 21 Kennedy op. cit., 271 and chapter 8 passim. 24 Two years later, Collins despaired. No weapons of mass destruction were found, the occupation has acted as the ‘best recruiting agent for Al-Qaeda ever’ and ‘if freedom and a chance to live a dignified, stable life free form terror was the motive, then I think of more than 170 families last week would have settled for what they had under Saddam.’ (The Observer, 18 September 2005, 17). 25 Kennedy op. cit., 287, 289. 26 ibid. 294, 296. 29 Kennedy op. cit., 236-7. 30 Ignatieff, Empire Lite, 73-4. 31 ibid., 94. 32 Walzer, op. cit., 14. 33 Kennedy op. cit., 277. 38 Michael Ignatieff, Human Rights as Politics and Idolatry, op. cit., 3. 39 ibid. 173. 40 See chapters 4 and 5. 41 Bernard Ogilvie quoted by Etienne Balibar, Politics and Truth (Athens, Nissos, 1999), 43. 
43 Rony Brauman, ‘Contradictions of Humanitarianism’ 7 Alphabet City 140 (2000). 45 The Guardian, July 3, 2006, 13. 46 Klaus Gunther, ‘The Legacies of Injustice and Fear: a European Approach to Human Rights and Their Effects on Political Culture’ in Philip Alston ed., The EU and Human Rights (Oxford, Oxford University Press, 1999), 126. 47 ibid. 132. 48 Richard Rorty, ‘Human Rights, Rationality and Sentimentality’ in Stephen Shute and Susan Hurley eds, On Human Rights (New York, Basic Books, 1993), 117; Rorty, Contingency, Irony and Solidarity (Cambridge, Cambridge University Press, 1989), xvi. 49 ibid. 126-7. 50 This is a position Emanuel Levinas attacked. Levinas insisted that there can be no reciprocity or substitution between the person making the ethical demand and its addressee. The encounter with the other is painful, disturbing and traumatic. In Levinasian ethics, the ego is caught by the other, it becomes literally hostage to the other’s request. The other’s demand torments and decentres me but only I can respond. This has nothing to do with the pity philanthropic campaigns generate nor with the moral superiority the charitable donator receives. See Douzinas, End, chapter 13. 51 Ignatieff, Politics, 95. 54 Michael Mann, The Dark Side of Democracy: Explaining Ethnic Cleansing (Cambridge, Cambridge University Press, 2005). 55 Kenneth Cmiel, ‘The Emergence of Human Rights Politics in the United States’ 86/3 Journal of American History (1999), 1233. 58 ibid. 157. 59 Edmund Burke, A Philosophical Enquiry into the Origin of Our Ideas of the Sublime and Beautiful (J. T. Boulton ed.) (London, Routledge and Kegan Paul, 1958), 14. 60 Karen Halttunen, ‘Humanitarianism and the Pornography of Pain in Anglo-American Culture’ 100/2 American Historical Review 303 (1995), 308. 61 ibid. 331, 332. 62 ibid. 325. 63 Sigmund Freud, ‘A Child Is Being Beaten’ in Ernest Jones ed. Collected Papers (London, ) Vol. 2, 173. 65 Ignatieff op. cit., 9. 66 Alasdaid McIntyre, After Virtue (London, Diuckworth, 1981), 140. 67 Douzinas, op. cit., chapter 10. 68 Rey Chow, The Protestant Ethnic and the Spirit of Capitalism (New York, Columbia University Press, 2002), 21. 72 op. cit., 210. 75 David Rieff, A Bed for the Night (London, Vintage, 2002); Barbara Harlow, ‘From the Civilising Mission to Humanitarian Intervention’ in Peter Pfeiffer ed., Text and Nation (New York, Camden House, 1996); Alex de Waal, Famine Crimes. Poltics and the Disaster Relief Industry in Africa (Oxford, James Currey, 2002); for a hilarious portrayal of the pleasures and woes of peacekeepers, NGO officers and other ‘internationals’ see Kenneth Cain, Heidi Postlewait and Andrew Thomson, Emergency Sex (London, Ebury Press, 2004). 76 Ignatieff, Empire Lite, op. cit., 17. 79 Douzinas op. cit., chapter 11. 82 Makau Mutua, ‘Savages, Victims Saviours’, 42/1 Harvard International Law Journal 201 (2001), 207. 84 ibid. at 44,43. 85 Costas Douzinas and Adam Gearey, Critical Jurisprudence. The Political Philosophy of Justice (Oxford, Hart, 2005). Costas Douzinas LLB (Athens) LLM PhD (London) is a Professor of Law and Dean of the Faculty of Arts and Humanities at Birbeck University. He was educated in Athens during the Colonels dictatorship where he joined the student resistance. He left Greece in 1974 and continued his studies in London and Strasbourg. This article was first published in Parrhesia 2 (2007) under a Creative Commons 3.0 license.
The pulmonary circulation usually contains around 10% of the total blood volume. Pulmonary congestion refers to the condition where the pulmonary circulation contains significantly more blood, up to 20% of the total blood volume. The condition can be active, where the body intentionally increases the blood volume in the pulmonary circulation due to an inflammatory process in the lungs, like pneumonia. As part of the inflammatory process the capillary permeability increases, which allows leukocytes to enter the interstitium. However, that's not what we usually mean when we talk about congestion. The more common situation is passive congestion, where the left heart has problems ejecting enough blood, or the mitral valve is stenotic. The right heart however still pumps happily, meaning that the pulmonary circulation becomes overfilled with blood and therefore congested. The pressure inside the pulmonary circulation increases, which increases the capillary hydrostatic pressure. This forces fluid out of the capillaries.

We can distinguish two types of pulmonary oedema: the haemodynamic type and the microvascular type. Haemodynamic oedema occurs when the capillary hydrostatic pressure increases, or when the intravascular oncotic pressure decreases. The former occurs in acute left ventricular failure (not chronic!) or mitral stenosis, while the latter occurs in any condition that causes hypoproteinaemia, like nephrotic syndrome or cirrhosis. Microvascular oedema occurs when there is damage to the microvascular epithelium due to an acute pulmonary infection, gastric content aspiration or radiation. It's important to note that only acute left ventricular failure will cause pulmonary oedema; chronic left ventricular failure only causes pulmonary congestion.

Clots that occlude pulmonary arteries are almost always thromboemboli and rarely thrombi. 95% of all pulmonary emboli originate from deep vein thrombi (DVT). Many of the factors that predispose to pulmonary embolism are therefore the same factors that predispose to DVT, such as:
- Prolonged bedrest
- Varicose veins
- Congestive heart failure
- Estrogen-containing birth control
- Advanced cancer

We distinguish four types of pulmonary embolism, based on the artery affected:

| Type | Occluded vessel | Typical outcome |
|---|---|---|
| Total embolization | Pulmonary trunk | Death |
| Subtotal embolization | Pulmonary artery | Most commonly death |
| Partial embolization | Segmental branch of pulmonary artery | Rarely causes death |

Large emboli usually get stuck in the bifurcation of the pulmonary artery as a saddle-shaped embolus. Pulmonary embolism has two consequences: the downstream pulmonary parenchyma becomes ischaemic, and the pressure inside the blocked artery increases. This sudden pressure increase can cause acute right-sided heart failure, so-called acute cor pulmonale. Hypoxaemia often occurs too. The ischaemic part of the lung produces less surfactant, so these alveoli become atelectatic. They still receive perfusion however (despite the embolism), so shunts will form. Also, the pressure increase causes the pressure in the right atrium to increase. Considering that 20-30% of all people have a patent foramen ovale, blood can be shunted from the right atrium to the left atrium, contributing to the hypoxaemia. If only a small artery is occluded, it is common that no symptoms are seen. This occurs in 60-80% of all emboli.
Because the lungs are oxygenated from the bronchial circulation as well as from alveolar air and the pulmonary circulation, it is rare for an embolism to cause a haemorrhagic infarct. It only occurs if there is already pulmonary congestion, because of the increased blood volume in the lung. In some edge cases emboli other than thromboemboli can occur in the lung, like:
- Fat emboli
- Tumour cell emboli
- Bone marrow emboli
- Air emboli

The pulmonary circulation usually has 1/8 of the pressure of the systemic circulation; the normal MAP is 18 mmHg. Pulmonary hypertension means that the pulmonary mean arterial pressure is above 25 mmHg and is almost always secondary to pulmonary vasoconstriction or pulmonary congestion. We distinguish three different types of pulmonary hypertension, depending on which site of the circulation is abnormal. They, and their causes, are:
- Precapillary pulmonary hypertension
  - Primary pulmonary hypertension
  - Hypoxic vasoconstriction
  - Sleep apnoea syndrome
- Capillary pulmonary hypertension
  - Pulmonary fibrosis, due to capillaries being destroyed by fibrosis
- Postcapillary pulmonary hypertension
  - Left-sided heart failure
  - Mitral stenosis

If a person has pulmonary hypertension but all possible causes are ruled out, that person is said to have primary or idiopathic pulmonary hypertension. 80% of cases of primary pulmonary hypertension have a genetic background. The consequences of pulmonary hypertension include dyspnoea, worsening of any already present respiratory failure, and increased strain on the right heart, possibly causing chronic cor pulmonale. Atherosclerosis is an extremely rare occurrence in the pulmonary arteries due to the low pressure. In pulmonary hypertension, however, it is possible.

Diffuse pulmonary haemorrhage syndromes

Syndromes belonging to this category are syndromes that cause primary pulmonary haemorrhage, without any infection or congestion. The most characteristic type is Goodpasture syndrome, a type II hypersensitivity reaction where the immune system produces antibodies against collagen IV. Collagen IV is found in basement membranes in the lungs and glomeruli, which is why haemorrhagic pneumonitis and glomerulonephritis are characteristic consequences. Idiopathic pulmonary haemosiderosis is a rare disease where Goodpasture antibodies can't be detected in the blood but the symptoms are similar to those in Goodpasture syndrome. Whatever the cause, the lungs become heavy and reddish-brown macroscopically. With histology we can see haemosiderin-laden macrophages in the alveoli. The most common symptoms are haemoptysis and anaemia.

Vasculitis haunts everyone forever

Granulomatosis with polyangiitis, also known as Wegener granulomatosis, can also affect the lungs. The disease has three components:
- Necrotizing granulomatosis of the respiratory tract

The lung manifestations are:
- Haemorrhagic, necrotic nodules in the lung
- Pulmonary haemorrhage
- Cavity-forming lesions
25 December marks the birth of Mohammed Ali Jinnah, the first Governor-General of Pakistan. He was born in Karachi and trained as a barrister in London; in 1895, at age 19, he became the youngest Indian to be called to the bar in England. Jinnah rose to prominence in the Indian National Congress in the first two decades of the 20th century. By 1940, Jinnah had come to believe that Indian Muslims should have their own state. In that year, the Muslim League, led by Jinnah, passed the Lahore Resolution, demanding a separate nation. Ultimately, the Congress and the Muslim League could not reach a power-sharing formula for a united India, leading all parties to agree to the separate independence of a predominantly Hindu India and of a Muslim-majority state, to be called Pakistan. In summarizing Jinnah's life and career, historian Stanley Wolpert stated, "Few individuals significantly alter the course of history. Fewer still modify the map of the world. Hardly anyone can be credited with creating a nation-state. Mohammad Ali Jinnah did all three."
Twilight is the interval before sunrise or after sunset during which the sky is still somewhat illuminated. Twilight occurs because sunlight illuminates the upper layers of the atmosphere. The light is scattered in all directions by the molecules of the air, reaches the observer and still illuminates the surroundings. The map shows which parts of the world are in daylight and which are in night. If you want to know the exact time of dawn or dusk at a specific place, that information is available in the meteorological data.

Why do we use UTC?

Coordinated Universal Time (UTC) is the main time standard by which the world regulates clocks and time. It is one of several successors closely related to Greenwich Mean Time (GMT). For most common purposes, UTC is synonymous with GMT, but GMT is no longer the most precisely defined standard for the scientific community.
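To make the day/twilight/night distinction on such a map concrete, here is a minimal Python sketch (not from the original page) that estimates the sun's elevation for a given latitude, longitude and UTC time and classifies the sky state. The -6, -12 and -18 degree cut-offs are the standard civil, nautical and astronomical twilight definitions, which the page does not spell out, and the solar-position formulas are the usual low-precision approximations, good to roughly half a degree.

```python
import math
from datetime import datetime, timezone

def solar_elevation(lat_deg, lon_deg, when_utc):
    """Approximate geometric solar elevation (degrees) at a UTC datetime.
    Low-precision almanac formulas; atmospheric refraction is ignored."""
    d = when_utc - datetime(2000, 1, 1, 12, tzinfo=timezone.utc)
    days = d.total_seconds() / 86400.0                      # days since J2000.0
    g = math.radians((357.529 + 0.98560028 * days) % 360)   # solar mean anomaly
    q = (280.459 + 0.98564736 * days) % 360                 # solar mean longitude
    L = math.radians(q + 1.915 * math.sin(g) + 0.020 * math.sin(2 * g))  # ecliptic longitude
    e = math.radians(23.439 - 0.00000036 * days)            # obliquity of the ecliptic
    decl = math.asin(math.sin(e) * math.sin(L))             # solar declination
    ra = math.degrees(math.atan2(math.cos(e) * math.sin(L), math.cos(L))) % 360  # right ascension
    gmst = (18.697374558 + 24.06570982441908 * days) % 24   # sidereal time, hours
    ha = math.radians((gmst * 15 + lon_deg - ra + 540) % 360 - 180)  # local hour angle
    lat = math.radians(lat_deg)
    return math.degrees(math.asin(
        math.sin(lat) * math.sin(decl) + math.cos(lat) * math.cos(decl) * math.cos(ha)))

def sky_state(elevation_deg):
    if elevation_deg > 0:   return "day"
    if elevation_deg > -6:  return "civil twilight"
    if elevation_deg > -12: return "nautical twilight"
    if elevation_deg > -18: return "astronomical twilight"
    return "night"

# Computed in UTC, as the article suggests; coordinates here are Madrid as an example.
now = datetime.now(timezone.utc)
print(sky_state(solar_elevation(40.4, -3.7, now)))
```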
Dating Fossils

Oldest fossils: 3.5 billion years old (prokaryotes); the first life forms are probably older.

What is evolution? The scientific theory that explains how species change over time. Today, species have changed both structurally and physiologically from what they were.

What are fossils? Remains or traces of once-living organisms that no longer exist. They are usually found in sedimentary rock, where the formation process is quick, preventing bacteria from decaying the organism. They usually consist of the hard parts of the organism, and minerals may replace the remains, preserving microscopic structures.
- Freezing, e.g. mammoths
- Imprints: a film of carbon left after decay
- Casts: sediments fill the mold of the organism
- Kinds of fossils: whole organisms, bones, plants, feathers, pollen grains

Dating fossils:
- Relative age: position in layers of sedimentary rocks.
- Absolute age: radioisotopes with unstable nuclei and a constant (known) rate of decay.
- Half-life: the length of time it takes for half of a radioactive material to decay.
- Isotopes: atoms with a different number of neutrons. 12C and 13C (6 protons, 6 or 7 neutrons) are stable; 14C (6 protons, 8 neutrons) is unstable, with a half-life of 5730 years.

How does dating work?
1) The ratio of 14C production to decay in the atmosphere is constant.
2) Organisms take in 12C and 14C at a constant rate.
3) When the organism dies, the 12C and 14C in the atmosphere remain constant, the 14C in the organism decays, and the 12C : 14C ratio in the organism changes.
4) Compare the 12C : 14C ratio in the organism to the atmospheric ratio.
5) Scientists can date organisms up to 50,000 years old using carbon dating. For older material, different isotopes are used, e.g. 40K (potassium-40), with a half-life of 1.28 billion years.

So... how is all of this information applied to fossils we find in the dirt?
http://images.usatoday.com/tech/_photos/2006/11/07/fossil472.jpg
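The arithmetic behind steps 3 to 5 reduces to one formula: age = half-life x log2(original amount / remaining amount). A small Python sketch (mine, not from the slides) using the half-lives quoted above:

```python
import math

C14_HALF_LIFE = 5730.0   # years, as given in the notes
K40_HALF_LIFE = 1.28e9   # years, potassium-40

def age_from_fraction(fraction_remaining, half_life):
    """Age of a sample from the fraction of the parent isotope still present.
    For carbon dating, fraction_remaining is the sample's 14C/12C ratio
    divided by the atmospheric 14C/12C ratio."""
    if not 0 < fraction_remaining <= 1:
        raise ValueError("fraction_remaining must be in (0, 1]")
    return half_life * math.log2(1.0 / fraction_remaining)

# A sample with one quarter of the atmospheric 14C ratio is two half-lives old:
print(age_from_fraction(0.25, C14_HALF_LIFE))   # ~11,460 years
# Beyond ~50,000 years too little 14C remains, so longer-lived isotopes such as 40K are used:
print(age_from_fraction(0.5, K40_HALF_LIFE))    # 640 million years
```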
This week we have been using one of the Magic Tree House books ‘Adventure in the Amazon‘, to continue with our South America unit study, focusing on the country of Brazil. South America: Goals for this Study Whilst working through the next unit of Rainforest Journey and reading Adventure in the Amazon, we will be focusing our learning on the following: - The country of Brazil, the largest country in South America - The adaptations animals make to stay alive South America: In My Book Box Afternoon on the Amazon is a book my younger girls know very well because they have listened to it lots and lots of times from one of their audio books CDs. I will be reading the companion book to them before bed each night. I always love to include the folklore of the countries we are studying. This is not an inexpensive book but as it has undoubtedly taken the place of many other picture books in addition to including lots of recipes and games, I think it will be worth the money. South America: The Geography of Brazil We did our usual building the map of the world and chatting about where Brazil is and the common animals and birds seen there: I then read the Brazil page out of the children’s Atlas of the World book, so they learnt the most important points about Brazil. Together we created another story board, this time about Brazil: We used lapbook pieces obtained from here. This would be for the presentation the little ones would be giving on South America: Science: Learning about Camouflage The bulk of our learning came from EdTechLens Rainforest Journey, where we covered structural, behavioral and genetic adaptations. I read the above camouflage book. The girls did narrations as well as some worksheets. Here are A7’s: Here are B5’s: We watched the following video Science Demonstration: Which environments suit the animals the best, based on their ability to camouflage themselves from predators I found an image of each of the following landscapes: - Polar region - Rain forest I had the little ones go and grab a few of their plastic animals and we tested to see which landscape they would be safest in: The Polar Regions: The Desert Regions: The Rain Forest: Science Activity: What changes would a polar bear need to make to be fully adapted for the rain forest environment? When we studied the polar regions, we did an art activity which demonstrated the adaptations a polar bear had to enable it to survive in the arctic: This time I wanted them to use their knowledge and their imagination to create a polar bear which had made adaptations to live in the rainforest. I said they could be as silly as they wanted, but there had to be a reason for whatever they adapted the polar bear to look like: I just loved their inventions 🙂 South America: Brazil – In the Kitchen As Gary has been recuperating this week, we only managed one of the things on my list – Brigadeiro . This was an incredibly easy and very tasty sweet made using only three ingredients – sweetened condensed milk, sugar and cocoa powder, and heated on the stove: left to cool: and formed into little balls: Ours were made, cooled, formed and eaten in no time at all. A nice easy, albeit not very healthy, addition to my snack repertoire 🙂 This was a great study! Next we will be reading the FIAR book Henry the Castaway and learning about the Orinoko river which flows through Venezuela.
In this kids' biography, discover the inspiring story of Ada Lovelace, who wrote the world's first computer program. In 1833, Ada Lovelace met mathematician Charles Babbage, inventor of calculating machines. She went on to devise a way of inputting data into Babbage's Analytical Engine, and in doing so became the first ever computer programmer. In this biography book for 8-11 year olds, learn all about Ada Lovelace's fascinating life, including her famous father (celebrated poet Lord Byron), her talent for languages and mathematics, and her predictions for how computers could change our lives. This new biography series from DK goes beyond the basic facts to tell the true life stories of history's most interesting people. Full-colour photographs and hand-drawn illustrations complement thoughtfully written, age-appropriate text to create an engaging book children will enjoy reading. Definition boxes, information sidebars, maps, inspiring quotes, and other nonfiction text features add depth, and a handy reference section at the back makes this the one biography series every teacher and librarian will want to collect. Each book also includes an author's introduction letter, a glossary, and an index.
Ultrasonic scanning systems are used for automated data acquisition and imaging. They typically integrate ultrasonic instrumentation, a scanning bridge, and computer controls. The signal strength and/or the time-of-flight of the signal is measured for every point in the scan plan. The value of the data is plotted using colors or shades of gray to produce detailed images of the surface or internal features of a component. Systems are usually capable of displaying the data in A-, B- and C-scan modes simultaneously. With any ultrasonic scanning system there are two factors to consider:
- how to generate and receive the ultrasound.
- how to scan the transducer(s) with respect to the part being inspected.
The most common ultrasonic scanning systems involve the use of an immersion tank as shown in the image above. The ultrasonic transducer and the part are placed under water so that consistent coupling is maintained by the water path as the transducer or part is moved within the tank. However, scanning systems come in a large variety of configurations to meet specific inspection needs. In the image to the right, an engineer aligns the heads of a squirter system that uses a through-transmission technique to inspect aircraft composite structures. In this system, the ultrasound travels through columns of forced water which are scanned about the part with a robotic system. A variation of the squirter system is the "Dripless Bubbler" scanning system, which is discussed below. It is often desirable to eliminate the need for water coupling, and a number of state-of-the-art UT scanning systems have done this. Laser ultrasonic systems use laser beams to generate the ultrasound and collect the resulting signals in a noncontact mode. Advances in transducer technology have led to the development of an inspection technique known as air-coupled ultrasonic inspection. These systems are capable of sending ultrasonic energy through air and getting enough energy into the part to produce a usable signal. These systems typically use a through-transmission technique, since the reflected energy from discontinuities is too weak to detect. The second major consideration is how to scan the transducer(s) with respect to the part being inspected. When the sample being inspected has a flat surface, a simple raster scan can be performed. If the sample is cylindrical, a turntable can be used to turn the sample while the transducer is held stationary or scanned in the axial direction of the cylinder. When the sample is irregularly shaped, scanning becomes more difficult. As illustrated in the beam modeling animation, curved surfaces can steer, focus and defocus the ultrasonic beam. For inspection applications involving parts having complex curvatures, scanning systems capable of performing contour following are usually necessary.
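As a rough illustration of how the measured time-of-flight at each scan point becomes an image value, here is a short Python sketch. It is not from the original material: the 5.9 mm/us velocity is just a typical textbook longitudinal-wave value for steel, and the grid is a made-up stand-in for real scan data.

```python
def echo_depth_mm(time_of_flight_us, velocity_mm_per_us=5.9):
    """Depth of a reflector in pulse-echo mode.
    The pulse travels to the reflector and back, hence the divide-by-two.
    Use the measured velocity of the actual material being inspected."""
    return velocity_mm_per_us * time_of_flight_us / 2.0

def c_scan(grid_of_tof_us, velocity_mm_per_us=5.9):
    """Turn a raster grid of time-of-flight readings (one per scan point)
    into a grid of depths: the data behind a C-scan image, where each
    value is then mapped to a colour or grey level."""
    return [[echo_depth_mm(t, velocity_mm_per_us) for t in row]
            for row in grid_of_tof_us]

# An 8.5 us round trip in steel corresponds to a reflector about 25 mm deep:
print(echo_depth_mm(8.5))

# A tiny 2 x 3 "scan" with one shallower reading, e.g. an internal flaw:
print(c_scan([[8.5, 8.5, 8.5],
              [8.5, 4.0, 8.5]]))
```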
Aboriginal Dreamtime (ages 12-13) This scheme of work explores the culture of Australian Aboriginals, drawing inspiration from their art, music and history. The story of the creation is introduced, as seen through the Aborigine eyes, with students miming and developing the characters of the animals at a watering hole using a class soundscape, tableau and symbolism. The origins of Australia serve as the backdrop for a discussion about how cultures may clash, with students devising a movement piece to illustrate this. Groups then use examples of art to devise their own pieces for assessment, performed to music. This scheme of work contains 6 drama lesson plans. - Lesson 1: The Creation. An aboriginal tale is introduced, with students first visualising and miming Australian animals and performing a soundscape of the earth’s creation. - Lesson 2: Developing the Characters. Different animals in the story are modelled and hotseated to draw out their vocal and physical traits, supported by mime and role-play. - Lesson 3: At The Watering Hole. The students re-tell the story to develop it further and create a tableau around the watering hole. Group work uses frozen pictures, slow motion, thought aloud and flashbacks to add depth to the scene. - Lesson 4: Origins of Australia. Groups discuss a piece of Aboriginal art and develop vocal and movement pieces based on their observations of aboriginal culture and belief system. - Lesson 5: Going Walkabout. The concept of Walkabout is explored and modelled through split scenes with mime and dialogue used at key moments. - Lesson 6: Assessment. Students develop a piece of physical theatre using music to demonstrate their understanding of the dreamtime story. A piece of didgeridoo music is included! Supporting materials include - Aboriginal art - Digeridoo music Additional resources are included in the appendices - Basic Drama Skills Sheet - End Of Unit Self-Assessment Form The scheme of work is supplied as a ZIP file. It contains the scheme of work (PDF, readable on most computers) along with 1 music file (MP3 format.) More Lesson Plans Developing vocal delivery through fun exercises and a little Shakespeare (Year 8) One night over dinner the Earl is found dead. A classic whodunnit? (Year 8) Learning to think quickly and creatively with an Olympian Improvisation finale (Year 8) Victorian stage fun, stock characters and villainous intent (Year 8) Exploring classic storytelling, stock characters and key elements (Year 8) Improvised comedy using stock characters and masks (Year 8) Understanding the plight of the homeless and their families (Year 8) Greek tragedy with Theseus, Ariadne and a labyrinth (Year 8) Drama inspired by Aboriginal Dreamtime art and music (Year 8) Shakespeare introduction with Olivia, Viola and Malvolio (Year 8) Creating characters using props, improvisation and text (Year 8) Using music to create tension, mood and atmosphere (Year 8) Annie Besant and the famous match workers strike of 1888 (Year 8) An ancient take on comedy, tragedy and pantomime (Year 8)
1) Seed-dispersal caused land animals by nourishing them.

Let's consider fruit and the animals eating it. The plant is giving away something nutritious so animals will come along and swallow the plant's seeds, and then some days later deposit those seeds somewhere else. The plant species that do this not only tend to thrive in greater numbers, and over more territory, but they also enlarge their breeding network and thus evolve faster. What did land plants do before amphibians evolved about 370-mya (million years ago)? Before then, seed dispersal must have been a huge problem for plants. Wind and insects may work for pollination, but not so well for spreading seeds. So before amphibians evolved, most plants could only expand their territory by a few meters a year. At this time, water-borne seed dispersal would spread seeds farthest. This caused nature to favor the plants that would disperse their seeds via waterways. Here we imagine the world that produced the amphibians, and how there was nutrition just everywhere on the shorelines for the first amphibians that could reach it. And at first, these were the fish that could get the front part of their body out of the water. Then, after about 370-mya, when the first amphibians appeared, there was about a 50-million year period where the measure of terrestrial adaptation was mostly how far the amphibian species could get from water, and into the surrounding nutritional El Dorado. This culminated in the evolution of the first reptiles about 320-mya. Reptiles did not need to return to the water to breed. So it took about 50 million years to get from legs to full reptiles that didn't need to return to the water. 320-mya was the age of cycads, or proto-cycads, essentially palm-tree-like plants. And today, many cycads still spread their seeds via water, for example the sprouted coconut and atap seeds shown below. Coconuts are commonly seen washed up on a distant shore as shown below. Over time, fish evolved to feed on the nutritious water-borne seeds and spores, which at first had no protection at all from animal consumption. And no doubt, the struggle for nutrition drove some fish to begin entering shallower and shallower waters, and then finally onto dry land to get more of the nutritious seeds. About 300-mya the conifers, the pine-tree-like plants, evolved. And here we should note how pine cones float, and have a hard husk and smallish seeds. Now rains would occasionally wash the seeds of nearby plants into the waterways, so many plants that were merely near rivers (and small tributaries) were also being favored by river-based seed dispersal. These probably evolved to drop their seeds throughout the year because heavy rain would often carry their seeds away. Therefore we imagine that the areas around the world's rivers and brooks were full of nutritious seeds until the arrival of more terrestrial fish. At some point, undigested seeds began passing through these terrestrial aquatic creatures and onto land, some distance away. Then once this seed-dispersal symbiosis evolved, the plants participating in it had an advantage (a seed and territory spreading advantage) over all the other plant species. And at the same time, the "land-fish" eating the seeds of these plants were spreading the territory of their symbiots, so their food supply was constantly being enlarged by their own feeding. The initial limiting factor seems to have been the ability of the fish to breathe air and move far from the water in search of seeds.
And the individual fish that could do this was both best at finding food and at dispersing the seeds of its symbiots. Thus the benefits of seed spreading favored the fish that could venture farther and farther onto the land to find seeds to eat and spread. And this is the evolutionary hurdle that caused the amphibians, beginning about 370-mya. This is also where the conifers (pine, juniper, holly) apparently came from, also about 370-mya. The timing coincidence is because the two were apparently symbiots.

2) Seed-spreading caused reptiles by feeding them.

Seed spreading pulled tetrapod (4-leg) life far from the water by feeding it. About 320 to 310-mya, we start to see animals that did not go back to the water to breed, namely the reptiles. This was about the same time when cycads evolved. Cycads are palm-like plants which today put coconuts and giant bundles of dates high up on a long stalk. The early cycads were however short. Some experts think the first proto-cycads began around 325-mya, although some experts with a stricter definition of what a cycad is think that it was as late as 280-mya. Thus due to timing, it appears that the fruiting cycads were the main symbiot of the first non-amphibious reptiles. Thus it appears that seed dispersal not only pulled life from the water causing the amphibians, but it also pulled animal life away from the water causing the reptiles.

3) Seed spreading caused plants to offer fruit

After some time, the plants began helping their symbiots (and the dispersal of their own seeds) by providing fruit nutrition outside the seed itself. This is why we have fruits of two parts. One part is tasty, soft, sweet fruit, and the other is the hard inedible seed, like with an avocado, peach or mango. The delicious edible fruit evolved as the symbiot-helping bribe. The hard inedible seed evolved as the part to pass through the digestive tract of the symbiot. This fruit bribery is what the angiosperm plants (the flowering and fruiting plants) evolved for. And we do find fossilized angiosperm-like pollen from about 245-mya. And while the first dinosaur fossils officially come from about 225-mya, their proto-dinosaur ancestors could have easily been angiosperm symbiots say 20-million years earlier. The arms race of the plants (to spread their seeds farthest) led them to give all they could to their most beneficial seed-spreading animal symbiots (as we see in the hugely nutritious durian above). Thus the benefit of terrestrial seed spreading caused the plants to draw the fish out of the water by nourishing them. Then seed spreading pulled the tetrapods (4-legged animals) far from the water. Then, in a similar way, seed spreading caused the plants to draw the animals into the treetops for reasons that we will get to next.

4) Seed-spreading caused dinosaurs to be big
5) Seed-spreading caused tall trees with macro-fruit

Let's go back to before birds and bird-based seed-spreading evolved. At this time, if plants wanted animals to spread their seeds, they had to give away a fruit-snack just like today. But at that time (before birds), the seeds swallowed by the biggest and highest-reaching animals tended to be spread further, giving that tree species an important advantage over seeds swallowed by smaller creatures that did not need to constantly roam in search of food. So before the evolution of birds as effective seed spreaders, survival favored trees with giant, nutritious fruits high up, along with their giant animal symbiots capable of reaching this fruit.
This is the evolutionary force that caused both dinosaur gigantism and tall trees. Now saying that dinosaurs also ate fruit is a bit of a heresy for paleontologists. This is because there are many dinosaur coprolites (fossilized feces) that have been analyzed, and none show any evidence of sweet fruit in the dinosaur diet. They show evidence of leaves and pine cones, but not sweet fruit. Coprolites are however quite rare, and they are even rarer inside dinosaurs. Also, we don't find any large fossilized sweet fruit outside of dinosaurs because high-energy fruit, like high-energy dinosaur soft tissue, normally breaks down completely, unlike low-energy pine cones and leaves. So it seems to be that the presence of fruit in the dinosaur's gut causes all the gut contents to break down completely, and we only find coprolites when dinosaurs were eating low-energy foods.

6) Seed-spreading caused dangling fruit on break-away stems

After some time, small energy-efficient animals evolved to climb the trees to steal the macro-fruit (durians, jackfruit etc.) and cheat the dinosaur (mostly sauropod) seed-spreading system. This caused the fruiting plants to evolve dangling fruit on break-away stems. At first this boobytrap, or counter-measure, must have been highly effective at preventing small animals from cheating the system.

7) Seed-spreading caused birds

Today the main theory of the origin of the birds is that they evolved jumping down on prey animals. This theory however works better if we use fruit instead of prey animals. Thus the birds came from the cheating animals that survived the fall when the dangling fruit broke away. First they survived small falls, then larger falls, then falls from any tree. Then they evolved to conserve energy jumping and gliding between trees… of greater and greater distance. Then they were sort of flying.

8) Seed-spreading caused mammals

The mammals came from the animals that were so small they either didn't trigger the fruit to fall, or were so inconsequential that their group would prosper anyway if they died bringing down a giant jackfruit that would feed their entire group for a long time.

9) Seed-spreading ended the giant dinosaurs

Once birds evolved sufficiently, plants had a more efficient means of spreading their seeds. At this point, the plants that continued to provide macro-fruit for dinosaurs were wasting huge amounts of energy on seed spreading. These were at a huge disadvantage and eventually these plant species either adapted or mostly died out. Then once the macro-fruit trees all started catering to the birds and their tiny appetites, the dinosaurs had less and less food to eat. Ultimately a particularly large climate shock delivered the final blow.

An integrated theory of tetrapod evolution

Here is a single unified theory that explains most of the most important developments in land animal evolution.
Continental slope – The slope is "the deepening sea floor out from the shelf edge to the upper limit of the continental rise, or the point where there is a general decrease in steepness" (IHO, 2008). Compared with the relatively flat surface and gentle inclination of the continental shelf, the continental slope dips steeply into the ocean basins at an average angle of around 4°, although it may be much steeper locally (35 to 90°). The continental slope (often referred to simply as "the slope") is commonly dissected by submarine canyons; faulting, rifting and slumping of large blocks of sediment can form steep escarpments, relatively flat terraces and (under certain conditions) basins perched on the slope. On average, the slope is a narrow band ~41 km wide that encircles all continents and islands. The passive margin slopes of the South Atlantic Ocean are the widest on average (73 km), although the slope attains its greatest width of 368 km in the North Atlantic, where the slope protrudes south of Newfoundland. The narrowest slopes, on active margins, are in the Mediterranean and Black Seas (25.8 km). The average width of active slopes (35.6 km) is somewhat less than the average width of passive margin slopes (45.7 km).
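As a back-of-envelope check on the figures above, the vertical relief across a slope follows from its width and average dip (width x tan(dip)). The small Python sketch below, which is not from the source and treats the dip as uniform (real slopes are not), gives a few kilometres of relief for the "average" slope, consistent with a drop from the shelf edge down toward the continental rise.

```python
import math

def slope_relief_km(width_km, angle_deg):
    """Vertical drop across a continental slope of a given width and average dip."""
    return width_km * math.tan(math.radians(angle_deg))

# The average slope quoted above: ~41 km wide, dipping at ~4 degrees.
print(round(slope_relief_km(41, 4), 1))   # roughly 2.9 km of relief
```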
Will Plants and Pollinators Get Out of Sync?

According to Esaias, the changes aren’t just dramatic, they’re also kind of scary. The fertility of most flowering plants, including nearly all fruits and vegetables, depends on animal-mediated pollination. As the pollinators move from flower to flower for nectar--a high-energy, sugary enticement—the plants dust them with pollen, which the animals transfer from flower to flower. “Flowering plants and pollinators co-evolved. Pollination is the key event for a plant and for the pollinators in the year. That’s where pollinators get their food, and that’s what determines whether the plant will set fruit. Some species of pollinators have co-evolved with one species of plant, and the two species time their cycles to coincide, for example, insects maturing from larva to adult precisely when nectar flows begin,” says Esaias. The concern is that in thousands upon thousands of cases, we don’t really know what environmental and genetic cues plants and pollinators use to manage this synchrony. According to ecologist David Inouye of the University of Maryland, some plant-pollinator pairs in a particular area likely do respond to the same environmental cues, and it’s reasonable to expect they will react similarly to climate change. But other pairs use different cues, the pollinator emerging in response to air temperature, for example, while the plant flowers in response to snow melt. Migratory pollinators, like hummingbirds, seem to be particularly at risk, since climate change will almost certainly affect different latitudes differently. There is no guarantee that the thousands of plant-pollinator interactions that sustain the productivity of our crops and natural ecosystems won’t be disrupted by climate change. As an example of how environmental cues for the timing of significant life cycle events might become uncoupled, Esaias points out that you don’t have to look any further than his bees. “What limits the growth of my honeybees in the spring are those coldest of the cold nights, because what is happening in their colony is that they are in a cluster, and they have to keep the queen and the larvae at 93 degrees. They do that by eating lots of honey, and tensing their muscles, and generating heat.” When it gets warm enough outside for them to maintain a temperature of 93 degrees, they start laying eggs around the edges of the cluster, and the cluster begins to expand. As long as the workers can keep the brood temperatures at 93 degrees, the eggs will grow into adult bees in about 3 weeks. But if a single cold, cold night in March intervenes, says Esaias, then eggs at the edges that the workers can’t keep warm will die. The cluster shrinks, and the colony must begin again. “Trees, on the other hand, may not feel those cold temperatures in the same way because their roots are well insulated,” Esaias suggests. The sun-warmed ground is slower to chill than the air, so trees may not be feeling the cold snaps in the same way that the bee colony does. Thus, flowering may occur before the bee colony has built up enough workers to take advantage of it, which means the hive will struggle to stockpile enough honey to sustain them through the next winter. “I am not saying they are definitely different,” Esaias stresses, “I am just saying there are good reasons to think that their response to climate change would not be identical.
The truth is we don’t know what the relationships are between weather and climate, pollinators, and plants for thousands of species.” Since crops alone can’t sustain the pollen and nectar requirements of honeybee colonies, the potential for honeybees and other pollinators to become out of sync with their most important natural food sources is something that concerns Esaias. A national network of scale-equipped honeybee hives, Esaias believes, would reveal when flowering occurs now and help us better predict how plants and pollinators in both natural and agricultural ecosystems will—or won’t—adapt to climate change in the future. Perhaps the best part of the whole idea, according to Esaias, is that 1-to-5-kilometer-radius area in which a hive’s worker bees forage is the same spatial scale that many ecological and climate models use to predict ecosystems’ responses to climate change. It also matches the spatial scale of satellite images of vegetation collected by NASA’s Terra and Aqua satellites. This similarity of scale means that all these ways of studying ecosystems could be integrated into a more sophisticated picture of how plant and animal communities will respond to climate change than any one method alone could provide. Esaias is particularly interested in comparing the hive data to satellite-based maps of vegetation “greenness,” a scale that remote-sensing scientists commonly use to map the health and density of Earth’s vegetation. Scientists have been making these types of maps for decades, and they have used them to document how warming temperatures in the Northern Hemisphere are causing vegetation to green up earlier in the spring than it did in the 1980s. Such maps are an excellent general indicator of seasonal changes in vegetation, says Esaias, but by themselves, they won’t tell you something as tangible as when plants are flowering. “But if we compare flowering times based on the bee hives to the satellite data, it’s possible we will see some correlated signal or pattern that we didn’t notice before,” he says. “If we can establish a relationship between the hive data in a particular ecosystem and satellite data, then we could use our global satellite data from Terra and Aqua to map flowering times for similar ecosystems. We could make predictions about what is happening to nectar flows and the species that depend on them in places where we don’t have scale hives.” That sort of ground-truth data from scale hives could also be used to evaluate ecosystem models. According to Hank Shugart, a scientist at the University of Virginia who specializes in forest ecosystem modeling, the timing of seasonal events like leaf emergence and flowering are usually related to the accumulated time an area spends above a plant’s minimum growth temperature, a biological benchmark known as “growing degree days.” “It turns out that these heat-sum type approaches are pretty good at predicting the timing of these seasonal events,” says Shugart. In general, a plant will put out leaves or flower after the number of growing degrees days that species requires has passed. “What that means,” he says, “is that the greening-up that the satellites can see is probably also related, for most plants, to their flowering time,” which satellites cannot see. Honeybee hive data would be “a marvelous idea” for verifying the connections, says Shugart.
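Shugart's heat-sum idea can be written down in a few lines. The Python sketch below is not from the article: it uses the simplest growing-degree-day formula, in which each day contributes the amount by which its mean temperature exceeds a species-specific base temperature, and it returns the day on which a species' required total is reached. The base temperature and required total used here are illustrative assumptions that vary by species.

```python
def growing_degree_days(daily_min_c, daily_max_c, base_temp_c=10.0):
    """Accumulate growing degree days (GDD) from daily min/max temperatures.
    Each day contributes max(0, mean(Tmin, Tmax) - base)."""
    total = 0.0
    for tmin, tmax in zip(daily_min_c, daily_max_c):
        total += max(0.0, (tmin + tmax) / 2.0 - base_temp_c)
    return total

def predicted_event_day(daily_min_c, daily_max_c, required_gdd, base_temp_c=10.0):
    """First day (1-indexed) on which the accumulated heat sum reaches the
    species' requirement, or None if it is never reached. Under the heat-sum
    model, leaf-out or flowering is expected around this day."""
    total = 0.0
    for day, (tmin, tmax) in enumerate(zip(daily_min_c, daily_max_c), start=1):
        total += max(0.0, (tmin + tmax) / 2.0 - base_temp_c)
        if total >= required_gdd:
            return day
    return None

# Toy spring warm-up repeated over 60 days, with a hypothetical 150-GDD requirement:
tmins = [2, 4, 6, 8, 10, 12] * 10
tmaxs = [12, 14, 16, 18, 20, 22] * 10
print(predicted_event_day(tmins, tmaxs, required_gdd=150))   # -> 59 for this toy series
```

A warmer spring shifts the whole series up and the predicted day arrives earlier, which is the mechanism behind the earlier green-up seen in the satellite record.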
First defense barriers of the body and how to strengthen them

When a foreign body enters the body, whether it is a germ or a toxic substance, it is faced by our immune system. In fighting disease there are two types of immunity: innate immunity, which is the immediate response to an infection and which we are all born with, and adaptive or acquired immunity, which develops after the first exposure to an infection and produces antibodies against the germ the next time it enters. In this article we will be talking about innate immunity, or natural immunity: how it works and how we can strengthen this system.

The first line of defenses

The first line of defenses is what we can easily see from the outside and through which a tiny particle or germ can easily enter. They include:
- Skin
Skin is the largest organ in the body. It serves as a giant fort, covering and protecting the body from any possible harmful substances. It is also waterproof. Naturally, both good and bad bacteria cohabit on the skin. However, as long as the skin is intact, no germ can enter the body.
- Respiratory system
The respiratory system has many natural defenses. The nose responds to a foreign particle by sneezing, or it can produce mucus and swallow the particle whole. That is why a runny nose and mucus production happen. If the particle is not removed by these mechanisms, the trachea contains small hairs called cilia, which cannot be seen with the naked eye. They have the ability to sweep away germs and particles.
- Digestive system
Similarly, the digestive system has its own natural defenses. We vomit when we ingest something that is harmful to our body. Moreover, the gut has a mucous lining which is a barrier between the gut and the blood, so that infections don't easily enter the bloodstream. There are also stomach acids that can effectively kill many kinds of infections. Many good bacteria live along the epithelial lining of the digestive system. We can safely say the stomach and intestines are the filthiest internal organs in the body, so a lot of guarding mechanisms reside here to prevent the spread of germs from this area.

What can we do for a great first line of immunity?

- Skin
- Sunlight can be very harmful to the skin. Free radicals and oxidants resulting from repeated exposure to sunlight can damage the skin and compromise the immune system. Putting on sunscreen and eating food rich in the antioxidants vitamin C and vitamin E can help protect the skin.
- Dehydration can also cause wrinkling of the skin, which makes it more vulnerable to infections. It's important to hydrate yourself throughout the day.
- Skin needs constant renewal. Shedding old skin cells and replacing them with new ones is crucial to a germ-resistant physical barrier. Amino acids are necessary for forming new cells, so an adequate intake of protein is also important.
- Skin problems can facilitate the entry of germs. Proper and fast wound healing is important, and collagen can accelerate wound healing. Vitamin C and zinc can also help with wound healing.
- Respiratory system
The air we breathe contains dust, particles and many infectious germs. To be able to withstand this aerial pollution, we need to boost our respiratory immunity as well.
- Vitamin E – Vitamin E in nuts can reduce inflammation in cells and relieve tracheal congestion. It also has antioxidant properties and therefore protects the respiratory cells from injury.
It also aids in transfer of oxygen by the red blood cells. - Vitamin A – Vitamin A is mostly found in green and yellow fruits and it can improve asthma and protects against cell injury. - Vitamin C – A good antioxidant, vitamin C can also lessen cell turnover. - Magnesium – It can improve the elasticity of bronchi and bronchioles and facilitate entry of more air into the lungs. - Digestive system - Good bacteria (probiotics) – They can be found in many milk and milk products. Probiotics in the gut lining can reduce the risk of stomach problems and diarrhea. - Prebiotics – They are food for the good bacteria (probiotics) and they can be easily found in many fermented foods such as kimchi. Probiotics can maintain the gut health while prebiotics are necessary to nourish and populate the healthy bacteria. - Vitamin C – It can reduce cell injury in stomach and intestines. - Zinc – It promotes wound healing in gut epithelial lining. Zinc can also quicken the recovery time after diarrheal episodes. Natural immune defenses need to be strong to reduce the risk for any kind of infections. The habits and methods mentioned above can improve the immune system in every way. It is best if you eat whole organic food but for older people or those with underlying diseases who are unable to consume enough nutritious food, meal replacements and supplements are great alternatives.
With exceptional electrical, optical, mechanical and chemical properties, graphene was deemed a wonder material upon its fabrication in Manchester, UK in 2004. It is composed solely of carbon, bonded in a hexagonal honeycomb structure in a layer only one atom thick. Aside from electronics, it has been hypothesized that graphene can be used in a number of medical applications. This is due to its unique properties: a two-dimensional planar structure, a large surface area, good biocompatibility and good chemical stability.
Graphene in Medicine
- Cancer Treatments
There are vast possibilities for graphene in medicine. One of the most critical applications is in cancer treatment. It has been suggested that functionalized nano-sized graphene can be used as a drug carrier for in vitro intracellular delivery of anticancer chemotherapy drugs. So far, nano-graphene with a biocompatible polyethylene glycol (PEG) coating has been used for effective ablation of tumors in mouse models. In addition, a new microfluidic chip based on graphene oxide is being developed that can capture tumor cells from blood and support their growth for further analysis. Once completed, this device could be used for cancer diagnosis, as well as for treatment options that don’t require biopsies, sparing patients discomfort and the risk of infection after a biopsy. The basic biological mechanisms by which cancer cells metastasize, or spread to distant organs, could also be studied using this innovative device. Similar to the microfluidic chip, graphene-based biosensors are also being developed to electrically detect E. coli bacteria. A number of drug molecules can also be attached to the surface of graphene, so it can be used in drug delivery to target diseases that are found on the surface of cells.
- Birth Control
Graphene has also been touted as a much more effective material than latex for birth control and the prevention of sexually transmitted diseases. University of Manchester researchers are developing a new graphene-latex composite for use in condoms that will be thinner, stronger, safer and more flexible than ever before.
- Antimicrobial Applications
Researchers at Case Western Reserve University, USA are planning to capitalize on graphene’s antimicrobial properties to reduce infections in hospitals. It can be used to coat stents and other medical devices, making them much safer for surgeries. They believe that graphene can slow the spread of antibiotic-resistant superbugs.
- Neurological Disorders
Neural stem cell (NSC) therapy is being researched as a treatment for numerous neurological disorders. However, NSCs require scaffolds to provide micro-environments for their growth and differentiation. Korean researchers have discovered that graphene sheets can support the required growth, and more recently Chinese researchers have created graphene foam that can act as an efficient NSC scaffold.
- Genetic Diseases
Kostas Kostarelos, a nanomedicine researcher at the University of Manchester, believes that nanotechnologies can be used to deliver genetic information to specific regions of the brain for patients suffering from neurodegenerative disorders. Alongside his team, he plans to work towards uniting graphene and medicine in the coming years.
Do you like to exercise? What's your favorite way to break a sweat? Do you prefer playing a team sport, like volleyball or baseball? Or do you like to run for miles and miles all by yourself? Maybe you would rather lift weights or do push-ups or sit-ups! You know that exercise is good for your body. It helps your muscles grow stronger. It can also help keep your heart and other important organs in tip-top shape. But can it make you smarter? Researchers have learned that exercise can play a role in improving learning and memory. Basically, exercising can help you do what you do — whatever it is — better. More studies are needed, though, to determine exactly how exercise helps the brain function at its best. Some researchers have ideas for how exercise might help, though. For example, scientists point out that exercise stimulates the body's nervous system, causing it to release chemicals such as serotonin and dopamine that make us feel happy and calm. This helps to explain why many people feel more alive and alert after exercising. And if you feel better, you can think more clearly and concentrate better. Overall, you can function at a higher level after exercise. Others point to studies that show that exercise can stimulate the growth of new brain cells. As your brain gets bigger, the areas associated with memory and learning get bigger and overall brain function improves. So how much exercise do you need to help your brain function at its best? Experts believe that as little as 15-30 minutes per day three times a week may be enough to improve brain performance. Thirty to sixty minutes per day four to five times a week is even better. As with all exercise, the more you put into it, the more you get out of it. Exercise is good for your body in so many ways, so incorporate as much exercise as you can into your daily or weekly routine. Just take it easy starting out until your body adjusts to getting more exercise.
Antarctica wasn’t always cold and snowy, but it iced over rapidly. Scientists attribute this to a redistribution of ocean currents and… rain. About 34 million years ago, Antarctica, which had drifted to the South Pole and until then had been only moderately cold, cooled and became rapidly covered with ice. By our time the Antarctic ice sheet had become the largest on the planet. It contains about 30 million cubic kilometers of ice – about 80 percent of all the fresh water on the planet – and Antarctica remains the coldest place on Earth. In an article published in the journal Nature Geoscience, geologists explain this powerful cooling by two factors. First, the Drake Passage, which separates Antarctica and South America, became deeper and wider, which increased the mixing of water in the polar latitudes of the Pacific and Atlantic Oceans. Warm currents such as the Gulf Stream began heading north sooner, which accelerated the cooling of Antarctica. All this allowed a second factor to take effect: a drop in the level of carbon dioxide in the atmosphere. It had been falling since the early Cenozoic, 60 million years ago, but the changes in ocean currents dramatically accelerated the process. Rain fell more often, “washing” carbon dioxide out of the air; the carbon dioxide reacted with rocks and became bound up in them, and its warming greenhouse effect correspondingly decreased. It is worth noting that until now the redistribution of ocean currents caused by the widening of the Drake Passage, on the one hand, and the decrease of carbon dioxide in the atmosphere, on the other, had been treated as competing hypotheses for the glaciation of Antarctica. The authors of the new work were able to merge the two versions. “For us it is an extremely interesting lesson,” emphasizes one of the authors, Professor Galen Halverson of McGill University in Canada. In effect, the researchers show a picture of the climate “switching” between two stable states: glaciers are either absent or rapidly and completely cover the continent.
What is a slipped disk? The disks are pads of tissue situated between each of the vertebrae that make up the spinal column. Each disk consists of a tough, fibrous outer ring called the annulus fibrosus and a softer, jellylike inner layer called the nucleus pulposus. The function of the disk is to act as both a strong connection between the vertebrae and a cushion to absorb weight on the spinal column. A slipped disk does not really slip; the tough outer fibrous ring (annulus) cracks open and the softer inner layer protrudes (prolapses) through the crack, like toothpaste coming out through a crack in a toothpaste tube. For this reason doctors prefer to speak of a disk prolapse rather than a slipped disk. The nucleus of a disk is softest and most jellylike during childhood. Over the years the nucleus gradually dries out so that by middle age it has a consistency similar to crabmeat. As someone gets older, the nucleus becomes even firmer. In elderly people the disk is mainly a section of scar tissue; this accounts for the fact that old people lose height. Slipped disk occurs less frequently as people get older; it is a disorder affecting young adults and people in early middle age. Disk protrusion occurs where the outer layer of the disk is weakest; that is, just in front of the nerve roots, which emerge from the spinal cord at each vertebral level. There is very little free space within the spinal canal, and the protruding disk material presses on the nerve root at that level and causes the painful symptoms of a slipped disk. The area of the spine most likely to be affected is the lowermost part of the back. Here the greatest strains occur, and it is not surprising that most disks that fail are at this level. However, it is possible for disks to prolapse at any level along the length of the spinal column – in the back or the neck.
Causes of slipped disk
- Intervertebral joint degeneration
- Severe strain or trauma
Symptoms of slipped disk
When a prolapsing disk presses on a nerve root, symptoms occur both in the back and in the area that the nerve root supplies. For example, a slipped disk in the lower back can cause pain in the legs. Symptoms in the back can include severe backache. Often the sufferer will not be able to localize the pain with any accuracy. He or she may also develop painful spasms in the muscles that lie along each side of the spine, particularly in the early stages. The patient will feel more pain when moving about and some relief when lying flat. Coughing or sneezing can cause the prolapsing disk material to bulge out suddenly, causing a sharp pain in the back or legs. In addition there may be a curvature of the spine—the patient unconsciously leans away from the side of the disk prolapse to try to relieve the pressure on the nerve root that is involved. If the pressure on the nerve root is not too severe, the nerve will continue to work but will be painful. The brain cannot tell that the painful pressure is coming from the area of the disk, but instead interprets the information as pain originating in the nerve end. In a lower back disk protrusion the sciatic nerve can be irritated, and the individual may feel pain in the thigh, calf, ankle, or foot. This pain can shoot down a leg and is then called sciatica. More severe pressure on the nerve root may cause the nerve to stop functioning altogether.
Areas of skin that the nerve supplies will become numb, so that a light touch or even a pinprick cannot be felt. Muscles supplied by the nerve will become weak or even completely paralyzed. Reflexes such as the knee jerk reflex may disappear. If only one nerve root is involved this is not too serious, because each nerve supplies only a small area of skin, or a limited number of muscles. If the nerves to the bladder or genitals are affected, however, their function can be permanently lost. Urgent medical attention is needed to relieve the pressure on these nerves.
Diagnosis of slipped disk
Obtaining a careful patient history is vital because the events that intensify disk pain are diagnostically significant. The straight leg raising test and its variants are perhaps the best tests for a slipped disk, but may still be negative. For the straight leg raising test, the patient lies in a supine position while the examiner places one hand on the patient’s ilium, to stabilize the pelvis, and the other hand under the ankle, and then slowly raises the patient’s leg. The test is positive only if the patient complains of posterior leg (sciatic) pain, not back pain. In the Lasegue test, the patient lies flat while the thigh and knee are flexed to a 90 degree angle. Resistance and pain, as well as loss of the ankle or knee jerk reflex, indicate spinal root compression. X-rays of the spine are essential to rule out other abnormalities but may not diagnose a slipped disk, because marked disk prolapse can be present despite a normal X-ray. A thorough check of the patient’s peripheral vascular status—including posterior tibial and dorsalis pedis pulses and skin temperature of the limbs—helps rule out ischemic disease, another cause of leg pain or numbness.
Totalitarianism, in its adjectival form ‘totalitarian’, originated in 1923 among the opponents of Italian Fascism, who used it as a term of abuse to describe the government and politics of Mussolini. The period 1918-39 saw a reaction against democratic governments in Europe and elsewhere and the rise of totalitarian regimes in a number of states. In Italy, a liberal government was overthrown and a fascist regime under the leadership of Mussolini was set up in 1922. Before that, a communist regime had been established in Russia in 1917. The trend continued with Spain, Portugal, Germany and Japan slipping into dictatorial regimes. All these regimes were characterized as totalitarian because they, as Hannah Arendt pointed out, were a novel form of government and not just modern versions of dictatorships that have existed since antiquity. Although the word began as a term of abuse among the opponents of Italian Fascism, the fascists embraced it as a fitting description of the true goal and nature of their regime. When Mussolini expounded the doctrine of ‘everything within the state, nothing outside the state, nothing against the state’ in a speech in 1925, he brought forth the essential nature of a totalitarian state. If nothing could stand outside the state, there could be no free markets, no free political parties, no free families and no free churches. Thus, totalitarianism stands at the opposite pole of liberal democracy. Under a totalitarian regime, the state controls nearly every aspect of individual life and does not tolerate activities by individuals or groups that are not directed by the state’s goals. While Mussolini applied the term to his own regime in Italy, Leon Trotsky applied it to both fascism and ‘Stalinism’ as ‘symmetrical phenomena’, and the great thinker Hannah Arendt popularized the term in order to illustrate the commonalities between Nazi Germany and the Stalinist Soviet Union. Thus, the main examples of regimes considered totalitarian are Fascist Italy, Nazi Germany and the Soviet Union under Stalin. Giovanni Gentile, explaining the concept, says that ‘totalitarian’ is the condition of a state in which all activities of civil society, inadvertently or not, ultimately lead to, and therefore perpetually exist in, something resembling a state. William Ebenstein describes the nature of such a state as ‘the organization of government and society by a single-party dictatorship, intensely nationalist, racist, militarist and imperialist.’ Totalitarianism cultivates and encourages state worship. It preaches that every individual’s life belongs not to him but to the state and to the state alone. Individuals acquire significance only by service to the state, and if they do not identify themselves with the state, they are little more than atoms. Thus, a totalitarian state permits no autonomous institutions, and the aims, activities and membership of all associations are subject to the control of the state. The state becomes omnipotent and omnipresent. Religion, morals and education are subordinate to the state. The aim of totalitarianism is to abolish the fundamental distinction between the state and society and to make the state unlimited. Franz Schanwecher, the Nazi theorist, used to say ‘the nation enjoys a direct and deep unity with God…. Germany is the kingdom of God’. Here, it is necessary to point out that the totalitarian theory of the state was not a full-fledged theory to start with.
It gradually evolved and was worked out of practical movements and actual socio-political situations. Thus, in this case, theory followed practice instead of preceding it. Among the thinkers who have analyzed totalitarian theory and movements, the names of Hannah Arendt, Carl Friedrich, Zbigniew Brzezinski and Jeane Kirkpatrick figure prominently.
Today I got a chance to do a lesson that gets 6th graders to think critically about the meaning of multiplication. I really think that if you just teach math as a rote mindless activity, it accomplishes nothing. Maybe the kids will remember it long enough to pass a test, but they really won’t have ‘learned’ anything. So I made an activity that enables students to explore the relationship between counting, adding, and multiplying. So you take a question like #4, with 5 houses having 6 windows each. They can count the windows to get 30, or they can do 6+6+6+6+6 to get 30, or they can do 6*5 (or 5*6 !) to get 30. This way if a kid ever forgets 6*5, he can just add 6+6+6+6+6 and re-derive it. That’s thinking critically! There were a few kids who raced through the thing, just using their memorized times tables, but I made them also turn each question into the related adding problem, just to force them to think of the relationship between those two math concepts. After answering the questions, they had to make their own examples. Some kids got really creative with them, which was great. Here’s the worksheet:
This is a Hubble Space Telescope view of one of the most dynamic and intricately detailed star-forming regions in space, located 210,000 light-years away in the Small Magellanic Cloud (SMC), a satellite galaxy of our Milky Way. At the center of the region is a brilliant star cluster called NGC 346. A dramatic structure of arched, ragged filaments with a distinct ridge surrounds the cluster. A torrent of radiation from the cluster's hot stars eats into denser areas creating a fantasy sculpture of dust and gas. The dark, intricately beaded edge of the ridge, seen in silhouette by Hubble, is particularly dramatic. It contains several small dust globules that point back towards the central cluster, like windsocks caught in a gale. Energetic outflows and radiation from hot young stars are eroding the dense outer portions of the star-forming region, formally known as N66, exposing new stellar nurseries. The diffuse fringes of the nebula prevent the energetic outflows from streaming directly away from the cluster, leaving instead a trail of filaments marking the swirling path of the outflows. The NGC 346 cluster, at the center of this Hubble image, is resolved into at least three sub-clusters and collectively contains dozens of hot, blue, high-mass stars, more than half of the known high-mass stars in the entire SMC galaxy. A myriad of smaller, compact clusters is also visible throughout the region. Some of these mini-clusters appear to be embedded in dust and nebulosity, and are sites of recent or ongoing star formation. Much of the starlight from these clusters is reddened by local dust concentrations that are the remnants of the original molecular cloud that collapsed to form N66. An international team of astronomers, led by Dr. Antonella Nota of the Space Telescope Science Institute/European Space Agency in Baltimore, has been studying the Hubble data. In an upcoming issue of Astrophysical Journal Letters the team reports the discovery of a rich population of infant stars scattered around the young cluster NGC 346. These stars are likely to have formed 3 to 5 million years ago, together with the other stars in the NGC 346 cluster. These infant stars are particularly interesting as they have not yet contracted to the point where their interiors are hot enough to convert hydrogen to helium. The Small and Large Magellanic Clouds are diffuse irregular galaxies visible to the naked eye in the southern hemisphere. They are two smallish satellite galaxies that orbit our own Milky Way Galaxy on a long slow journey inwards towards a future union with the Milky Way. Hubble has resolved many star formation regions in both of these neighboring galaxies that provide astronomers with laboratories other than our own Milky Way Galaxy to study how young stars interact with and shape their environments. The two satellites are named after the Portuguese seafarer Ferdinand Magellan (1480-1521) who sailed from Europe to Asia and is best known as the first person to lead an expedition to circumnavigate the globe. This image of NGC 346 and its surrounding star formation region was taken with Hubble's Advanced Camera for Surveys in July 2004. Two broadband filters that contribute starlight from visible and near-infrared wavelengths (shown in blue and green, respectively) have been combined with light from the nebulosity that has passed though a narrow-band hydrogen-alpha filter (shown in red). 
For more information, please contact: Antonella Nota, Space Telescope Science Institute/ESA, 3700 San Martin Drive, Baltimore, Md., (phone) 410-338-4520, (e-mail) [email protected], or Marco Sirianni, Space Telescope Science Institute/ESA, 3700 San Martin Drive, Baltimore, Md., (phone) 410-338-4810, (e-mail) [email protected], or Lars Lindberg Christensen, Hubble European Space Agency Information Center, Garching, Germany, (phone) +49-(0)89-3200-6306, (cell) +49-(0)173-3872-621, (e-mail) [email protected], or Ray Villard, Space Telescope Science Institute, Baltimore, Md., (phone) 410-338-4514, (e-mail) [email protected]
Object Name: NGC 346
Image Type: Astronomical
The way a specific cost reacts to changes in activity levels is called cost behavior. Costs may stay the same or may change proportionately in response to a change in activity. Knowing how a cost reacts to a change in the level of activity makes it easier to create a budget, prepare a forecast, determine how much profit a new product will generate, and determine which of two alternatives should be selected. Fixed costs are those that stay the same in total regardless of the number of units produced or sold. Although total fixed costs are the same, the fixed cost per unit changes as fewer or more units are produced. Straight‐line depreciation is an example of a fixed cost. It does not matter whether the machine is used to produce 1,000 units or 10,000,000 units in a month, the depreciation expense is the same because it is based on the number of years the machine will be in service. Variable costs are the costs that change in total each time an additional unit is produced or sold. With a variable cost, the per unit cost stays the same, but the more units produced or sold, the higher the total cost. Direct materials is a variable cost. If it takes one yard of fabric at a cost of $5 per yard to make one chair, the total materials cost for one chair is $5. The total cost for 10 chairs is $50 (10 chairs × $5 per chair) and the total cost for 100 chairs is $500 (100 chairs × $5 per chair). Graphically, the total fixed cost looks like a straight horizontal line while the total variable cost line slopes upward. The graphs for the fixed cost per unit and variable cost per unit look exactly opposite the total fixed costs and total variable costs graphs. Although total fixed costs are constant, the fixed cost per unit changes with the number of units. The variable cost per unit is constant. When cost behavior is discussed, an assumption must be made about operating levels. At certain levels of activity, new machines might be needed, which results in more depreciation, or overtime may be required of existing employees, resulting in higher per hour direct labor costs. The definitions of fixed cost and variable cost assume the company is operating or selling within the relevant range (the shaded area in the graphs) so additional costs will not be incurred. Some costs, called mixed costs, have characteristics of both fixed and variable costs. For example, a company pays a fee of $1,000 for the first 800 local phone calls in a month and $0.10 per local call made above 800. During March, a company made 2,000 local calls. Its phone bill will be $1,120 ($1,000 + (1,200 × $0.10)). To analyze cost behavior when costs are mixed, the cost must be split into its fixed and variable components. Several methods, including scatter diagrams, the high‐low method, and least‐squares regression, are used to identify the variable and fixed portions of a mixed cost, which are based on the past experience of the company. Scatter diagram. In a scatter diagram, all points would be plotted on a graph with activity (gallons of water used, in the example graph later in this section) on the horizontal axis and cost on the vertical axis. A line is drawn through the points and an estimate made for total fixed costs at the point where the line intersects the vertical axis at zero units of activity. To compute the variable cost per unit, the slope of the line is determined by choosing two points and dividing the change in their cost by the change in the units of activity for the two points selected.
For example, using data from the following example, if 36,000 gallons of water and 60,000 gallons of water were selected, the change in cost is $6,000 ($20,000 – $14,000) and the change in activity is 24,000 (60,000 – 36,000). This makes the slope of the line, the variable cost, $0.25 ($6,000 ÷ 24,000), and the fixed costs $5,000. See the graph to illustrate the point. High‐low method. The high‐low method divides the change in costs for the highest and lowest levels of activity by the change in units for the highest and lowest levels of activity to estimate variable costs. The high point of activity is 75,000 gallons and the low point is 32,000 gallons. The variable cost per unit is estimated to be $0.163. It was calculated by dividing $7,000 ($20,000 – $13,000) by 43,000 (75,000 – 32,000) gallons of water. Least‐squares regression analysis. The least‐squares regression analysis is a statistical method used to calculate variable costs. It requires a computer spreadsheet program (for example, Excel) or calculator and uses all points of data instead of just two points like the high‐low method.
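The arithmetic behind the high‐low split and the mixed-cost formula it produces is easy to sketch in code. The following Python example is a minimal illustration using the water-bill figures quoted above (32,000 gallons costing $13,000 and 75,000 gallons costing $20,000) and the phone-bill example from earlier in this section; the function names are illustrative, and the printed fixed-cost figure for the water example is implied by the method rather than quoted in the text.

```python
# High-low method: split a mixed cost into variable and fixed components
# using only the highest- and lowest-activity observations.

def high_low_split(low_units, low_cost, high_units, high_cost):
    """Return (variable cost per unit, total fixed cost)."""
    variable_per_unit = (high_cost - low_cost) / (high_units - low_units)
    fixed_cost = high_cost - variable_per_unit * high_units
    return variable_per_unit, fixed_cost

def predict_total_cost(units, variable_per_unit, fixed_cost):
    """Mixed-cost formula: total cost = fixed cost + variable rate x activity."""
    return fixed_cost + variable_per_unit * units

# Water-bill example from the text: low point 32,000 gal / $13,000,
# high point 75,000 gal / $20,000.
var_rate, fixed = high_low_split(32_000, 13_000, 75_000, 20_000)
print(round(var_rate, 3), round(fixed))   # ~0.163 per gallon, ~$7,791 fixed (implied)

# Hypothetical prediction for a 50,000-gallon month using those estimates.
print(round(predict_total_cost(50_000, var_rate, fixed)))

# Phone-bill example from the text: $1,000 covers the first 800 calls,
# then $0.10 per additional call; 2,000 calls -> $1,120.
calls = 2_000
bill = 1_000 + max(0, calls - 800) * 0.10
print(bill)                               # 1120.0
```

A least‐squares fit would use every observation rather than just the two extreme points, which is why the text recommends a spreadsheet program or statistical routine for that method.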
Authors: Dena Plemmons and Michael Kalichman, 2008
Case studies are a tool for discussing scientific integrity. Although one of the most frequently used tools for encouraging discussion, cases are only one of many possible tools. Many of the principles discussed below for discussing case studies can be generalized to other approaches to encouraging discussion about research ethics. Cases are designed to confront readers with specific real-life problems that do not lend themselves to easy answers. Case discussion demands critical and analytical skills and, when implemented in small groups, also fosters collaboration (Pimple, 2002). By providing a focus for discussion, cases help trainees to define or refine their own standards, to appreciate alternative approaches to identifying and resolving ethical problems, and to develop skills for analyzing and dealing with hard problems on their own. The effective use of case studies depends on many factors, including:
- appropriate selection of case(s) (topic, relevance, length, complexity)
- method of case presentation (verbal, printed, before or during discussion)
- format for case discussion (Email or Internet-based, small group, large group)
- leadership of case discussion (choice of discussion leader, roles and responsibilities for discussion leader)
- outcomes for case discussion (answers to specific questions, answers to general questions, written or verbal summaries)
Leading Case Discussions
For the sake of time and clarity of purpose, it is essential that one individual have responsibility for leading the group discussion. As a minimum, this responsibility should include:
- Reading the case aloud.
- Defining, and re-defining as needed, the questions to be answered.
- Encouraging discussion that is "on topic".
- Discouraging discussion that is "off topic".
- Keeping the pace of discussion appropriate to the time available.
- Eliciting contributions from all members of the discussion group.
- Summarizing both majority and minority opinions at the end of the discussion.
How should cases be analyzed?
Many of the skills necessary to analyze case studies can become tools for responding to real world problems. Cases, like the real world, contain uncertainties and ambiguities. Readers are encouraged to identify key issues, make assumptions as needed, and articulate options for resolution. In addition to the specific questions accompanying each case, readers might consider the following questions:
- Who are the affected parties (individuals, institutions, a field, society) in this situation?
- What interest(s) (material, financial, ethical, other) does each party have in the situation? Which interests are in conflict?
- Were the actions taken by each of the affected parties acceptable (ethical, legal, moral, or common sense)? If not, are there circumstances under which those actions would have been acceptable? Who should impose what sanction(s)?
- What other courses of action are open to each of the affected parties? What is the likely outcome of each course of action?
- For each party involved, what course of action would you take, and why?
- What actions could have been taken to avoid the conflict?
Is there a right answer?
Most problems will have several acceptable solutions or answers, but it will not always be the case that a perfect solution can be found. At times, even the best solution will still have some unsatisfactory consequences. While more than one acceptable solution may be possible, not all solutions are acceptable.
For example, obvious violations of specific rules and regulations or of generally accepted standards of conduct would typically be unacceptable. However, it is also plausible that blind adherence to accepted rules or standards would sometimes be an unacceptable course of action. - Bebeau MJ with Pimple KD, Muskavitch KMT, Borden SL, Smith DH (1995): Moral Reasoning in Scientific Research: Cases for Teaching and Assessment. Indiana University. - Elliott D, Stern JE (1997): Research Ethics - A Reader. University Press of New England, Hanover, NH. - Cases and Scenarios, Online Ethics Center for Engineering and Research, National Academy of Engineering - Foran J (2002): Case Method Website: Teaching the Case Method: Materials for a New Pedagogy, University of California, Santa Barbara. - Herreid CF: National Center for Case Study Teaching in Science, State University of New York at Buffalo. This comprehensive site offers methodology, a case study collection, case study teachers, workshops, and links to additional resources. - Korenman SG, Shipp AC (1994): Teaching the Responsible Conduct of Research through a Case Study Approach: A Handbook for Instructors. Association of American Medical Colleges, Washington, DC. - Macrina FL (2005): Scientific Integrity: An Introductory Text with Cases. 3rd edition, American Society for Microbiology Press, Washington, DC. - National Academy of Sciences (1995): On Being a Scientist: Responsible Conduct in Research. 2nd Edition. Publication from the Committee on Science, Engineering, and Public Policy, National Academy of Sciences, National Academy of Engineering, and Institute of Medicine. National Academy Press, Washington DC. - National Academy of Sciences (2009): On Being a Scientist: Responsible Conduct in Research. 3rd Edition. Publication from the Committee on Science, Engineering, and Public Policy, National Academy of Sciences, National Academy of Engineering, and Institute of Medicine. National Academy Press, Washington DC. - Penslar RL, ed. (1995): Research Ethics: Cases and Materials. Indiana University Press, Bloomington, IN. - Pimple KD (2002): Using Small Group Assignments in Teaching Research Ethics, The Poynter Center, Indiana University, Bloomington. - Schrag B, ed. (1996): Research Ethics: Cases and Commentaries, Volumes 1-6, Association for Practical and Professional Ethics, Bloomington, Indiana.