If your child is frequently ill, it may be due to a condition called neutropenia. Neutropenia is when your body doesn't have enough of a certain type of blood cell that helps fight infection.

The 30-second science lesson... In the blood, there are different kinds of cells, including white blood cells. The white blood cells are the ones that help fight infection, essentially the body's defense. When you get sick, the number of white cells goes up, trying to defend the body. Within the broad term of white blood cells, there are different subtypes of cells. One of those is called the neutrophil. Generally speaking, the neutrophil helps fight bacteria. Neutropenia is simply low levels of this particular type of white cell, the neutrophil. So if your neutrophil count is low, your body has a harder time fighting off infection.

Why does it happen? Neutropenia can happen when:
- The body uses up or destroys all the neutrophils it makes.
  - Example: Your child gets one really bad infection and has to use all of the neutrophils to fight it, thereby making him/her more susceptible to other infections.
  - Example: Your child's cells are destroyed during chemotherapy or radiation; or
- The body simply doesn't make enough to start with.
  - Example: Your child has an underlying autoimmune disorder.
  - Example: Your child has an underlying bone marrow disease like aplastic anemia, cancer, or leukemia.

Signs and symptoms of neutropenia
- Frequent infections essentially anywhere:
  - Ear infections
  - Sore throats
  - Sinus infections
  - Urinary tract infections
  - Skin infections, etc.

How is neutropenia diagnosed? The only way to diagnose neutropenia is by doing a blood test, specifically a CBC (or complete blood count) with a differential. The "differential" component means that the doctor is looking specifically at the various types of cells. You cannot look at a person and surmise that he/she has neutropenia.

What is an ANC?
The ANC is the "absolute neutrophil count." If your child has neutropenia, this number is one you will become very familiar with. It tells you how many actual neutrophils your child has circulating in the bloodstream. In medicine, we measure how severe the neutropenia is by that number: the fewer neutrophils there are (i.e., the lower the ANC), the more severe the neutropenia.

What do I do once diagnosed? There isn't a specific treatment for neutropenia itself (it's not like you can take a pill or get a transfusion of just neutrophils to boost the numbers). There are a few basics, though.
- Assess the clinical situation. If the condition was picked up incidentally, you may just wait and watch (allowing the body time to recover on its own). If there are other serious factors (e.g., life-threatening infections or cancers), you may be much more aggressive.
- Treat any symptoms or complications that result from neutropenia. For example, give an antibiotic for pneumonia or a urinary tract infection.
- Evaluate for underlying causes. For example, there may be a problem with the immune system in general or the bone marrow (which is responsible for making blood).
- Prevent complications and/or further infections. A child who has a weakened defense system needs to be kept out of the line of fire. Don't allow the child with neutropenia to be around sick people. Depending on the severity of the neutropenia (and what underlying conditions exist), it may be appropriate to keep your child home from preschool/school, daycare, and public places (like grocery stores).

Should I be worried? Yes and no. While freaking out never helped anyone, having a healthy respect for the serious nature of an illness is appropriate. Neutropenia can leave your child very vulnerable. While most children recover without any serious complications, there are kids who get very serious, life-threatening illnesses.
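For the numerically inclined, the ANC is simple arithmetic. Here is a small Python sketch of the standard formula (total WBC times the percentage of segmented neutrophils plus bands) with commonly cited severity cutoffs; the function names and example values are illustrative only, and a real CBC should always be interpreted by your doctor.

```python
# Sketch of how an ANC is computed from a CBC with differential.
# The formula and severity cutoffs (cells per microliter) are widely
# published, but this is illustrative -- not medical advice.

def absolute_neutrophil_count(wbc_per_ul, segs_pct, bands_pct):
    """ANC = total WBC x (% segmented neutrophils + % bands) / 100."""
    return wbc_per_ul * (segs_pct + bands_pct) / 100.0

def severity(anc):
    """Commonly cited cutoffs for mild, moderate, and severe neutropenia."""
    if anc >= 1500:
        return "normal"
    if anc >= 1000:
        return "mild"
    if anc >= 500:
        return "moderate"
    return "severe"

# Hypothetical example: WBC of 4000/uL with 30% segs and 5% bands
anc = absolute_neutrophil_count(4000, 30, 5)
print(anc, severity(anc))  # 1400.0 mild
```

The lower the computed ANC, the more severe the neutropenia, which is exactly the relationship described above.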
If you are worried that your child may have neutropenia, talk to your doctor. He/she can easily test for it and let you know if there is something worth worrying about.
Learning isn't just memorizing facts. It's discovering how to brainstorm and how to use that skill to solve problems. So before kids tackle anything else, they need brainstorming basic training. Here it is in five fun, action-packed steps. *Teachers, this is just what your students need to get ready for Next Generation Science Standards (NGSS) based skill-building.

1. Creative Observing
Children will power up their observational skills to collect as much information as possible. It's what they'll need to do when getting ready to creatively tackle any problem. Follow the recipe below to mix up a batch of some really cool stuff. I call it "Goop." Then check it out!

Recipe for Goop: Pour one cup of cornstarch into a shallow container, like a plastic storage box. Add two to four drops of green food coloring. Use a metal spoon to slowly stir in water, adding just a few tablespoons at a time, until the Goop feels solid when poked.

Now, find out everything you can about Goop. Examine it in every way you can think of. Then test Goop in each of these ways and observe closely:
- Try to pour it into an empty container.
- Try to make it change its shape. Try to mold it into a ball. Try to shape it into a cube.
- Poke it with a finger. Decide how it feels.
- Try to break it into two chunks. Can you? How does this change it? Take a close look at the edges of the broken pieces.
- Try to put the two chunks back together. Was it easy to do?
NOTE: When you're finished, throw the Goop away in the trash basket. Do not wash it down the sink, as it can clog pipes.

2. Brainstorming
Kids will be revving their brains with this one and learning what problem solving is like in real life. Read the following story aloud. Then have children work individually or in small groups to think of all the possible solutions to the problem. Add to the fun by challenging them to come up with a solution in just one minute.

Story: The angry natives are hot on Smitty's trail.
A heavy, sharp-tipped spear zips past his head and thumps into a tree trunk. He ducks and swerves off the trail and into the thick underbrush. Shoving leaves out of his way, Smitty charges through the jungle. Suddenly, Smitty jerks to a stop on the bank of a river. Water is roaring and churning around huge boulders that poke up like humps on a sea serpent's back. He's also pretty sure there's a crocodile lurking on the far bank. However, if he could just get across the river, the natives would probably let him go. How can Smitty get across the river? After one minute, challenge kids to brainstorm even harder by thinking about things that might keep some of these possible escapes from working. Then challenge them to pick Smitty's best possible escape option and tell why they believe it will work. 3. Creative Predicting Now it's time to make use of patterns. Children will discover how important it is to look for patterns and remember them. Prepare for this by placing something familiar such as popped popcorn, paperclips, or M&M candies in a paper bag and staple the top shut. Next, divide the class into small groups. Give each group a bag. Challenge them to use their senses one at a time to collect information about what's in the bag. After each observation, challenge them to use their past experiences and these observations to try and identify the mystery object. Have the children use their sense of hearing first. Have them shake the bag and listen. Next, have children gently poke and squeeze the bag. What do they observe this time? Does that make them want to change their earlier prediction about what's inside the bag? If so, what do they now think the mystery item might be? Challenge children to use their sense of smell this time and repeat the process. They'll need to make observations, consider what they may have smelled in the past that had that scent, and decide if they want to change their prediction. 
Finally, allow children to open the bag and use their sense of sight to collect observations and identify the mystery object.

This is another fun activity to get kids observing, inferring, and predicting. Give each group a different kitchen tool and challenge them to figure out what problem that tool was designed to solve. The weirder the tool, the better for this activity. Allow time for the groups to combine their brainpower on this. Then let each group display their kitchen tool, tell what use they predict it has, and explain why they came to this conclusion. Encourage all of the groups to discuss whether or not this is the most likely function of each tool before you share the real use.

4. Experimenting
Kids will be challenged to predict and test. It's fun now and will be a survival skill for life. Prepare for this activity by having partner groups fold paper airplanes. Have each group complete their plane by slipping a paperclip over the nose (narrow) end. Use tape on the floor in a long hall or a piece of rope outdoors on the playground as a starting line. Have the partner groups toss their plane and measure how far it flies. Then challenge the partners to change their plane so it will fly even farther. Before they leap into action, have the partners list all the changes they could make. Next, have them list the three changes that (based on past experiences and observing patterns) they think are most likely to be successful. Have them narrow this down to the one change they believe is likely to work best. Be sure they list why they think this will work. Have the partners also think about what variables (things that could change the outcome) they need to keep exactly the same as they test their modified plane. Finally, have the partners change their plane per their idea and test it three times. Did it work? Could something be changed to make it fly even farther? If so, what?
Have all of the groups compare their results and decide which modification worked best. Why did it? 5. Creative Evaluating Children will analyze results and think of other possibilities. They'll be pushing their brains into maximum action now! It's time for an activity that will take kids from creative thinking to inventing. Divide the class into partner groups again. This time have them examine a tennis shoe. Challenge them to list everything about the shoe that makes it good for the job it was made to do. Have them think how this tennis shoe could be changed to function even better. Next, have the partners come up with new features that might be added to the shoe to make it perform even better or do something that's totally new and wonderful. Encourage them to make diagrams of the shoe showing their proposed changes. Have each partner group describe the new and improved features they're proposing. If you have old tennis shoes available, you could let the partners collect materials and create a prototype model of their proposed improved shoe.
Human-engineered microbes are workhorses of the pharmaceutical and chemical industries, churning out biofuels, drugs, and many other products. But they can cause big problems if they become contaminated by other microbes or viruses or escape into the environment. Now, a new type of microbe that can survive only on artificial nutrients promises better security against such mishaps. The strategy, described in two papers in this week’s issue of Nature, might ultimately be used to control genetically engineered plants or other organisms released into the wild to create products or clean up pollution. Contamination is one of many risks involved in using engineered microbes to produce biological pharmaceuticals and high-value chemicals. Viruses, for example, can hijack bacteria and spoil a batch of drugs. "It can be disastrous," George Church of Harvard Medical School in Boston, who helped lead the new research, told reporters during a telephone briefing. Engineered microbes themselves can also accidentally end up in a product or the environment. That's why government regulators require that most engineered microbes be physically contained in sealed vats or other containers. In principle, a microbe could be confined more securely by modifying its genome so that it can reproduce only in the presence of certain nutrients or chemicals. But microbes can usually evolve to get around these obstacles. Now, Church and colleagues say they’ve overcome those issues by redesigning the Escherichia coli genome so that the microbe depends on a synthetic amino acid to create proteins necessary for survival and reproduction. When the microbes are grown without this synthetic nutrient, their genetic machinery grinds to a halt and they die. The modification also prevents viruses from contaminating the culture, as they can’t replicate inside the altered microbes. Farren Isaacs, a former postdoc in Church’s lab now at Yale University, describes similar results in the same issue of Nature. 
To lower the chances that any of the engineered microbes can mutate and survive without the special diet, the groups altered three genes to require the synthetic amino acids. "It really adds increasing layers of security onto this system," says Tom Ellis, a synthetic biologist at Imperial College London, who was not involved in the research. Neither group has yet detected any successful mutations in the microbes. "They’re opening a door into a completely new area for investigation in biosafety," adds Markus Schmidt, a biosafety expert and consultant in technology assessment at Biofaction KG in Vienna. Several steps remain before the microbes are ready for prime time. One important question is the cost of the synthetic amino acids used to feed the engineered microbes. For example, Ellis says that the amino acid used in Isaacs's experiment would be prohibitively expensive for most commercial applications. Church engineered microbes to require a cheaper synthetic amino acid. *Correction, 26 January, 10:07 a.m.: A previous version of this story incorrectly stated that Church’s microbes reproduced less quickly than those in the experiments by Isaacs. In addition, neither group measured the ability of the microbes to produce chemicals.
This course provides an outline of the physical processes that control how watersheds function; it provides the necessary geophysical link with biology required to successfully plan, undertake, and complete ecological restoration. Both terrestrial and fluvial processes are considered. Because these processes require an understanding of general geoscience principles, this course includes a selected basic introduction to earth science concepts. The first section of the course covers general earth science principles leading into terrain assessment, including a wide range of terrain attributes, with mapping and related interpretations such as landslide and erosion hazards from the point of view of the map user and according to current provincial (British Columbia) standards. Topics covered include an overview of watershed assessment approaches, morphometry, hydrogeological concepts, surficial materials and landforms, principles of soil physical behaviour (e.g., drainage and strength), terrain map symbols, terrain survey intensity levels, engineering characteristics of surficial materials (soils), landslide and other slope processes, and the reliability and limitations of terrain and slope stability mapping. The second section, dealing with fluvial processes, covers applicable provincial and federal legislation as well as the collection and interpretation of stream channel data. Other topics will include the provincial Channel Assessment Procedure, the effects of land use on stream channel, gully, and alluvial fan morphology, and channel restoration strategies. This course is reserved for the Ecological Restoration program; students require department approval prior to registration. For registration, please contact Giti Abouhamzeh at [email protected] or call 778-331-1392.
Upon successful completion of this course, the student will be able to:
- Understand basic geoscience principles as they apply to ecological restoration.
- Understand the factors that control watershed diversity and the biophysical processes that must be considered in ecological restoration activities.
- Identify watershed landforms that relate to restoration planning and effectiveness.
- Understand terrain maps, terrain stability maps, and other interpretive maps, and be aware of how these maps should and should not be used in a restoration context.
- Interpret terrain map symbols with the aid of a map legend (BC Terrain Classification System).
- Conceptualize the physical characteristics of the common surficial materials and demonstrate how these relate to the material's original mode of deposition.
- Assess how surficial material characteristics and properties are related to landforms and their significance with regard to land use activities.
- Recognize the common geomorphic processes (e.g., debris flows, snow avalanches) related to different watersheds, and be aware of the potential effects of these processes on land use and the potential influence of land use on these processes.
- Classify the different types of geomorphic processes and describe the chief controls on slope stability.
- Outline simple interpretations for geological hazards.
- Evaluate how terrain information can be used to make decisions such as road and cutblock location.
- Interpret air photos under a stereoscope and recognize features shown on a terrain map of the same area.
- Outline the general stream channel morphologies and understand the importance of watershed characteristics in controlling morphologies.
- Summarize the stream channel inventory procedure.
- Identify and interpret the various mapping conventions for stream channel characteristics.
- Compare the various types of stream channel restoration strategies that are available and evaluate their relative effectiveness in a watershed context.
- Develop prescriptive actions for the restoration of forestry-harvested and urban-disturbed watercourses.
- Apply the Stream Channel Assessment Procedure to selected examples of natural watercourses.

Effective as of Winter 2013, RENR 8201 (Terrain and Stream Channel Assessment for Ecological Restoration) is offered as a part of the following programs:
As basic as reading is, how much do we know about it? How and why exactly do people read?

How We Read
For starters, our eyes have three vision ranges: the fovea, the area at the centre of the retina; the parafovea, which extends about 5 degrees on either side of a fixation; and the periphery, which is everything else. Peripheral vision is unclear and not detailed, but it picks up color and movement. The fovea, however, picks up detail well and is critical for reading. Most of what we understand clearly when reading happens in the foveal area, while a letter or two on either side is taken in by the parafoveal area.

We also use our working memory when reading. Research suggests that our working memory manages about four distinct "chunks" at a time, a chunk being a bundle of information connected through some meaning. Chunks are smaller for new or difficult material. Generally, your brain can only handle so much in a certain period, which means reading too fast leads to minimal comprehension. Pauses for comprehension take about 300 to 500 milliseconds on average.

Why We Love to Read
Most people can read, but not many love doing it. Bookworms have an insatiable passion for one reason or another. Some pick up books for personal enlightenment, entertainment, or professional growth. Some read to beat boredom, while others read to enhance their vocabulary and communication skills. And the list of reasons why people read just goes on (counseling aid, brain exercise, virtual travel, etc.). To give you a more exact picture, here are statistics about why people read:
- 26% of those who read simply enjoy learning, gaining knowledge, and discovering information.
- 15% cited escaping reality, immersing themselves in another world, and being able to use their imagination as their reason for taking pleasure in reading.
- 12% said that reading has high entertainment value while noting the drama in good stories and the suspense of discovering how a plot unfolds.
- 12% equate reading to relaxation and finding their quiet time.
- 6% love the variety of topics that books offer.
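The comprehension-pause numbers mentioned above can be turned into a rough reading-rate estimate. This short Python sketch naively assumes one 300-500 ms pause per word; real readers skip words and chunk text, so this is only a back-of-the-envelope illustration:

```python
# Ballpark reading rate assuming one comprehension pause per word,
# using the 300-500 ms pause range cited above. Purely illustrative.

def words_per_minute(pause_ms):
    """Convert a per-word pause (milliseconds) into words per minute."""
    return 60_000 / pause_ms

fast = words_per_minute(300)  # shorter pauses -> faster reading
slow = words_per_minute(500)  # longer pauses -> slower reading
print(f"roughly {slow:.0f} to {fast:.0f} words per minute")
```

Under this naive assumption the rate lands in the low hundreds of words per minute, which is why speed-reading far beyond that tends to cost comprehension.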
A common misconception about the history of mental illness is that before Freud and psychoanalysis, there was no such thing as talk therapies or what is commonly known today as psychotherapy. Confinement (eg, cages, chains, and straitjackets), sedation (eg, opium and bromides), and somatic interventions (eg, bindings, purging, and electric stimulation) dominate the imagined landscape of premodern treatments for mental disorders. This picture is not altogether inaccurate—but it is incomplete. In large measure, this prevailing image is why so many today continue to associate the term “madness” with coercion and violence. Yet the history of madness is replete with examples of noninvasive, noncoercive forms of treatment.1 Music, for instance, was recommended as a treatment for relieving symptoms since ancient times.2 During the 18th and early 19th centuries, Philippe Pinel (1745-1826), William Tuke (1732-1822), Francis Willis (1718-1807), and many others developed and refined the so-called moral treatment, intended as a way of healing lunacy by appealing to a patient’s intellectual and emotional faculties.3,4 And throughout the 19th and early 20th centuries, hypnotism and suggestion were widely used by neurologists, psychiatrists, and healers of different kinds for treating a wide variety of ailments.5 One of the least acknowledged, yet most enduring, forms of psychological treatment was psychagogy. Dating back to ancient Greece and Palestine, psychagogy remained a mainstream therapeutic method and profession until the term largely fell into disuse during the 1970s and 1980s. It remains unclear (to me at least) whether the “disappearance” of the field of psychagogy was a function of its having lost out to competing therapies or whether its methods and ideas were simply appropriated by and folded into other fields. Whatever the case is, however, today the term is likely unfamiliar to all except for those versed in Christian theology. So what was psychagogy? 
The term comes from ancient Greek philosophy, in which Plato used it to refer to “the manner of leading the soul through words.”6 He contrasted psychagogy, or “guidance of the soul” toward self-knowledge, with deceptive uses of the art of persuasion (rhetoric). Centuries later, the idea was taken up by Paul of Tarsus and early Christian thinkers, who relied on psychagogic techniques in writing the New Testament.7 And as historian Paul Dilley has recently shown, ancient Christian monks developed a form of ascetic psychagogy, by which disciples were trained in self-improvement through various “stages of advice, discipline, and emotional support.”8 Psychagogy mostly retained its association with techniques aimed at moral self-improvement until the turn of the 20th century. Then, in the 1920s, French and German specialists began to incorporate psychagogic methods in their work with character psychology, psychoanalysis, hypnosis, and general psychotherapeutic practice. In 1924, an International Institute for Psychagogy and Psychotherapy was founded by the Swiss psychoanalyst Charles Baudouin. A mix of purposes, methods, and clients would come to distinguish psychagogy throughout the rest of the century. Taking on influences from depth psychology, pedagogy, developmental psychology, social psychology, and casework, practitioners utilized a variety of individual and group methods in helping adults and children improve quality of life and better adjust to their social circumstances. Work therapy, directive and nondirective conversation, organized group activities, occupational therapy, conflict resolution, and therapeutic community were used in different measure by psychagogues.9 During the 1950s and 1960s, a professional identity was maintained through conferences and training programs in West Germany. 
There, psychagogy increasingly drew on the fields of special education, social work, and psychoanalysis to carve out a role working with emotionally disturbed adolescents in both inpatient and outpatient settings.10 Today, the influence of psychagogy is still discernible in pastoral counseling. For the most part, however, it is a term that largely prompts quizzical looks on the faces of those to whom it is mentioned. Too bad, really. Given its remarkable resilience, modern psychagogy warrants the same kind of attention from historians of psychiatry as its ancient counterpart has received from philosophy and religious studies scholars. 1. Jackson SW. Care of the Psyche: A History of Psychological Healing. New Haven, CT: Yale University Press; 1999. 2. MacKinnon D. Music, madness and the body: symptom and cure. Hist Psychiatry. 2006;17(65, pt 1):9-21. 3. Charland LC. Benevolent theory: moral treatment at the York Retreat. Hist Psychiatry. 2007;18:61-80. 4. Weiner DB. Alienists, treatises, and the psychologic approach in the era of Pinel. In: Wallace ER IV, Gach J, eds. History of Psychiatry and Medical Psychology. New York: Springer Science + Business Media; 2008:281-303. 5. Gauld A. A History of Hypnotism. New York: Cambridge University Press; 1992. 6. Glad CE. Paul and Philodemus: Adaptability in Epicurean and Early Christian Psychagogy. Supplements to Novum Testamentum. Leiden, the Netherlands: EJ Brill; 1995:17. 7. Sterling G. Hellenistic moral philosophy and the New Testament. In: Porter SE, ed. Dictionary of Biblical Criticism and Interpretation. New York: Routledge; 2007:153. 8. Dilley P. Care of the Other in Ancient Monasticism: A Cultural History of Ascetic Guidance [dissertation]. New Haven, CT: Yale University; 2008. 9. Schraml W. Die psychagogischen Methoden. Z Klin Psychol Psychother. 1958;6:304-312. 10. Knöll H. Psychagogik—Gedanken zur Begriffsbestimmung. Prax Kinderpsychol Kinderpsychiatr. 1968;17:155-157.
"Sentence patterns" is just another way to talk about the way a sentence is put together: the order of the elements in the sentence; sentence construction. Some sources say there are six English sentence patterns; some say eight. A few sources list even more. Here are the ones we feel are the most common, and the easiest to recognize:

1. Subject + Verb (S-V)
This is the simplest kind of sentence. It consists of a subject, a verb, and possibly some adjectives, adverbs, or prepositional phrases. There are no direct objects, indirect objects, or complements.
Abraham speaks fluently. (subject, verb, adverb)
Many of the class members write well in class. (subject, verb, adverbs) (The "complete" subject is "Many of the class members"--a noun phrase.)

2. Verb + Subject (V-S)
Sentences in English usually have the subject come first, followed by the verb. But when a sentence begins with there is, there was, there are, or there were, the verb comes first, followed by the subject. The word There is never a subject!
There is a strange shadow in the woods. (verb, subject--the complete subject is the noun phrase a strange shadow, adverb)
There were no leftovers after the buffet. (verb, subject, adverb)

3. Subject + Verb + Direct Object (S-V-DO)
Andrew composes music. (subject, verb, direct object)
Matthew helps others in several English practice rooms. (subject, verb, direct object, adverb)
Helen tells jokes to make people smile. (subject, verb, direct object, adverb)

4. Subject + Verb + Complement (S-V-SC)
A complement is a word or group of words that describes or renames the subject. Complements follow a linking verb. There are two kinds of subject complements: 1) the predicate nominative, a noun or pronoun that renames or classifies the subject of the sentence, and 2) the predicate adjective, an adjective that describes the subject of the sentence.
Mother looks tired. (subject, verb, complement--predicate adjective)
Some students in the class are engineers.
(the noun phrase Some students in the class is the complete subject, verb, complement--predicate nominative)
The men are handsome, the women are clever, and the children are above-average. (compound sentence of three independent clauses, so three subjects, three verbs, three complements--all predicate adjectives)

5. Subject + Verb + Indirect Object + Direct Object (S-V-IO-DO)
An indirect object tells for whom or to whom. If the indirect object comes after the direct object (in a prepositional phrase "to ________" or "for _______"), the sentence pattern is shown as S-V-DO-IO. Pronouns are usually (but not always) used as indirect objects.
I sent her a birthday present. (subject, verb, indirect object, direct object)
Jay gave his dog a bone. (subject, verb, indirect object, direct object)
Granny left Gary all of her money. (subject, verb, indirect object, direct object)
Granny gave every last asset to Gary. (subject, verb, direct object, indirect object in a prepositional phrase)

6. Subject + Verb + Direct Object + Object Complement (S-V-DO-OC)
This pattern isn't as common as the others, but it is used. An object complement is a word or group of words that renames, describes, or classifies the direct object. Object complements are nouns or adjectives and follow the object.
Debbie left the window open during the rain storm. (subject, verb, direct object, object complement, adverb)
The class picked Susie class representative. (subject, verb, direct object, object complement)

Some patterns in using clauses:
1. Independent clause: We are happy about the approaching holiday season.
2. Two independent clauses joined with a coordinating conjunction: We are happy about the approaching holiday season, and we look forward to a prosperous new year.
3. Two independent clauses, with no conjunction: We are happy about the approaching holiday season; we look forward to a prosperous new year.
4.
Two independent clauses with an independent marker (therefore, moreover, thus, consequently, however, and also are some): We are happy about the approaching holiday season; furthermore, we look forward to a prosperous new year.
5. Dependent marker (because, since, while, although, if, until, when, as, after, and then are some), dependent clause, independent clause: Because we are happy about the approaching holiday season, we are planning many parties and gatherings with friends.
6. Independent clause, dependent marker, dependent clause: We are planning many parties and gatherings with friends, because we are happy about the approaching holiday season.
7. First part of an independent clause, unneeded clause or phrase, the rest of the independent clause: We are planning many parties and gatherings, including formal and informal, with friends.
8. First part of an independent clause, essential clause or phrase, the rest of the independent clause: We who are happy about the approaching holiday season are planning many parties and gatherings, formal and informal, with friends.
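For readers who like to tinker, the six basic patterns can be represented as a tiny lookup table. This Python sketch is purely illustrative (real sentence parsing is far more involved), and the labels and function name are my own:

```python
# Toy lookup mapping a labeled sequence of sentence elements to the
# pattern names used above. Labels: S = subject, V = verb,
# DO = direct object, IO = indirect object, SC = subject complement,
# OC = object complement. Illustrative only.

PATTERNS = {
    ("S", "V"): "Subject + Verb (S-V)",
    ("V", "S"): "Verb + Subject (V-S)",
    ("S", "V", "DO"): "Subject + Verb + Direct Object (S-V-DO)",
    ("S", "V", "SC"): "Subject + Verb + Complement (S-V-SC)",
    ("S", "V", "IO", "DO"):
        "Subject + Verb + Indirect Object + Direct Object (S-V-IO-DO)",
    ("S", "V", "DO", "OC"):
        "Subject + Verb + Direct Object + Object Complement (S-V-DO-OC)",
}

def identify(elements):
    """Return the pattern name for a labeled element sequence."""
    return PATTERNS.get(tuple(elements), "unrecognized pattern")

# "Jay gave his dog a bone." -> subject, verb, indirect object, direct object
print(identify(["S", "V", "IO", "DO"]))
```

Labeling a few of the example sentences above and running them through `identify` is a quick way to check your answers.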
Tsunamis are giant sea waves caused by earthquakes or volcanic eruptions under the sea. Tsunami waves do not rise suddenly to great heights over the deep ocean; as they travel towards land, they build up higher and higher as the depth of the ocean decreases. Over deep water, tsunami waves may travel as fast as jet planes, slowing down only when they reach shallow water. A tsunami is almost always a disaster for humans. Therefore, understanding the true occurrence of tsunamis is crucial for assessing present risk and finding appropriate protective measures for densely populated coastal areas.

It is a tough task to discriminate between tsunamis and storms. A review of geological research on tsunamis in the Mediterranean Sea during the past 4,500 years has revealed that almost 90% of these inundation events may have been misinterpreted by scientists and were due to storm activity instead. It has been found that the risk from tsunamis could have been significantly overstated in the Mediterranean region. Records of 135 past events in eight Mediterranean countries that scientists had identified as tsunamis were studied; many were actually not tsunamis but only severe storms. For areas where large populations reside, accurate forecasting of any calamity is needed, so that the correct measures can be applied at the right time.

By: Anita Aishvarya
SPEAKING AND LISTENING: Integrated Activities for Pupils at Key Stage 4. By Chris Phillips. Folens. £29.95. DRAMA ACTIVITIES FOR KEY STAGE 3. By Jan Ashcroft and Leonie Pearce. Folens. £29.95. "Be careful what you wish for," says the old man in the story, "for your wish may be granted." This is supremely true of the status now given to speaking and listening in the national curriculum. For many years, English teachers rightly battled for this most crucial of skills to be given a higher priority. However, now that their wish has been granted, they have encountered huge problems in providing the contexts in which speaking and listening can be fostered. Trying to find a range of audiences in school stretches ingenuity and timetabling skills almost to breaking point, while setting up stimuli for a range of activities often drives teachers to the paradox of again and again giving students written materials about which to talk. Neither of these fundamental problems is solved in Speaking and Listening: Integrated Activities for Pupils at Key Stage 4, nor can they be, since this is a folder of worksheets heavily dependent on student role-play. However, within the real-life constraints of school, this is a useful resource for teachers, with sound self-assessment sheets, well-differentiated activities, and a solid sense of how the materials will work in the classroom. While oral work has, for the moment at least, a secure place in the national curriculum, drama is under considerable threat. However, Drama Activities for Key Stage 3 is much more than just the rearguard action of a subject in retreat. The introduction not only presents a clear rationale for the place of drama in English teaching, but it also shows how even the least experienced teacher can set up constructive activities in the classroom. Drama can be an invaluable way of enabling even the wariest readers to find their way in what is increasingly a text-dominated subject.
With rigorous activities such as these to draw on, teachers will be well placed to continue to use drama to inform and enhance their students' learning. Sarah Matthews is a former head of English at Chipping Norton School, Oxfordshire
Posted on 2014-2-12 10:49:09

Tiny discrepancies between the GPS receiver's onboard clock and GPS time, which synchronizes the whole global positioning system, mean calculated distances can drift. There are two solutions to this problem. The first would be to use an atomic clock in each receiver, costing $100,000. The second is to use some clever mathematical trickery to account for the time-keeping error, based on how the signals from three or more satellites are detected by the receiver, which essentially allows the receiver to reset its clock. There is also an intrinsic error source in GPS associated with the way the system works. GPS receivers analyze three signals from satellites in the system and work out how long it has taken each signal to reach them. This allows them to carry out a trilateration calculation to pinpoint the exact location of the receiver. The signals are transmitted by the satellites at a specific rate. Unfortunately, the electronic detector in standard GPS devices is accurate to just 1 percent of a bit time. This is approximately 10 billionths of a second (10 nanoseconds). Given that GPS microwave signals travel at the speed of light, this equates to an error of about 3 metres. So standard GPS cannot determine position to better than 3-metre accuracy. More sophisticated GPS receivers used by the military are ten times more accurate, to within 300 millimetres.
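The 3-metre figure can be checked with a one-line calculation: the ranging error is simply the timing error multiplied by the speed of light. A minimal sketch (the function name is my own, not from any GPS library):

```python
# Position error caused by timing uncertainty in a GPS receiver.
# The signal travels at the speed of light, so a timing error of
# dt seconds translates into a ranging error of c * dt metres.

C = 299_792_458.0  # speed of light in m/s

def ranging_error(dt_seconds: float) -> float:
    """Distance error (metres) for a given timing error (seconds)."""
    return C * dt_seconds

# Standard receiver: ~1% of a bit time, about 10 nanoseconds
standard = ranging_error(10e-9)   # ~3 metres
# Military-grade receiver: ten times more accurate, ~1 nanosecond
military = ranging_error(1e-9)    # ~0.3 metres (300 mm)

print(f"standard: {standard:.2f} m, military: {military:.2f} m")
```

This makes it clear why shaving nanoseconds off the clock error matters: every nanosecond of timing uncertainty costs roughly 30 centimetres of position accuracy.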
When a voltage is applied to the circuit, current from the battery flows through coil L1 and to the emitter through RE. Current then flows from the emitter to the collector and back to the battery. The surge of current through coil L1 induces a voltage in coil L2 to start oscillations within the tank circuit. When current first starts to flow through coil L1, the bottom of L1 is negative with respect to the top of L2. The voltage induced into coil L2 makes the top of L2 positive. As the top of L2 becomes positive, the positive potential is coupled to the base of Q1 by capacitor C1. A positive potential on the base results in an increase of the forward bias of Q1 and causes collector current to increase. The increased collector current also increases the emitter current flowing through coil L1. Increased current through L1 results in more energy being supplied to the tank circuit, which, in turn, increases the positive potential at the top of the tank (L2) and increases the forward bias of Q1. This action continues until the rate of current change through coil L1 can no longer increase. The current through coil L1 and the transistor cannot continue increasing indefinitely, or the coil and transistor will burn up. The circuit must be designed, by proper selection of the transistor and associated parts, so that some point is reached when the current can no longer continue to increase. At this point C2 has charged to the potential across L1 and L2. This is shown as the heavy dot on the base waveform. As the current through L1 decreases, the voltage induced in L2 decreases. The positive potential across the tank begins to decrease and C2 starts discharging through L1 and L2. This action maintains current flow through the tapped coil and causes a decrease in the forward bias of Q1. In turn, this decrease in the forward bias of Q1 causes the collector and emitter current to decrease. 
At the instant the potential across the tank circuit decreases to 0, the energy of the tank circuit is contained in the magnetic field of the coil. The oscillator has completed a half cycle of operation. Next, the magnetic field around L2 collapses as the current from C2 stops. The action of the collapsing magnetic field causes the top of L2 to become negative at this instant. The negative charge causes capacitor C2 to begin to charge in the opposite direction. This negative potential is coupled to the base of Q1, opposing its forward bias. Most transistor oscillators are operated class A; therefore, the positive and negative signals applied to the base of Q1 will not cause it to go into saturation or cutoff. When the tank circuit reaches its maximum negative value, the collector and the emitter currents will still be present but at a minimum value. The magnetic field will have collapsed and the oscillator will have completed 3/4 cycle. At this point C2 begins to discharge, decreasing the negative potential at the top of L2 (potential will swing in the positive direction). As the negative potential applied to the base of Q1 decreases, the opposition to the forward bias also decreases. This, in effect, causes the forward bias to begin increasing, resulting in increased emitter current flowing through L1. The increase in current through L1 causes additional energy to be fed to the tank circuit to replace lost energy. If the energy lost in the tank is replaced with an equal or larger amount of energy, oscillations will be sustained. The oscillator has now completed 1 cycle and will continue to repeat it over and over again. Shunt-Fed Hartley Oscillator A version of a SHUNT-FED HARTLEY OSCILLATOR is shown in figure 2-14. The parts in this circuit perform the same basic functions as do their counterparts in the series-fed Hartley oscillator. The difference between the series-fed and the shunt-fed circuit is that dc does not flow through the tank circuit. 
The shunt-fed circuit operation is essentially the same as the series-fed Hartley oscillator. When voltage is applied to the circuit, Q1 starts conducting. As the collector current of Q1 increases, the change (increase) is coupled through capacitor C3 to the tank circuit, causing it to oscillate. C3 also acts as an isolation capacitor to prevent dc from flowing through the feedback coil. The oscillations at the collector will be coupled through C3 (feedback) to supply energy lost within the tank.
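Neither description above states the oscillation frequency explicitly. In both the series-fed and shunt-fed Hartley circuits it is set by the tank components, and can be estimated with the standard LC resonance formula. The sketch below ignores mutual inductance between the two coil sections, and the component values are illustrative assumptions, not values from the text:

```python
import math

def hartley_frequency(l1_henries: float, l2_henries: float, c_farads: float) -> float:
    """Approximate oscillation frequency of a Hartley tank circuit in Hz.

    Uses f = 1 / (2*pi*sqrt(L_total * C)) with L_total = L1 + L2,
    neglecting mutual inductance between the two coil sections.
    """
    l_total = l1_henries + l2_henries
    return 1.0 / (2.0 * math.pi * math.sqrt(l_total * c_farads))

# Illustrative values: two 100 uH coil sections and a 100 pF tank capacitor
f = hartley_frequency(100e-6, 100e-6, 100e-12)
print(f"{f / 1e6:.3f} MHz")
```

With these assumed values the tank oscillates at roughly 1.1 MHz; in a real circuit, mutual coupling between the coil halves raises the effective inductance and lowers the frequency somewhat.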
The record-breaking drought in Texas that has fueled wildfires, decimated crops, and forced the sale of cattle herds has also reduced levels of groundwater to the lowest levels observed in more than 63 years. Groundwater is moisture trapped in pores in the soil and in underground gaps in rock, often known as aquifers. The map above depicts the amount of groundwater stored underground in the continental United States on November 28, 2011, as compared to the long-term average from 1948 to 2011. Deep reds reveal the most depletion, with deep blues representing aquifers and soils that are nearly full. The maroon shading over eastern Texas, for example, shows that the ground has been this dry less than two percent of the time between 1948 and the present. At the end of November 2011, groundwater supplies were extremely depleted in more than half of Texas, as well as parts of New Mexico, Louisiana, Alabama, and Georgia. The northeastern states and the High Plains appear saturated with water heading into winter months. “Texas groundwater will take months or longer to recharge,” said Matt Rodell, a hydrologist based at NASA’s Goddard Space Flight Center, who worked together with partners at the National Drought Mitigation Center at the University of Nebraska-Lincoln. “Even if we have a major rainfall event, most of the water runs off. It takes a longer period of sustained, greater-than-average precipitation to recharge aquifers significantly.” The map is based on data from the twin satellites of the Gravity Recovery and Climate Experiment (GRACE), which detects small changes in Earth’s gravity field that are caused by the redistribution of water on and beneath the land surface. Scientists used a sophisticated computer model that combines measurements of water storage observed by GRACE with a long-term meteorological data set to generate a continuous record of soil moisture and groundwater stretching back to 1948. (GRACE has been recording data since 2002.)
The meteorological data include precipitation, temperature, solar radiation and other ground- and space-based measurements. “These maps would be impossible to generate using only ground-based observations,” said Rodell. “There are groundwater wells all around the United States and the U.S. Geological Survey does keep records from some of those wells, but it's not spatially continuous and there are some big gaps.” The GRACE mission is a partnership between NASA and the Deutsche Forschungsanstalt für Luft und Raumfahrt (DLR) in Germany. The paired satellites travel about 137 miles (220 km) apart and record small changes in the distance separating them as they encounter variations in Earth's gravitational field. - National Drought Mitigation Center. (2011, November 28). Groundwater and Soil Moisture Conditions from GRACE Data Assimilation. Accessed November 30, 2011. - NASA Jet Propulsion Laboratory. (2009, December 14). NASA Data Reveal Major Groundwater Loss in California. Accessed November 30, 2011. - NASA. (2009, August 12). NASA Satellites Unlock Secret to Northern India's Vanishing Water. Accessed November 30, 2011. Image created by Chris Poulsen, National Drought Mitigation Center at the University of Nebraska-Lincoln, based on data from Matt Rodell, NASA Goddard Space Flight Center, and the GRACE science team. Caption adapted from work by Kelly Helm Smith, National Drought Mitigation Center, and Adam Voiland, NASA's Earth Science News Team.
The canvasback feeds almost entirely by diving to consume the leaves, roots and seeds of aquatic plants. It usually dives to depths of around 2 metres when feeding, and remains submerged for 10 to 20 seconds, but will sometimes dive to depths of 9 metres (3). The canvasback will occasionally also feed at the water surface, either grabbing food items from the surface, or upending and submerging the head underwater. This species also eats a variety of insects, crustaceans and small fish (8). Gregarious for most of the year, except when breeding, the canvasback is often seen foraging in large flocks. At migration stopover sites, extremely large flocks of over 1,000 individuals are often seen (3). Pair bonds are established during the spring northward migration, which commences in early February. Breeding birds arrive at the nesting grounds around early April, with the females often returning to the same site to breed each year (5) (9). The female canvasback builds the nest, which is a bulky structure built on a mat of floating dead plants or suspended from emergent vegetation (3). Usually, 9 or 10 eggs are laid, and are incubated for around 24 days (2). The male canvasback usually abandons the female during incubation to gather with other males at moulting grounds and begin the southward migration (9). The chicks, which have brownish upperparts and yellowish underparts, can fly at 63 to 77 days and reach sexual maturity at a year old (2). Most female and juvenile canvasbacks begin the southward migration in early September and arrive on the wintering grounds around late November (9).
Structural Biochemistry/Protein function/Myosin

Myosins are a large super-family of motor proteins that move along actin filaments, hydrolyzing ATP to produce mechanical energy that can be used for a variety of functions such as muscle movement and contraction. About 20 classes of myosin have been distinguished on the basis of the sequence of amino acids in their ATP-hydrolyzing motor domains. The different classes of myosin also differ in the structure of their tail domains. Tail domains have various functions in different myosin classes, including dimerization and other protein-protein interactions.

Myosin is a common protein found in muscle and is responsible for making the muscle contract and relax. It is a large, asymmetric molecule, with one long tail and two globular heads. When dissociated, it separates into six polypeptide chains. Two of them are heavy chains, which are wrapped around each other to form a double helical structure, and the other four are light chains. One main characteristic of myosin is its ability to bind very specifically to actin. When myosin and actin act together, the muscle produces force.

Sarcomeres and the Sliding Filament Theory

Skeletal muscles are responsible for voluntary movement. Skeletal muscles contain many muscle fibers, and these muscle fibers are made up of myofibrils, bundles of thick myosin filaments and thin actin filaments. Myofibrils are constructed and lined up in a chain-like formation to create what are called sarcomeres. Sarcomeres contain several regions. One region, called the A-band, consists of myosin filaments. The counterpart of the A-band is the I-band, which contains only actin filaments. The ends of each sarcomere are called Z discs. A middle region of each sarcomere, called the H-zone, contains only myosin.
According to the sliding filament theory of Andrew Huxley and Ralph Niedergerke, muscles contract when Z-discs come closer together, thus shortening the sarcomeres. The actin filaments of the I-bands become very short, while the myosin filaments of the A-bands do not change in length. The actin filaments slide towards the H-zone and the A-bands, creating an overlap of myosin and actin filaments. As this overlap occurs, myosin filaments bind to the actin filaments, allowing myosin to function as the driving motor of filament sliding. This relative movement between myosin and actin is what results in muscle contraction. The molecular basis for muscle action and contraction is explained in the next section.

Mechanism of muscle movement

This mechanism of contraction is also called "The Sliding Filament Theory."

- ATP binding to the myosin head causes it to release actin; the head is in its low-energy conformation.
- The active site closes and ATP is hydrolyzed to ADP and Pi. This induces a conformational change (cocking of the head), resulting in myosin weakly binding to actin. This forms a cross-bridge.
- Pi release results in a conformational change that leads to stronger myosin binding, and the power stroke.
- ADP dissociation leaves the myosin head tightly bound to actin.
- Binding of a new molecule of ATP to the myosin head triggers it to let go of actin, and the cycle starts all over again.
- In the absence of ATP, this tightly bound state results in the muscle rigidity called rigor mortis.

Different Types of Myosin

Myosins are divided into classes of motor proteins that move along actin filaments while hydrolyzing ATP. About 20 types of myosin have been distinguished by amino acid sequence. The types differ in the structure of their tail domains, which determine characteristics such as dimerization and protein-protein interactions. The best-known classes are Myosin I, Myosin II, Myosin V and Myosin VI.
Among the proteins whose genes have been linked to deafness are several types of myosin. Myosin I appears to cross-link actin filaments to control the tension inside each stereocilium. The ratcheting activity of this myosin motor along the actin filaments may adjust the sensitivity of the hair cells to different sounds. Other types of myosin use their motor activity to redistribute cellular constituents along the length of the actin filaments.

Myosin II consists of six polypeptide chains: two 220-kD heavy chains and two pairs of different light chains that vary in size between 15 and 22 kD, depending on their source. The N-terminal half of each heavy chain assumes a globular form that is stretched in one direction. Next comes a roughly 100 Angstrom long alpha helix stiffened by the two light chains wrapping around it. This portion of the protein acts as a lever when the muscle contracts. The C-terminal half of the heavy chain takes the form of an alpha helix that ends as a long, fibrous chain. Two of these associate to form a left-handed coiled-coil motif. The overall shape of myosin is a rod 1600 Angstroms long with two globular heads.

Myosin V has a different motor structure. It is a two-headed motor protein whose heavy chains diverge, and it carries out actin-dependent transport of vesicles, including axon-associated vesicles and melanin-containing pigment granules; together with microtubule-based transport, this is why defects in Myosin V affect coat colour. Myosin V does not form a thick filament like Myosin II; it acts by itself. The domain at the tail end binds a vesicle that carries pigments as its cargo. The lever region of this protein is long enough to have six light chains bound to it, giving it three times the light-chain capacity of Myosin II's counterpart lever arm.
In electron microscope (EM) images of Myosin V bound to an F-actin filament, the two globular heads are estimated to span thirteen actin subunits.
An almost complete history of Austria
In less than 1000 words

Stone and Iron Ages

The area of today's Austria was first populated at the time of the Neanderthals. The country's oldest piece of art, a depiction of a woman, is 32,000 years old. The ice mummy "Ötzi" was found in the Alps and is 5,300 years old. In the Iron Age, there were two important cultures: the Hallstatt Culture drew its prosperity from the salt trade and had important connections with Mediterranean civilisations; later, the Celtic La Tène Culture led to the formation of the kingdom of Noricum.

Celtic Kingdom, Roman Province, German county: 15 BC - 976 AD

In 15 BC, Noricum was annexed by the Roman Empire. Many cities were founded, roads were built, and the mixed Romano-Celtic population was Christianised in the 4th century. The invasions of different barbarian tribes in the 6th and 7th centuries led to a predominantly Bavarian population north of and in the Alps and a Slavonic one south-east of the mountains. The region was re-Christianised and came under the rule of the Franconian Empire, consolidated by Charlemagne. To protect the East Franconian lands from invasions of Avars and Hungarians, a county was formed. It was given to the rule of the house of Babenberg in 976 and later called the eastern march or Ostarrichi (Austria).

Early Middle Ages & Habsburg expansion to 1500

The Babenbergs were ambitious builders and transformed Austria from a wilderness into a centre of medieval culture. In 1156, Vienna became the capital. They also extended the territory and gained some autonomy rights ("Privilegium Minus") until the last Babenberger died in 1246. In 1278, the Habsburg family succeeded in securing Austria for their house. They revived the determined politics of the Babenbergs, until succession fights paralysed Austria in the late Middle Ages. Once these were resolved, the Habsburgs became Emperors of the Holy Roman Empire of the German Nation in 1452. This allowed them to rule as a European heavy-weight.
Around 1500, Emperor Maximilian I started a policy of strategic marriages to increase his power and possessions. In this way, the Habsburgs gained control over large parts of Southern Germany, the Netherlands, Burgundy, Spain with its colonies, and Southern Italy. Karl V said that he ruled over an empire on which the sun never set.

Reformation, Turks & baroque enlightenment

The 16th century saw a significant loss of power for the church; in the heat of the Reformation, Karl V had to resign, and the Habsburgs were split into a Spanish and an Austrian line. The Reformation and the Habsburgs' loyalty to the pope also stoked the Thirty Years' War (1618-1648), with its devastating effects on Central Europe. Another drastic factor was the threat of a Turkish invasion, most immediately felt at the two sieges of Vienna (1529 and 1683). Once the religious conflicts had settled and the Turks had been defeated in the early 18th century, Austria bloomed in Baroque glory, a very formative period for the country. The Baroque Empress Maria Theresia and her son Emperor Joseph II modernised Austria and the other "crownlands" and introduced many reforms driven by the new ideas of the Enlightenment.

French Revolution, Napoleon & Austria-Hungary: 1790-1900

In the late 18th century, the French Revolution shocked Europe's nobility. Its ideas of freedom and equality, and the nationalism it spread, were fought with a stubborn policy of censorship and suppression by the Habsburgs. In 1806, the Holy Roman Empire was dissolved when Franz I resigned its crown; he had already declared himself Emperor of Austria in response to Napoleon's coronation. After the Napoleonic Wars, Europe looked for a new order at the Vienna Congress. Austria gained Salzburg and took the chair of the German Confederation. It returned to absolutism until a revolution in 1848 forced the Emperor to abdicate and grant basic civil rights.
Increasing nationalism in the multi-ethnic Austrian Empire led to autonomy for Hungary, which stoked the urge for independence among other ethnic groups. Around 1900, Vienna was one of the biggest cities in the world, and its intellectual and cultural life peaked again.

WWI, first republic, WWII & Holocaust: 1914-1945

When the Archduke of Austria was shot by a Serbian nationalist, World War I started in 1914. It left millions dead and was a disaster for the Austro-Hungarian Empire, which split into many small countries. The Habsburg reign ended in 1919, when Austria became a republic, suffering badly from inflation, unemployment and the loss of a national identity. Tensions between Social Democrats and Conservatives escalated into open fighting. In 1934, a Conservative government took legislative power from the parliament, de facto making Austria a fascist country. Nazis and Socialists were persecuted, and independence from Nazi Germany was made a priority. In 1938, the Wehrmacht marched into Austria and was welcomed by cheering crowds (the "Anschluss"). Austria was merged with Nazi Germany. Its large Jewish population mostly fled the country. World War II and the Holocaust caused 300,000 Austrian victims (mostly soldiers, but also many Jews, Socialists and other persecuted groups); many Nazi criminals, including Hitler, were Austrian, too.

Post-war & today's Austria

After World War II, much of the infrastructure was destroyed. With international help, much was re-built in the following years. In 1955, Austria declared its neutrality and regained full sovereignty. The neutral status allowed the country to establish itself as a bridge between East and West during the Cold War, and Vienna became a centre for international organisations like the UN. The economy recovered, and in the 1970s, Social Democrat governments shaped Austria through a pronounced socialist tradition similar to Scandinavia's.
After the Iron Curtain fell, Austria was back at the heart of Europe. It joined the EU in 1995 and introduced the Euro in 2002. Politically, the two traditional blocks (Conservatives and Social Democrats) have lost relevance since the 1980s, and ideologies fade in favour of current issues.

There's more to Austria. If you care (and you should), read my more extensive essay "A History of Austria - With many details that I find interesting".
Hydropower is based on a simple process: taking advantage of the kinetic energy freed by falling water. In practice, this process is applied in many different ways depending on the electrical services sought and the specific site conditions. Accordingly, there is a wide variety of hydroelectric projects, each providing different types of services and generating environmental and social impacts of different nature and magnitude. This article illustrates the necessity of evaluating each hydroelectric project in relation to the services it provides and of comparing electricity supply projects on the basis of equivalent services provided to society. The impoundment and presence of a reservoir stand out as the most significant sources of impacts. However, a reservoir also provides the highest level of electricity supply services: it is the most efficient means of storing large amounts of energy, and a hydroelectric plant has the capacity to release this energy in quantities that can be adjusted instantly to electricity demand. Furthermore, a reservoir allows for many other uses besides energy storage, such as the cost-effective development of run-of-river plants downstream with little environmental impact. © 2002 Elsevier Science Ltd. All rights reserved.
Robotics: Design Basics: Design software

When designing your robot there are plenty of programs to help, ranging from a simple tool to print wheel encoders, through CAD drawing programs, up to mechanical simulation programs.

e.g. AutoCAD. This type of software is used to turn a rough sketch into a nice professional drawing. This type of drawing is standardized for readability. (Meaning every different type of line has a particular meaning: solid lines are visible edges, dashed lines are hidden edges, and line-dash-line lines are center lines. Standards also include methods of dimensioning and the types of views presented in a drawing.) Of course you're free to use your own standards, but using an industrial standard, such as ANSI or ISO, makes it easier to share your plans with other people around the world. While it may be somewhat more tedious to make a drawing using 2D software, the results are generally better than using 3D solid modeling software. Solid modelers still have problems translating 3D models into 2D drawings and adding proper notation to standards.

e.g. SolidWorks or Pro/Engineer (Wikipedia:Pro/ENGINEER). A newer way to draw parts and machines. With solid modeling you "build" the parts in 3D, put them together in an assembly and then let the software generate the 2D drawings (sounds harder than it is). The major advantage over 2D CAD programs is that you can see the complete part/machine without actually building it in real life. Mistakes are easily found and corrected in the model. These 3D models are not yet completely standardized, though there is a standard for digital data. At this time the 2D drawings this software generates do not conform completely to industrial standards. The 2D paper drawing is still the communication tool of preference in industry, and clarity of intent is very important.
Solid modeling software tends to generate overly complex drawing views with overly simplified dimensioning methods that likely do not correctly convey the fit, form or function of the part or assembly.

Pneumatic & Hydraulic Simulation

Festo has a demo version of both a pneumatic and a hydraulic simulation program. Look for FluidSIM Pneumatiek and FluidSIM Hydraulica. (Pick a country; click on industrial automation; and use the search field to the right.) Limitations: you can't save or print, and most of the didactic material isn't included.

IRAI has a free demonstration version of electric/pneumatic and hydraulic simulation software: AUTOMGEN / AUTOMSIM. Go to Download / AUTOMGEN7.

Schematic Capture & PCB

Software for drawing electronics schematics and designing Printed Circuit Boards (PCBs). These packages contain software to draw the schematic, libraries with symbols, and software to draw the PCBs (with autorouter). In no particular order:

- Freeware: Eagle is commonly used by beginners for their projects because a limited version is available for free. The toolset is well integrated and has a large hobbyist user base. However, once you progress beyond basic designs, you need to pay for the full version.
- Open Source: The open-source gEDA Project has produced a mature suite of applications for electronics design, including: a schematic capture program, attribute manager, netlister supporting over 20 netlist formats, analog and digital simulation, PCB layout with autorouter, and a Gerber viewer. The project was started in 1997 to write EDA tools useful for personal robotics projects, but as of this writing the tools are also used by hobbyists, students, educators, and professionals for many different design tasks. The suite runs best on Linux and OSX, although Windows ports of some apps have been made.
- Open Source: FreePCB is a mature, Windows-only, open-source PCB drafting tool.
- IntelligentCad.org has a few links to FPGA and PCB design tools (GPL)

There are many different programming languages available for µControllers:

- Assembly: Every µcontroller can be programmed in Assembly, though the differences between µcontrollers can be huge. Assembly gives you the most power over the µcontroller, but this power comes at a price: it is hard to learn and allows (almost) no code reuse. Assembly code is in essence translated machine code. It provides only the instruction set of the processor: add, subtract, maybe multiply, move data between registers and/or memory, and conditional jumps. There are no loops, complex selection, or built-in I/O as in C/C++, Basic, Pascal, etc. The disadvantage is that you have to implement everything yourself (lots of work even for the most simple programs). The advantage is that you have to implement everything yourself (programs can be written extremely efficiently, both in speed and size). This language is intended for advanced users and is usually only used as an optimisation for code in tight loops or for pushing the performance of a limited device to the edge of its abilities. Reasons to learn it:
  - Teaches you how the computer works at its lowest level.
  - Provides high speed code which consumes little memory.
  Reasons to avoid it:
  - Limited use.
  - Very hard to master.
- C: C offers power but is much more portable than Assembly. For most µcontrollers there is a C compiler available. The differences between µcontrollers are smaller here, except for using hardware. Learning C is much easier than learning Assembly; still, C isn't an easy language to learn from scratch. However, these days there are very good books available on this subject.
  - Freeware: GCC Tools for AVR Studio Software
- Basic: For many µcontrollers there are special flavours of Basic available. This is the easiest and fastest way to code µcontrollers, but you'll have to sacrifice some power. Still, modern Basic compilers can produce very impressive code.
- Limited Freeware/payware: Bascom AVR. Very good Basic compiler for AVR. Limited to 4 KB programs. There is also a version available for the 8051 µcontrollers.
- Limited Freeware/payware: XCSB PIC Basic compiler. Lite version. No 32-bit integer or floating-point support. (OS/2 WARP, Win95, Win98, Win2K, XP and Linux)
- Embedded Systems/Embedded Systems Introduction#Which_Programming_Languages_Will_This_Book_Use?
- Embedded Systems/PIC Programming#Compilers.2C_Assemblers

After you've written your program, you need to get it into your µcontroller. If you use C or Basic you'll have to compile it first. Then use a programmer to upload the code into the µcontroller. There are several different methods for this last step.
- External programmers: A device connected to a PC. You plug the µcontroller IC, EEPROM or other memory IC into its socket and let the PC upload the code. Afterwards you plug the IC into its circuit and test it. This can be time-consuming when updating your program after debugging.
- ISP (In-System Programming): The board with the µcontroller has a special connector for connecting to a PC. Hook up the cable, download code, test and repeat. A more modern method. Only disadvantage: it consumes some board space. Not all µcontrollers support this.
- Bootloader, also called "self-programming": The CPU accepts a new program through any available connection to a PC (no special connector needed), then programs itself. Not all µcontrollers support this. You also need some other programming method to get the initial bootloader programmed in (telling it exactly which connector to watch for a new program, the baud rate, etc.).

Modern µcontrollers have on-chip debug hardware called JTAG.

See this site for:
- The Motion Applet – Path modeling for the differential steering system of robot locomotion.
- The Encoder Designer – A design tool for encoder wheel patterns. (Wikipedia: Rotary encoder)
- RP1 – A mobile-robot simulator.
- Map Viewer – A mapping tool for mobile robotics, and the "Experimental Robotics Framework" for rapid prototyping of robotics algorithms.
- CAD & Linux has a long list of CAD tools that run under Linux, some of them GPL.
- Linux online: CAD/CAM has a long list of CAD tools that run under Linux, some of them GPL.
- Practical Electronics/PCB Layout has more information on using PCB design software.
- Urbiforge has free downloadable links to Urbi and tutorials on how to use Urbiscript. Urbi is AGPL.
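The Motion Applet listed above models paths for a differential-steering robot; the underlying kinematics are compact enough to sketch directly. A minimal Python version (the function name and parameter choices are illustrative, not taken from any of the tools listed):

```python
import math

def diff_drive_step(x, y, theta, v_left, v_right, wheel_base, dt):
    """Advance a differential-drive pose (x, y, heading) by one time step.

    v_left / v_right are wheel ground speeds (m/s); wheel_base is the
    distance between the two wheels (m).
    """
    v = (v_left + v_right) / 2.0             # forward speed of the midpoint
    omega = (v_right - v_left) / wheel_base  # turn rate (rad/s)
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Equal wheel speeds drive a straight line; unequal speeds turn the robot.
x, y, th = 0.0, 0.0, 0.0
for _ in range(100):                         # 1 second at 10 ms steps
    x, y, th = diff_drive_step(x, y, th, 0.5, 0.5, 0.3, 0.01)
# After 1 s at 0.5 m/s the robot has moved ~0.5 m straight ahead.
```

Simulators like RP1 integrate exactly this kind of update at a small time step; curvature comes entirely from the difference between the two wheel speeds.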
Offsetting greenhouse gas emissions using charcoal

By Darren Quick
August 11, 2010

According to a new study, as much as 12 percent of the world’s human-caused greenhouse gas emissions could be sustainably offset by producing biochar, a charcoal-like substance made from plants and other organic materials. That’s more than would be offset if the same plants and materials were burned to generate bioenergy, says the study. Additionally, biochar could improve food production in the world’s poorest regions, as it increases soil fertility.

Biochar is made by decomposing biomass like plants, wood and other organic materials at high temperature in a process called slow pyrolysis, which breaks down organic materials by heat in the absence of oxygen. Normally, biomass breaks down and releases its carbon into the atmosphere within a decade or two. But biochar is more stable and can hold onto its carbon for hundreds or even thousands of years, keeping greenhouse gases like carbon dioxide out of the air longer. Other biochar benefits include: improving soils by increasing their ability to retain water and nutrients; decreasing nitrous oxide and methane emissions from the soil into which it is tilled; and, during the slow pyrolysis process, producing some bio-based gas and oil that can offset emissions from fossil fuels.

The carbon-packed substance was first suggested as a way to counteract climate change in 1993. Scientists and policymakers have given it increasing attention in the past few years, and this new study, conducted by a collaborative team from the Department of Energy’s Pacific Northwest National Laboratory (PNNL), Swansea University, Cornell University, and the University of New South Wales, is the most thorough and comprehensive analysis to date of the global potential of biochar.

For their study, the researchers looked to the world’s sources of biomass that aren’t already being used by humans as food.
For example, they considered the world’s supply of corn leaves and stalks, rice husks, livestock manure and yard trimmings, to name a few. The researchers then calculated the carbon content of that biomass and how much of each source could realistically be used for biochar production.

With this information, they developed a mathematical model that could account for three possible scenarios. In one, the maximum possible amount of biochar was made by using all sustainably available biomass. Another scenario involved a minimal amount of biomass being converted into biochar, while the third offered a middle course. The maximum scenario required significant changes to the way the entire planet manages biomass, while the minimal scenario limited biochar production to using biomass residues and wastes that are readily available with few changes to current practices.

The researchers found that the maximum scenario could offset up to the equivalent of 1.8 petagrams – or 1.8 billion metric tons – of carbon emissions annually, and a total of 130 billion metric tons over the first 100 years. Avoided emissions include the greenhouse gases carbon dioxide, methane and nitrous oxide. The estimated annual maximum offset is 12 percent of the 15.4 billion metric tons of greenhouse gas emissions that human activity adds to the atmosphere each year. The researchers also calculated that the minimal scenario could sequester just under 1 billion metric tons annually and 65 billion metric tons during the same period.

Making biochar sustainably requires heating mostly residual biomass with modern technologies that recover energy created during biochar’s production and eliminate the emissions of methane and nitrous oxide, the study also noted.

Biochar and bioenergy

Instead of making biochar, biomass can also be burned to produce bioenergy from heat.
Researchers found that burning the same amount of biomass used in their maximum biochar scenario would offset 107 billion metric tons of carbon emissions during the first century. The bioenergy offset, while substantial, was 23 billion metric tons less than the offset from biochar. Researchers attributed this difference to a positive feedback from the addition of biochar to soils. By improving soil conditions, biochar increases plant growth and therefore creates more biomass for biochar production. Adding biochar to soils can also decrease nitrous oxide and methane emissions that are naturally released from soil.

However, the researchers say a flexible approach including the production of biochar in some areas and bioenergy in others would create optimal greenhouse gas offsets. Their study showed that biochar would be most beneficial if it were tilled into the planet’s poorest soils, such as those in the tropics and the Southeastern United States. Those soils, which have lost their ability to hold onto nutrients during thousands of years of weathering, would become more fertile with the extra water and nutrients the biochar would help retain. Richer soils would increase the crop and biomass growth – and future biochar sources – in those areas. Adding biochar to the most infertile cropland would offset greenhouse gases by 60 percent more than if bioenergy were made using the same amount of biomass from that location, the researchers found.

On the other hand, the authors wrote that bioenergy production could be better suited to areas that already have rich soils – such as the Midwest – and that also rely on coal for energy. Their analysis showed that bioenergy production on fertile soils would offset the greenhouse gas emissions of coal-fired power plants by 16 to 22 percent more than biochar in the same situation. The study also shows how sustainable practices can make the biochar that creates these offsets.
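The study’s headline figures, as reported above, can be cross-checked with a few lines of arithmetic (all numbers below are taken from the article, in billions of metric tons):

```python
max_annual_offset = 1.8   # maximum-scenario annual offset (1.8 petagrams)
annual_emissions = 15.4   # human-caused greenhouse gas emissions per year

share = max_annual_offset / annual_emissions * 100
print(f"{share:.1f}%")    # 11.7% – roughly the "12 percent" quoted

# Century totals: biochar vs bioenergy from the same biomass
print(130 - 107)          # 23 – the difference attributed to soil feedback
```

The numbers are internally consistent: 1.8 of 15.4 billion tons is just under 12 percent, and the century-scale biochar and bioenergy totals differ by 23 billion tons.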
“The scientific community has been split on biochar,” says PNNL’s Jim Amonette. “Some think it’ll ruin biodiversity and require large biomass plantations. But our research shows that won’t be the case if the right approach is taken.”

The researchers’ estimates of avoided emissions were developed by assuming no agricultural or previously unmanaged lands will be converted for biomass crop production. Other sustainability criteria included leaving enough biomass residue on the soil to prevent erosion, not using crop residues currently eaten by livestock, not adding biochar made from treated building materials to agricultural soils and requiring that only modern pyrolysis technologies – those that fully recover energy released during the process and eliminate soot, methane and nitrous oxide emissions – be used for biochar production.

“Roughly half of biochar’s climate-mitigation potential is due to its carbon storage abilities,” Amonette said. “The rest depends on the efficient recovery of the energy created during pyrolysis and the positive feedback achieved when biochar is added to soil. All of these are needed for biochar to reach its full sustainable potential.”

The study, "Sustainable biochar to mitigate global climate change," appears in the journal Nature Communications.
Advancing Basic Science for Humanity

2010 Nanoscience Prize Explanatory Notes

The ability to control the basic building blocks of matter on a very small scale is one of the core themes of nanoscience. Being able to put atomic, molecular, and nanoscale species where we want them provides new understanding of quantum properties and allows us to create new structures from scratch with a wide range of potential applications. In making their award, the Kavli Nanoscience Prize Committee has chosen two scientists whose development of unprecedented ways to control matter on the nanoscale has greatly pushed forward the boundaries of human knowledge and proved highly influential in inspiring hundreds of others to follow in their footsteps.

In 1989, Donald M. Eigler, of IBM’s Almaden Research Center, San Jose, California, US, became the first person to move an individual atom in a controlled way. Eigler’s breakthrough was made possible by the invention of the scanning tunneling microscope (STM) by Gerd Binnig and Heinrich Rohrer in 1981, a device that made possible the imaging of atoms by measuring changes in the way electrons hop between a sharp probe and a specimen as the probe shifts position. He built a low-temperature, high-vacuum STM so that atoms could be better visualised and studied, and as a result discovered it was possible to slide individual atoms across a surface using the tip of his STM. In a landmark experiment he dragged 35 xenon atoms one at a time across a nickel surface to spell out the name of his employer. He later refined his method so that the atoms could be lifted from the surface and released in a new location.

[Image – Crommie, Lutz & Eigler: 48 iron atoms positioned in a circular ring to “corral” surface-state electrons and force them into quantum states of the circular structure. The ripples in the ring of atoms are the density distribution of a particular set of quantum states of the corral.]

Eigler went on to create “quantum corrals,” which generated well-defined quantum wave patterns within 48 iron atoms positioned in a circle on a copper surface. In the year 2000 he demonstrated the formation of “quantum mirages,” in which the energy and distribution of copper surface electrons around a cobalt atom placed at a focal point of an elliptical quantum corral were detected at the ellipse’s other focal point, despite there being no second atom present. Later work included the development and operation of new logic circuits made from carbon monoxide molecules. Eigler showed that changing the orientation of one molecule could initiate a cascade of shifts in adjacent molecules. He used this phenomenon to generate the basic logic functions and other features required for computation, thereby creating the first computer circuit in which all components were of nanometer scale. Most recently he developed “single-atom spin-flip spectroscopy,” which made feasible the precise measurement of the amount of energy needed to flip an atom’s magnetic orientation and expanded our knowledge of the fundamental magnetic properties of atoms.

Nadrian C. Seeman, of New York University, in the US, is the founding father of structural DNA nanotechnology, a field that exploits the structural properties of DNA to use it as a raw material for the next generation of nanoscale circuits, sensors and biomedical devices. Most people are more familiar with DNA as the molecule that contains the genetic instruction set for living organisms. It is made up of sequences of the four bases A, T, C and G. Two complementary strands of DNA are attracted to each other to form the famous double-helix shape. Seeman realised in 1980 that this natural tendency of strands of DNA with matching base sequences to spontaneously attach to one another meant that synthesised short sections could be made to self-assemble into predictable forms.
Artistic rendering by Ken Eward of a DNA truncated octahedron constructed in Ned Seeman’s laboratory.

Seeman worked out the rules that govern DNA strand design and assembly so as to be able to create specific new shapes and structures. DNA is normally a linear molecule without branches; however, a DNA molecule that self-assembles to have branches or junctions can be created if, say, the two halves of an individual strand attach to two separate other strands. Seeman used this technique to guide branched DNA molecules into stick polyhedra including cubes and truncated octahedra. He also created DNA knots and Borromean rings. He went on to develop structures that were robust enough to be used as scaffolding for both crystalline lattices and nanomechanical devices, and created two-dimensional periodic arrays of DNA. Seeman used robust two-dimensional arrays of DNA to make metallic nanoparticles assemble into a checkerboard pattern. He designed the first DNA-based nanomechanical device, as well as robust, individually addressable 2- and 3-state nanomechanical devices. He developed ways in which DNA could be used to operate a robot arm, to capture target species, and to translate DNA sequences into polymer assembly instructions. Later he developed DNA-based “walkers” as a step towards creating devices that can move cargo, for example drugs, in molecular machines and in biomedical devices. More recently he has developed a programmable DNA-based assembly line.

Professor Arne Skjeltorp, of the University of Oslo, and chairman of the Kavli Nanoscience Prize Committee, said: “Donald Eigler’s demonstration of the ability to move individual atoms on a surface with atomic precision provided credibility and inspiration to what was at the time the emerging field of nanoscience. It could also be described as the research event that gave birth to nanotechnology.

“Nadrian Seeman’s invention of DNA nanotechnology is unprecedented as a method to control matter on the nanoscale.
In many ways it is still early days for the field; however, one day it promises to turn the basic molecular component of life into a means of producing a wide range of novel devices in fields ranging from electronics to biology.”
Students design and conduct a scientific experiment to test claims that aeroponic growing methods can produce more food in ways that use less land and water compared to traditional soil-based growing methods. Current Version: January 10, 2018 Think of the food you ate today, in the last week, and over the last year. Where did it come from? Did you grow any of it yourself? What resources were needed to produce it? According to some estimates, it takes 1.76 acres of land to grow the fruits and vegetables needed to feed a family of four for one year, and closer to 2 acres if wheat and corn are included. Fruits and vegetables are part of a healthy diet; but is there a less land-intensive way to grow them? Juice Plus+® thinks so. They make an aeroponic growing system called the Tower Garden®, which is claimed to use just a fraction of the land and water required by traditional horizontal gardening while producing higher crop yields. In the Tower Garden Challenge, students design and conduct a scientific experiment that rigorously tests these claims by comparing the two growing methods. Using scientific methods of inquiry and data collection, they take specific and accurate measurements, manage variables, make detailed observations, maintain a lab journal, visually represent their findings, draw conclusions, and consider aeroponic gardening’s future implications and real-world applications. They will present their findings to a panel of people who play the roles of experts and stakeholders such as urban planners, farmers, restaurant owners, and environmental engineers.
12F675 Tutorial 4: Making an LM35 temperature recorder

Measuring temperature is easy using an LM35. This page shows you how to make an LM35 temperature recorder by using the 12F675 PIC microcontroller as the controller and data store. It generates serial output so that you can view the results on a PC, and it also calculates the temperature reading in Fahrenheit, sending both to the serial port at half-second intervals. The project uses the code from the previous tutorials to report the temperature to the PC using the serial port, so the serial RS232 data format is generated in software.

The LM35 is a precision temperature sensor. It is guaranteed accurate to ±¼°C at 25°C. (At other temperatures it is less accurate, but it is never more than 2°C out, and it is probably not this inaccurate anyway; these are just the manufacturer's maximum limits.)
Typically it stays accurate to within ±¾°C over its temperature range, so this is a good general-purpose sensor and it's easy to use. It generates a linear output voltage on a centigrade scale – 10mV of output voltage for every degree centigrade change – and there are several versions for operation over different temperature ranges.

Note: The project code calculates the temperature in Fahrenheit and generates both Centigrade and Fahrenheit outputs to the serial port.

Pinout for the LM35DZ.

The LM35 is connected to analogue input AN0, which is also the data input line for programming the 12F675 using ICSP, so you need a way of connecting the sensor and the programming input at the same time, with the programming input overriding the sensor output (and not damaging the sensor). This is done here by using a 1k resistor that reduces the current flowing back into the sensor and at the same time is not too large (so that the ADC can easily convert the sensor output value – the impedance seen from the sensor must be 10k Ohm or less).

The analogue reference for the circuit is taken from pin 6 using a resistor divider giving a 2.5V reference. This is simply done to increase the resolution of the ADC: the LM35 only generates 0–1V, so you lose ADC range when using a 5V reference. You could use a lower reference value, but 2.5V gives a good balance between resolution and headroom. Alternatively you could use an amplifier to scale the LM35 output up, which would make the ADC less sensitive to noise, but for this project it is simpler not to do so.

Note the large decoupling capacitor on the supply input of the 12F675. This reduces noise overall and gives a more consistent reading. However, using a plug block with an ADC is not a very good idea, as there is no ground plane and no control over current paths, which you would be able to control on a PCB.
In a commercial system the internal ADC is often not used at all, as it is essential to isolate noise from the ADC using separate grounds and shielding – some designs encase the ADC in a custom metal shield which, along with a ground plane connected to the shield, gives the best possible result. To overcome noise problems on the ADC, the software averages the input readings so you get a better result.

Add the components (at top right) to the temperature recorder: wires, R3, R4, R5, the LM35 temperature sensor (U4) and the decoupling capacitor C4.

LM35 Temperature Recorder Circuit

The analogue reference for the ADC is taken from the power supply resistive divider to 12F675 input pin 6. The 7805's accuracy is specified as ±5%, so the accuracy of the ADC is only 5% due to the reference – the divider also introduces a 1% error, giving a 6% error overall.

Note: Since the 7805 is only accurate to ±5%, the temperature reading can be no more accurate than that (plus errors in the ADC and the temperature sensor itself, and any noise introduced at the analogue input and the reference). The reference source gives you the biggest error – the overriding accuracy. If you used a more accurate voltage supply then the ADC accuracy would become more important, as well as the temperature sensor accuracy, etc.

The software uses the Soft USART (transmit only) described in the previous tutorial and uses the built-in MikroC routines to get the data from analogue input pin AN0.

Source code files: To get the software project files and C source code, click here.

// Temperature recorder (excerpt)
val = ADC_Read(0);            // read analogue input AN0
// ... more code adds up 10 readings of the ADC ...
val = ((val/MAX_AVG)*122)/50; // average, then scale (2.44mV per ADC bit)
val = ((val*18)/10)+320;      // Convert to Fahrenheit: x 9/5, add 32, scaled to 3 digits

The interesting parts of the software are shown above.
The variable val is an unsigned int, so the maximum value it can store is 65535. The reference in use is 2.5V, so for the 10-bit ADC each ADC bit is worth 2.5/1023 = 2.44mV.

Work out the values generated for a maximum temperature of 100°C using the scale factor 2.44mV (expressed as 244/100):

100 * 10mV = 1.0V
1.0V / 2.44mV = 410
410 * 244 = 100,040 – which will not fit into an unsigned int.

So this scale factor does not work for all input values. With a little maths it can be made to fit – you need to reduce the top number. For example, 410 * 122 = 50,020, which does fit. Dividing by 50 (instead of 100) preserves the correct scale factor of 244/100. So the scale 122/50 works for all input values.

This is an example of avoiding the use of floating-point variables, which take up too many resources. You can still make the system work, but you have to be careful when using fixed types, and you have to check all input values and outputs to make sure they fit.

Averaging would be better done in the PC, as it has more resources – the same goes for calculating and displaying the temperature in Fahrenheit – but this gives a demonstration of what you can do.

Note: The RAM is used up because a bug in this version of MikroC puts strings in RAM; this will be fixed in future versions.

Typical output: the leftmost value is the raw ADC value, the next is the temperature sensor output in degrees Centigrade, and the next is the temperature sensor output in degrees Fahrenheit. Note that you have to put in the decimal point yourself when reading the values.
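The fixed-point scaling argument above can be checked mechanically. A quick sketch in Python (this is an illustrative re-check, not the MikroC code itself; the 16-bit limit and the 122/50 factor come from the tutorial):

```python
U16_MAX = 65535           # an unsigned int on the 12F675 is 16 bits

adc = 410                 # ADC reading at 100°C (1.0V / 2.44mV per bit)

# The naive 244/100 scale factor overflows 16-bit arithmetic:
assert adc * 244 == 100040
assert adc * 244 > U16_MAX

# The tutorial's 122/50 factor fits and represents the same ratio:
assert adc * 122 == 50020
assert adc * 122 <= U16_MAX
assert 122 / 50 == 244 / 100 == 2.44

# Full conversion as in the tutorial code (values scaled by 10):
val = (adc * 122) // 50          # -> 1000, i.e. 100.0 °C
f = (val * 18) // 10 + 320       # °C x 9/5 + 32, still scaled by 10
assert (val, f) == (1000, 2120)  # 100.0 °C is 212.0 °F
```

Every intermediate product stays below 65535 (the largest is 50,020), which is exactly the property the 122/50 factor was chosen for.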
A wave that changes speed as it crosses the boundary between two materials will also change direction if it crosses the boundary at an angle other than perpendicular. This is because the part of the wavefront that gets to the boundary first slows down first. The bending of a wave due to changes in speed as it crosses a boundary is called refraction. As mentioned in the last chapter, light in air or a vacuum travels at c = 3.0×10⁸ m/s but slows down when passing through glass. As shown in the diagram below, this will cause light to change direction a little. For a piece of glass with flat surfaces this isn’t very noticeable unless the glass is very thick. But for a curved surface the light ends up leaving the glass going in a different direction, and this is how lenses for glasses, telescopes, microscopes, binoculars, etc. are made.

What about sound? Sound also undergoes refraction. Recall from the last chapter that wind can change the speed of sound in air. In the following picture notice that Jill can hear Jack because the wind speeds up the upper edges of the sound, bending it back towards the ground. Jill can’t hear Dana because the wind bends the sound upward. Likewise we know that the speed of sound depends on density, which changes with temperature and humidity. In the following picture notice that Jill can hear Jack because the warmer temperature speeds up the upper edges of the sound, bending it back towards the ground. In the second picture warmer air is trapped underneath cooler air, so the sound bends upward: Jill sees but does not hear the lightning (this is sometimes called heat lightning, as shown in the second figure below).

Examples:
- The broken straw illusion (due to refraction of light).
- Refraction of sound in a balloon full of gas (see the last chapter).
- Optical illusions due to the refraction of light.

Snell’s law tells you how much a light wave will bend when going from air to glass or vice versa.
Light going into the glass ends up with a refracted angle that is smaller than the incident angle. Going the other way (glass to air), the light ends up with a larger refracted angle than the incident angle. In this case, what happens if the refracted angle tries to exceed 90 degrees? The light reflects back into the glass rather than passing into the air. This is known as total internal reflection and is a consequence of Snell’s law. An example of total internal reflection can be seen in a water stream. The same thing happens in a fiber optic cable; light stays inside the cable because of total internal reflection.

This Ripple Tank Simulation by Paul Falstad lets you look at waves being bent by refraction and temperature gradients. Directions: First choose Setup: Refraction. What is going on? Why does the wave change direction when it reaches the lower medium? Now choose Setup: Temperature Gradient 1. Why do the waves bend to go downwards? What is the parallel between this simulation and the description given by Paul Hewitt (second link in this list)?
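The behaviour just described – bending toward or away from the normal, and total internal reflection past the critical angle – follows directly from Snell's law, n₁ sin θ₁ = n₂ sin θ₂. A short Python sketch (the index n = 1.5 for glass is a typical textbook value, not taken from this page):

```python
import math

def refracted_angle(theta_incident_deg, n1, n2):
    """Snell's law: n1*sin(t1) = n2*sin(t2). Returns the refracted
    angle in degrees, or None for total internal reflection."""
    s = n1 / n2 * math.sin(math.radians(theta_incident_deg))
    if s > 1.0:
        return None  # no real refracted angle: total internal reflection
    return math.degrees(math.asin(s))

# Air (n = 1.0) into glass (n = 1.5): the ray bends toward the normal
print(refracted_angle(30.0, 1.0, 1.5))   # about 19.5 degrees

# Glass to air: the critical angle is asin(1/1.5), about 41.8 degrees;
# a 45-degree ray is trapped inside the glass
print(refracted_angle(45.0, 1.5, 1.0))   # None
```

The trapped 45-degree ray is exactly the fiber-optic case: any ray striking the cable wall beyond the critical angle stays inside.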
In this clip, students are gathered around in a whiteboard circle to discuss constant acceleration equations. One student (S1) is explaining her group’s results regarding displacement. When she’s finished, another student (S2) expresses his concerns regarding the average velocity formula that S1 had written on her whiteboard. S2 uses another group’s results as evidence to point out that the average velocity formula that S1 had written down was actually the formula for average acceleration. The clip concludes with S1 realizing her mistake. This clip takes place during week 3 (Becoming Quantitative with Constant Acceleration). The problem that is being discussed comes from the Becoming Quantitative worksheet.
With reference to food chains in ecosystems, consider the following statements:
1. A food chain illustrates the order in which a chain of organisms feed upon each other.
2. Food chains are found within the populations of a species.
3. A food chain illustrates the numbers of each organism which are eaten by others.

Which of the statements given above is/are correct?
Activity 1. Review of circumstances surrounding Whiskey Rebellion of 1794 The Whiskey Rebellion of 1794 is regarded as one of the first tests of federal authority in United States history and of the young nation's commitment to the constitutional rule of law. Introduce students to the circumstances surrounding this pivotal event, referring for background to the "Editorial Note" to George Washington's diary of his campaign against the rebels, available through EDSITEment at the Papers of George Washington website. (At the website's homepage, click on "Selected Documents" in the navigational frame, then select "The Whiskey Insurrection, from Washington's Diaries.") The following timeline, drawn from the "Editorial Note," may also prove helpful:
- March 1791: Federalists in Congress succeed in passing an excise tax on domestically distilled spirits (i.e., liquor) and provide an elaborate system of local inspectors and collection officers to ensure that the tax is paid.
- September 1792: The excise tax provokes opposition in frontier areas, where spirits were distilled primarily for personal use, not for sale, and where a tradition of militant individualism objected to the presence of tax inspectors. In response, George Washington issues a presidential proclamation condemning activities that tend "to obstruct the operation of the laws of the United States for raising a revenue upon spirits distilled within the same."
- July 1794: Following unsuccessful petitions against the excise tax, an armed group in western Pennsylvania attacks a federal marshal when he attempts to serve papers on those who have not registered their stills as required by law. Two days later, insurgents burn the home of the local tax collector. As the uprising spreads, government agents and local citizens sympathetic to the government become the target of violence and harassment.
- August 2, 1794: Washington confers with Pennsylvania officials and his cabinet to set a course for meeting this emergency. He decides to lay the matter before a Justice of the Supreme Court in order to determine, as one cabinet member wrote, "all the means vested in the President for suppressing the progress of the mischief." Two days later the court rules that circumstances in western Pennsylvania cannot be controlled by civil authorities and warrant a military response. - August 7, 1794: Washington calls up the militia in Pennsylvania, New Jersey, Maryland, and Virginia to assemble a force of nearly 13,000 men, "feeling the deepest regret for the occasion, but withal, the most solemn conviction, that the essential interests of the Union demand it." He also offers amnesty to all insurgents who "disperse and retire peaceably to their respective abodes" by September 1. - August 21, 1794: Washington sends three federal commissioners into western Pennsylvania in a final attempt to resolve the situation peacefully. Their efforts are met with violent resistance, and on September 24 they report that "there is no probability that . . [the laws] can at present be enforced by the usual course of civil authority, and that some more competent force is necessary to cause the laws to be duly executed." - September 25, 1794: Washington issues a proclamation ordering the militia to assemble and march against the insurgents: "Every form of conciliation not inconsistent with the being of Government, has been adopted without effect . . . [and] Government is set at defiance, the contest being whether a small portion of the United States shall dictate to the whole union, and at the expence of those, who desire peace, indulge a desperate ambition; Now therefore I, George Washington, . . . deploring that the American name should be sullied by the outrages of citizens on their own Government; . . . but resolved . . . 
to reduce the refractory to a due subordination to the law; Do Hereby declare . . . that a force . . . adequate to the exigency, is already in motion to the scene of disaffection; . . . And I do, moreover, exhort all individuals, officers, and bodies of men, to contemplate with abhorrence the measures leading directly or indirectly to those crimes, which produce this resort to military coercion." Activity 2. Consult relevant sections of the Constitution As Washington's consultations with the Supreme Court suggest, the Whiskey Rebellion raised questions about governmental authority under the new Constitution. Was this a situation in which the President was empowered to "take care that the laws be faithfully executed" (Article II, Section 3)? Was it a situation in which the Congress was required "to provide for calling forth the militia to execute the laws of the Union, suppress insurrections and repel invasions" (Article I, Section 8, Number 15)? Or was it simply a local matter, a breakdown of law and order in western Pennsylvania which the state should deal with on its own (as implied by the Tenth Amendment)? Have students consult these sections of the Constitution, which is available through EDSITEment at the Avalon Project website. Activity 3. Read Washington's Diary Turn from these constitutional issues to Washington's handling of this crisis by having students read his Diary for the period from September 30 to October 20, when he rode west to review his troops at their assembly points and issue commands for their march into western Pennsylvania. Focus attention on Washington's entry for October 6 to 12, where he describes a meeting with two representatives from the insurgent region, William Findley and David Redick, both prosperous landowners who had infiltrated the rebel movement. Divide the class into study groups and have each group outline the arguments made on either side. Then stage a Nightline-style investigative report into what happened at the meeting. 
Assign students to speak for Findley, Redick, and Washington, and after each explains his objectives in the meeting and his views on why negotiations "broke down," have members of the class raise questions about the positions each side took and about options they might have considered. For example: - Why did Findley and Redick attempt to halt Washington's army? Were they seeking to protect the rebels? If they were really as frightened by the rebels as they claimed, why did they resist this chance to restore law and order? - Why did Washington reject the argument that he should turn back since the insurgency was losing steam and would soon blow over anyway? Wasn't he worried that bringing federal troops into the area might stir up fresh trouble, as federal tax inspectors had done in August? What was he trying to accomplish by marching 13,000 men into this remote part of the country against a rag-tag array of small farmers? - When Findley and Redick asked Washington what proof he wanted that the rule of law had been restored in their region, he answered that "they knew as well as I did." What proof did Washington require? What would have satisfied him? Could any community produce proof that its citizens are in "absolute submission" to the law? - Why did Findley and Redick emphasize that the rebels were ignorant and "men of little or no property"? Were they implying that the insurgency was really the work of an underclass, people who would sink back into impotence and insignificance now that the excitement was over? To what extent did Washington share this "blame it on the riff-raff" view of the situation? Would he have marched an army into a prosperous community that took up arms when its petitions had been ignored? - Were there other options open to Washington? Why couldn't he keep his army at the ready to see if the rebellion had really run its course? 
Why couldn't he have told Findley and Redick that he would withdraw if the people of their region would hand over the rebel leaders? Why did he feel it necessary to press on with the invasion? - What was really at stake for Washington in this confrontation? The security of the nation? Civil order and tranquillity? Respect for federal authority? What mattered so much that he was willing to run the risk of war between the government and its citizens? Activity 4. Produce and share newspapers probing aspects of Washington's decision After students have probed these aspects of Washington's decision, remind them that during this period he faced increasing political controversy as well. The excise tax had been a Federalist measure, after all, designed to help pay the costs of Hamilton's financial policies, and its opponents included those who were organizing what would soon become the Democratic-Republican party under Jefferson. Antagonism between these groups deepened over Washington's handling of the Whiskey Rebellion: "An insurrection was announced and proclaimed and armed against, but could never be found," Jefferson said of it, whereas Hamilton argued that suppressing the rebellion "will do us a great deal of good and add to the solidity of everything in this country." - Have students explore this dimension of Washington's decision by reading his Sixth Annual Message to Congress, delivered soon after his return from western Pennsylvania. The speech is available through EDSITEment at the Presidential Speeches website. (At the website's homepage, click on "George Washington," then select "Annual Message, 1794-11-19.") - Divide the class into two "factions," as they were called at the time, Federalist and Democratic-Republican. Form study groups within each faction and have each group produce a partisan newspaper reporting on Washington's address and his recent actions against the insurgents. 
In their newspapers, students should comment on Washington's analysis of the situation, his justification for employing military force, and his claim that "consolations" have come out of this crisis. - Have students on both sides note in particular what Washington had to say about "the origin and progress of the insurrection," where he fixes the blame on "combinations of men who . . . have disseminated . . . accusations of the whole Government." Who are these "combinations of men," which he elsewhere describes as "certain self-created societies"? And how did his suspicions about them influence his decision to carry through with the use of military force? To what degree were his actions, in other words, a show of political power designed to send a message to his political opponents as well as an exercise of executive power against those who would defy the law? Activity 5. Discussion reporting on closing chapters of Whiskey Rebellion After students have produced and shared their newspapers, discuss in class how each side might have reported on the closing chapters in the Whiskey Rebellion: - November 17, 1794: Hamilton writes to Washington from western Pennsylvania that "the list of prisoners has been very considerably increased, probably to the amount of 150. . . . Subsequent intelligence shews that there is no regular assemblage of the fugitives . . . only small vagrant parties . . . affording no point of Attack. Every thing is urging for the return of the troops." - November 19, 1794: Hamilton notifies Washington that the army "is generally in motion homeward," leaving behind a regiment to maintain order. - July 10, 1795: Washington issues a pardon to those insurgents who were taken prisoner but were not yet sentenced or indicted. By this time, most had already been acquitted for lack of evidence. Activity 6. 
Applying Washington's policy to later examples of civil unrest Conclude the lesson by having students consider how Washington's policy for dealing with the Whiskey Rebellion, and the reasoning that motivated his actions, would apply to later examples of civil unrest. Would he have viewed those who fought for racial equality in the Civil Rights Movement in the same light as the Pennsylvania insurgents who fought against an onerous tax? Have students offer more recent examples of citizens threatening civil order in the belief that their cause is just. What can we learn from Washington's precedent-setting response to the dilemma that arises in such situations? Did the insurgents' cause weigh in his decision to force submission to the law? Should the cause for civil disobedience determine how government responds?
Drawing with simple geometric shapes is not as easy as it seems. No wonder one of the first subjects in art schools is "drawing": it is very important to learn to see the components of an image and to be able to isolate its simple parts. Let's talk about how to draw with ovals. The first step is to learn how to draw the oval itself. A regular oval is a shape without sharp corners or parallel sides. The far part of the oval should be drawn smaller, the near part larger. First, draw a vertical line - the main line of symmetry for building the oval. Then draw a horizontal line and mark the widest part of the shape on it. Now define the proportions, marking the length and width of the oval with dots. Draw the smaller (far) and larger (near) arcs. With a trained eye, there is no need to construct the axes at all. Now let's try to draw a clownfish with ovals. The body of this fish is shaped like an elongated oval. So first draw an elongated oval, then "cut off" the excess, drawing in the desired proportions. The dorsal fin of a clownfish has an unusual shape. Draw arched lines, making them shorter the closer they are to the tail. Then connect the lines of the dorsal fin, shade the fins, and draw lines on the body. Now try drawing a turtle. First draw an oval, step back a little from the bottom line and draw another line. You will get the turtle's shell. Now draw a circle to the left of the oval shell. Connect the head and shell with thin lines - this is the neck. Draw paws in the form of ovals at the bottom. Draw a small tail at the back. Draw cheeks, a mouth, and eyes. Erase all unnecessary lines. The turtle is ready!
Despite all our smarts and scientific advancements, there is still a lot we don’t know about the phenomenon of human language. We don’t know what the first human language sounded like. We don’t know exactly where, how, or when it came to be. We may never be able to find out—there’s an overwhelming lack of data to work with. What we can say, however, is that once we figured out how to create language, we went ahead and created a bunch of them. And we’re still doing it today. Where There’s a Need, There Is a Way Two of the main channels for using language are speech, our ability to create sounds, and hearing, our ability to perceive sounds created by others. We can also give language a visual form by writing, but the visual element is important even in face-to-face communication. We send off and receive a number of non-verbal communication signs, such as facial expressions, postures, and gestures. A number of our fellow humans aren’t able to communicate through all of these channels, but that doesn’t stop them from finding effective ways to communicate. People who are deaf learn sign language—a language of hand gestures and signs that allows them to communicate with great fluency. But what happens in a community of deaf people who don’t have a sign language they can use to communicate? They come up with their own. When a group of deaf children in Nicaragua was taught to lip-read and use American Sign Language, they shunned lip-reading and quickly developed a sign language of their own—behind the backs of their teachers. The result was a completely new language, developed in the 1980s by Nicaraguan kids. And just like that, Idioma de Señas de Nicaragua, or ISN, was born. People who can’t hear or see have an even bigger challenge—they can’t rely on signs and gestures. In the United States, people who are deaf and blind have been developing a sign language that is based on American Sign Language but has a tactile twist to it.
A person speaking in Pro-Tactile ASL, which is what the new language is called, uses her own hands and arms, as well as the hands and arms of the person she’s talking to, to create gestures and signs. It’s a contact language that allows speakers to communicate nuances such as nodding and other gestures. Other Reasons to Invent a Language Constructed languages have been created with different agendas, apart from the basic human need to communicate. Ludwik Lejzer Zamenhof, the creator of arguably the best-known constructed language in the world, Esperanto, wanted to make a language that was easy to learn, could be used as an international second language, and could help overcome cultural misunderstandings. Robot Interaction Language, or ROILA, is a language currently under development at the Eindhoven University of Technology’s Department of Industrial Design. It is the first language created specially for use by talking robots. Loglan, created by Dr. James Cooke Brown, is a language used by linguists to research linguistic relativity. But new languages also pop up spontaneously when conditions are right. People living in Lajamanu, a small and isolated town in Australia, already had a heritage language they could speak, Warlpiri. They also spoke both English and Kriol, an English-based creole. When parents spoke to their kids in a mixture of the three languages, the kids took the words they heard and married them with a syntax that wasn’t present in any of the three parent languages, creating a new native language for about 350 of Lajamanu’s residents. It’s spoken only by people who are under thirty-five years old. Artistic Languages That Entertain Fantasy settings invite us to create new languages. Alien cultures, alternative histories, dystopian futures, worlds of magic and swordplay—these settings are often very different from the world we live in. So, it only makes sense to, at least from time to time, populate these strange worlds with their own languages.
Occasionally, you’ll get fantasy languages that really work (kind of). You might call them artistic languages, or artlangs. If you’re a fan of the Star Trek franchise, you probably know there are Trekkers who can speak Klingon, a language created for a Star Trek movie by the American linguist Marc Okrand. If you’re familiar with the works of J. R. R. Tolkien, you’re probably aware of Quenya and Sindarin, two Elvish languages of Middle-earth. Na’vi, created by Dr. Paul Frommer, is what the big blue aliens speak in James Cameron’s movie Avatar. The Verdurian language was created by Mark Rosenfelder for a role-playing game, and it contains 400,000 words. There’s no end to human inventiveness when it comes to language. What new languages do you speak?
Most states require secondary students to pass standardized assessments in English language arts and math. Teachers are instrumental in their students’ success in standardized testing situations. It is the responsibility of teachers to properly prepare students and the testing environment before tests are administered. The purpose of this assignment is to analyze standardized testing to improve student performance and confidence. Research a state standardized assessment for secondary students. Create a 10-12 slide digital presentation for new teachers explaining required assessments in your state, including the following: Describe the universal testing conditions required or allowed. Explain the accommodations permitted and how you will implement the accommodations to meet the needs of students with IEPs and 504 plans, as well as English language learners. Explain how to prepare the classroom to maximize student confidence and success. Locate sample or practice tests and explain how they could be implemented in a classroom. (Include links and resources.) Provide 2-3 examples of instructional activities to prepare students for testing based on their intellectual, social, and emotional developmental needs. Describe how student interest and prior knowledge will be utilized in preparing students for testing. Identify challenges that can be anticipated in testing preparation and implementation. How would you prepare for the unexpected? Include a title slide, a reference slide, and presenter notes. Support your work with 2-3 credible resources. APA format is not required, but solid academic writing is expected.
Chapter 4: Speech Sounds in the Mind Each speech sound can be analyzed in terms of its phonetic features, the parts of the sound that can each be independently controlled by the articulators. We can represent the features of each sound using a feature matrix, or we can use a feature matrix to represent a class of sounds that have features in common. 1. Which feature distinguishes the segments [w] and [o]? 2. Which feature distinguishes the segments [p] and [f]? 3. Which feature distinguishes the segments [p] and [b]? In our thinking about speech sounds so far, we’ve focused almost entirely on segments. Segments are the individual speech sounds, each of which gets transcribed with an individual symbol in the IPA. We’ve seen that any given segment can influence the segments that come before and after it, through coarticulation and other articulatory processes. And we’ve also seen that segments can be grouped together into syllables, which we look at in more detail in another unit. Within the grammar of any language, two different segments might contrast with each other or might not. So we’ve been talking as if segments are the smallest unit in speech, but in fact, each speech segment is made up of smaller components called features. Each feature is an element of a sound that we can control independently. To see how features work, let’s look at a couple of examples. We can describe the segment [b], for example, as being made up of this set of features. First, [b] is a consonant (meaning it has some obstruction in the vocal tract), so it gets the feature consonant indicated with a plus sign to show that the consonant feature is present. Looking at the next feature, sonorant, notice that it’s indicated with a minus sign, meaning that [b] is not a sonorant. The feature sonorant, of course, has to do with sonority. We know that stops have very low sonority because the vocal tract is completely closed for stops, so stops are all coded as [-sonorant]. 
The next feature, syllabic, tells us whether a given segment is the nucleus of a syllable or not. Remember that the most common segments that serve as the nucleus of a syllable are vowels, but stops certainly cannot be the nucleus, so [b] gets labelled as [-syllabic]. These first three features, consonant, sonorant, and syllabic, allow us to group all speech segments into the major classes of consonants, vowels, and glides. We’ll see how in a couple of minutes. This next set of features has to do with the manner of articulation. The feature continuant tells us how long a sound goes on. Stops are very short sounds; they last for only a brief moment, so [b] gets a minus sign for continuant. We also know that [b] is not made by passing air through the nasal cavity, so it also gets a minus sign for the feature nasal. And [b] is a voiced sound, made with vocal folds vibrating, so it is [+voice]. The last feature we list for [b] is [LABIAL] because it’s made with the lips. (Stay tuned for an explanation of why some features are listed in lower-case and some in upper-case.) This whole list of features is called a feature matrix; it’s the list of the individual features that describe the segment [b], in quite a lot of detail! Because features are at the phonetic level of representation, we use square brackets when we list them. You often see a feature matrix listed with a large pair of square brackets, like this, but we’ll just use individual square brackets on each feature. Now I want you to notice something. If we take this whole feature matrix and change the value of just one feature, changing the feature voice from plus to minus, now we’re describing a different segment, [p]: [p] has every feature in common with [b] except for voicing.
Likewise, if we take the feature matrix for [b] and change the value of the feature continuant from minus to plus, now we’re describing the segment [v], which has all the same features as [b] except that it can continue for a long time because it’s a fricative. Or if we take the feature matrix for [b] and change the feature nasal from minus to plus, this has the effect of changing the sonorant feature to plus as well, because circulating air through the nasal cavity adds sonority. Now, this feature matrix describes the properties of the segment [m]. So each feature is something that we can control independently of the others with our articulators. And changing just one feature is enough to change the properties of a segment. That change might lead to a phonemic contrast within the mental grammar of a language, or it might just result in an allophone of the same phoneme. It turns out that segments that have a lot of features in common tend to behave the same way within the mental grammar of a language. And we can use these features to group segments into natural classes that capture some of these similarities in their behaviour. Let’s look again at the feature matrix for [b]. If we take away the feature that describes its place of articulation, we end up with a smaller list of features. This smaller list describes not just a single segment, but a class of segments: all the voiced stops. By not mentioning the place feature, we’ve allowed this matrix to include segments from any place of articulation, as long as they share all these other features. These three segments have all these features in common: they’re a natural class. If we remove another feature, the voicing feature, the natural class gets bigger: now we’ve got a feature matrix that describes all the stops in English, including those that are [+voice] and those that are [-voice]. So you can see that this system of features is very powerful for describing classes of segments that have things in common.
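The feature matrices described here are, in effect, a small data structure, and the natural-class idea can be made concrete in code. Below is a minimal sketch in Python (not part of the chapter itself): each segment maps to a dict of feature values, and a natural class is simply the set of segments whose matrices match a partial matrix. The values for [b], [p], [v], and [m] follow the examples above; the entries for [t] and [d], and the treatment of place as a simple key-value pair, are illustrative simplifications rather than a full phonological analysis.

```python
# A toy model of feature matrices: each segment maps to a dict of features.
# "+" / "-" mark binary feature values; the place feature is simplified here
# to a plain key-value pair ("LABIAL", "CORONAL") for illustration.
segments = {
    "b": {"consonant": "+", "sonorant": "-", "syllabic": "-",
          "continuant": "-", "nasal": "-", "voice": "+", "place": "LABIAL"},
    "p": {"consonant": "+", "sonorant": "-", "syllabic": "-",
          "continuant": "-", "nasal": "-", "voice": "-", "place": "LABIAL"},
    "v": {"consonant": "+", "sonorant": "-", "syllabic": "-",
          "continuant": "+", "nasal": "-", "voice": "+", "place": "LABIAL"},
    "m": {"consonant": "+", "sonorant": "+", "syllabic": "-",
          "continuant": "-", "nasal": "+", "voice": "+", "place": "LABIAL"},
    "d": {"consonant": "+", "sonorant": "-", "syllabic": "-",
          "continuant": "-", "nasal": "-", "voice": "+", "place": "CORONAL"},
    "t": {"consonant": "+", "sonorant": "-", "syllabic": "-",
          "continuant": "-", "nasal": "-", "voice": "-", "place": "CORONAL"},
}

def natural_class(**features):
    """Return every segment whose matrix contains the given feature values."""
    return sorted(seg for seg, matrix in segments.items()
                  if all(matrix.get(f) == v for f, v in features.items()))

# The voiced stops: no place feature is mentioned, so segments from any place
# of articulation can match, just as in the description of natural classes.
print(natural_class(consonant="+", sonorant="-", continuant="-",
                    nasal="-", voice="+"))   # ['b', 'd']

# Drop the voice feature as well and the class grows to include all stops.
print(natural_class(consonant="+", sonorant="-", continuant="-", nasal="-"))
```

Leaving a feature out of the query widens the class: omitting place gives the voiced stops, and omitting voicing as well gives every stop in the toy inventory, mirroring how removing features from a matrix describes progressively larger natural classes.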
We’ll learn more about natural classes in the next unit.
Life on Earth is most diverse at the equator. This pattern, where species biodiversity increases as we move through the tropics towards the equator, is seen on land and in the oceans and has been documented across a broad range of animal and plant groups, from mammals and birds to ants and even trees. Despite this pattern being so striking today, the distribution of biodiversity across latitudes, called the latitudinal biodiversity gradient, hasn't always been like this. Studies looking at the evolution of biodiversity by latitude have shown that during some intervals in Earth's history, species biodiversity was actually highest at latitudes far from the equator. Understanding why latitudinal biodiversity has shifted over hundreds of millions of years, often in connection with mass extinction events, is critical in today's world, where we're facing climate change, habitat loss, and decreasing biodiversity worldwide. Looking back in geological time reveals an alarming picture of what we're set to lose if we fail to address rising global temperatures. Several different hypotheses have been proposed to explain why high biodiversity clusters around certain latitudes, but climate is often regarded as a key driver, both in the present day and through history, as shown by the geological record. Climate affects organisms in many ways, including where they can live, when they reproduce, and even how they control internal processes such as temperature regulation. Modern biodiversity peaks in low-latitude equatorial regions, such as the tropical rainforests of the Amazon and central Africa. This pattern is more likely to be recorded during icehouse times, when ice sheets are present at both poles simultaneously, as they are today. During warmer intervals, called hothouse or greenhouse Earth states, bimodal peaks have been recorded.
This means there were two bands where biodiversity was highest, and these wrapped around the Earth at mid-latitudes, or regions sitting between 25° and 65° north and south of the equator. The fossil record provides our best window into Earth's ancient biodiversity. But estimating patterns of biodiversity from the fossil record has been tricky because it's riddled with gaps and biases that limit our understanding. In the past two decades, however, new analytical techniques have allowed paleontologists to estimate what prehistoric biodiversity patterns might have looked like, even from data that might appear, superficially at least, a little patchy. These techniques have recently revealed what latitudinal diversity looked like over 200 million years ago, in the aftermath of the most devastating mass extinction event ever recorded. The end-Permian mass extinction, which took place 251 million years ago, resulted in the extinction of over 80% of species on Earth. The extinction event was caused by an unstable climate in the wake of widespread volcanic eruptions. At this time, and for the following 50 million years of the Triassic period, the continents were arranged into a single landmass known as Pangaea. The climate of the period was generally hotter and more arid than the present day, and vast deserts surrounded the equator. Instead of ice sheets, polar regions had temperate climates, like those we find at mid-latitudes today. Life in the oceans, meanwhile, was subjected not only to equatorial sea surface temperatures as high as 40°C but also to falling oxygen levels and ocean acidification. The period following the end-Permian mass extinction was one of recovery. A recent study found that a latitudinal diversity gradient similar to today's was present in the oceans for much of the Triassic (251–201 million years ago). Immediately following the mass extinction event, however, the researchers found a flat biodiversity gradient.
There was no peak in species biodiversity at any latitude, which they attributed to high extinction rates near the equator due to extreme warming and ocean anoxia, a state in which oxygen in ocean water is depleted. On land, the vertebrates that survived the mass extinction soon developed a bimodal latitudinal biodiversity gradient, with the highest peak occurring in low-latitude regions of the northern hemisphere and a second peak in mid-latitude regions of the southern hemisphere. This pattern is likely to have been driven by the extreme climatic conditions on Pangaea, including high temperatures and strongly seasonal rainfall associated with the formation of a megamonsoon. Later in the Triassic, on the approach to yet another mass extinction event, most land vertebrates, including early mammals and early dinosaurs, exhibited high diversity at mid-latitudes, both north and south of the equator. This pattern is similar to that recorded for land vertebrates during the Permian period, just before the mass extinction. One exception was the pseudosuchians, the group that consists of crocodilians and their fossil relatives. Interestingly, while the latitudinal biodiversity of other species shifted over the subsequent 200 million years, arriving at the equator in the present day, pseudosuchian biodiversity has remained highest at low latitudes throughout the group's entire evolutionary history. This is likely due to their physiology, specifically their tolerance of high temperatures. Reptiles are ectotherms, or cold-blooded organisms, that rely on their external environment to regulate their internal body temperature. Today, crocodiles and other reptiles are restricted to areas of the world with warmer, more stable temperatures, and the same would have been true of their fossil relatives. These insights into past mass extinction events are critical for understanding how Earth's current patchwork of biodiverse regions could change.
As global temperatures continue to rise, some studies have predicted that species will disperse towards the poles from equatorial regions, but if the pace of change is too rapid, they risk going extinct. Others suggest that global warming might lead to the climate becoming more similar across different latitudes, potentially producing a peak in biodiversity at mid-latitudes. There's already evidence that marine latitudinal biodiversity has become increasingly bimodal over the last 50 years. With a possible sixth mass extinction looming, or even already taking hold, a long-term perspective will be critical for understanding how to sustain Earth's biodiversity into the future.
We can define vertigo as a sensation of movement that causes people to feel that their environment is spinning. It is similar to the sensation we get when we spin around several times and then, when we stop, find it difficult to stay upright without staggering. This disorder causes imbalance, nausea, dizziness and a feeling of faintness, and it is related to an alteration of the vestibular system, which is located within the inner ear and whose function is to maintain balance. This alteration can be momentary or last hours or even days, depending on the degree of the disorder. Keep in mind that anyone can have vertigo, but it develops differently in different people. In children it tends to be spontaneous or short-lived, while in the elderly it tends to last longer, leading to greater difficulties and affecting the quality of life of those who suffer from it. Sometimes the disorder is accompanied by nystagmus, an involuntary spasm in one or both eyes that can be vertical, horizontal, or rotating. At FastlyHeal we are going to tell you about the causes, types, symptoms and existing treatments of this disorder so that you can get to know it better. Causes associated with vertigo The sense of balance depends on the correct functioning of the vestibular system, which connects the inner ear with the brainstem, along with the visual system. Vertigo is the result of the brain receiving incorrect messages. The cause of this disorder may be related to alterations in the ear, in the connection of the nerves with the brain, or in the brain itself. Next, we will name some of the causes: - Head trauma. In the event of trauma, some of the areas of the brain associated with balance may be impaired. - Viral or bacterial infections in the ear. An ear infection that is not treated properly can lead to vertigo since, as we have said before, the ear plays a fundamental part in balance, as it is connected to the brainstem through the vestibular system.
- Dizziness. This affects people who are sensitive to sudden or abrupt movements. - Some medications, such as aminoglycoside antibiotics, diuretics, or salicylates. - Meniere’s disease. It is a disorder of the inner ear that usually affects only one ear. It typically causes severe dizziness, sounds in the ear, and hearing loss that comes and goes, as well as a sensation of pressure in the ear. - Abnormal regulation of blood pressure. This cause is common in older people who need drugs for heart disease or hypertension and are at risk of fainting when standing up as a result of a drop in blood pressure. - Labyrinthitis. A hearing disorder involving irritation and inflammation of the inner ear. It leads to difficulty in focusing the eyes as a result of involuntary eye movements. - Neurological disorders. Conditions such as multiple sclerosis, tumors, or stroke. Types of vertigo The most common way to distinguish this disorder is by taking into account the area in which the disease that causes it occurs. With this in mind, we can differentiate between: - Peripheral vertigo. It is the most frequent type and derives from a condition of the inner ear, which controls balance, and the vestibular nerve. This type of vertigo can bring with it hearing loss and pressure in the ears. - Central vertigo. It is related to a problem in the brain itself, especially in the brainstem or the back part of the brain known as the cerebellum. It can be accompanied by double vision and a severe headache. Below we will explain the symptoms of this disorder to learn more about it. When it comes to peripheral vertigo: - The main symptom of vertigo is the sensation of spinning around oneself or that the environment revolves around us, which can cause dizziness and vomiting. - Another common symptom is related to sight: difficulty focusing the eyes on a specific point.
- Loss of balance. Staying on your feet can be a difficult task when you have vertigo, so the sufferer may need some additional support in order not to fall. - Ringing in the ears. When the affected area is the inner part of the ear, that is, when it is peripheral vertigo, it is common to experience this symptom. - Hearing loss. This consists of hearing loss in one or both ears. It can be caused by different factors, such as the accumulation of wax in the outer ear, damage to the bones located just behind the eardrum, fluid in the ear after an ear infection, or a hole in the eardrum, among others. When vertigo is central, we face problems of a different nature, in addition to those mentioned above: - Difficulty chewing food, bringing it to the back of the mouth, and making it go down the esophagus, which is responsible for transferring food to the stomach. - Double vision. - Problems moving the eyes. - Facial paralysis, caused by damage to the facial nerve, which carries signals from the brain to the muscles of the face, or by damage to the part of the brain that sends signals to those muscles. - Poor articulation of speech. - Limb weakness. As we can see, while the symptoms related to peripheral vertigo are the consequence of elements that are not directly linked to the brain, the symptoms of central vertigo are the consequence of damage directly related to the brain, impairing its functions. Existing treatments related to vertigo The treatment of vertigo will depend on the cause that has led to its development. Drugs are used in order to decrease uncompensated vestibular activity. They do not eliminate the cause of vertigo, but rather reduce the imbalance caused by vestibular dysfunction. They can be classified into two groups: - Modifiers of nerve transmission in the vestibular pathway. This group includes drugs with antihistamine or anticholinergic activity, or both.
- Drugs whose action targets the cause of vertigo, such as vasodilators, diuretics to reduce fluid pressure, and antibacterials to combat ear infection, among others. Any of these options should be accompanied by a low-salt diet, which is very effective. When none of the above options is effective, surgical intervention is necessary: - Vestibular neurectomy. The balance nerve is cut so that the patient retains hearing. - Labyrinthectomy, which involves loss of hearing through the removal of all the sensory receptors for balance. On the other hand, there is benign paroxysmal positional vertigo, which is a consequence of calcareous debris in one of the semicircular canals of the inner ear. In this case, the treatment consists of repositioning the material to eliminate the discomfort. In addition, in the following article we talk about the best home remedies for vertigo. This article is merely informative; at FastlyHeal.com we do not have the power to prescribe medical treatments or make any type of diagnosis. We invite you to see a doctor if you present any type of condition or discomfort.
Scientists have finally solved the puzzle of the mysterious Wallace Line that runs through Indonesia, providing an explanation for the uneven distribution of animal species on either side of this boundary. This invisible but impactful line, first mapped out by British naturalist Alfred Russel Wallace more than 160 years ago, has puzzled researchers ever since. But now, a new study sheds light on its origin and the factors that shaped it. The Wallace Line You've likely heard of famed naturalist Charles Darwin, but not a lot of people know that Alfred Russel Wallace, also a British naturalist, independently proposed a theory of evolution by natural selection around the same time as Darwin. He is best known, however, for something you might find quite intriguing: the Wallace Line. In the 19th century, while on an expedition, Wallace noted a surprising contrast in animal species on either side of an invisible boundary running between the Indonesian islands of Borneo and Sulawesi. To the west, the islands, including Borneo, Java, and Sumatra, are home to animals commonly found in Southeast Asia. However, when you move east past the line to islands like Sulawesi, New Guinea, and the Moluccas, the animals are more akin to species found in Australia. The Wallace Line delineates two distinct zones of animal and plant life. But the curious thing about this line is that it exists despite the geographical proximity of the islands. One might expect a gradual transition of species between areas so close, but that's not the case here. This clear division of wildlife has puzzled scientists for over a century. Now, a new study may have finally explained the conundrum: extreme climate change triggered by tectonic activity around 35 million years ago played a crucial role in creating the Wallace Line. Around that time, Australia drifted away from Antarctica and collided with Asia, causing significant changes in geography and also Earth's climate.
The continental collision birthed the volcanic islands of Indonesia while also opening up a deep ocean surrounding Antarctica. In turn, this led to the formation of the Antarctic Circumpolar Current, which dramatically cooled the climate. Stepping stones across Indonesia The findings were made by biologists at the Australian National University (ANU) and ETH Zurich in Switzerland, who ran a computer model that predicts how this ancient tectonic event affected the range and diversification of species. This model revealed that the changing climate affected species differently on both sides of the Wallace Line. "If you travel to Borneo, you won't see any marsupial mammals, but if you go to the neighboring island of Sulawesi, you will. Australia, on the other hand, lacks mammals typical of Asia, such as bears, tigers or rhinos," Dr Alex Skeels, from ANU, said. Although the global cooling caused by the merger of Australia and Asia unleashed a mass extinction event, the climate on the newly formed Indonesian islands was relatively welcoming for life: it was warm, wet, and tropical, much like today. "So Asian fauna were already well-adapted and comfortable with these conditions, so that helped them settle in Australia," Skeels said. "This was not the case for the Australian species. They had evolved in a cooler and increasingly drier climate over time and were therefore less successful in gaining a foothold on the tropical islands compared to the creatures migrating from Asia." The researchers hope that their computer model can help forecast how modern-day climate change will impact living species. By understanding how species adapted to historical climate changes, scientists can better predict which species may be more adept at adapting to new environments in the future. The Wallace Line serves as a demonstration of how geographical and geological factors can influence biodiversity. But it is not the only example.
Closer to the Wallace Line, you'll find two other lines named after the scientists who discovered them: Weber's Line and Lydekker's Line. Weber's Line runs east of the Wallace Line and marks a gradual transition from Asian to Australian species. Further east is Lydekker's Line, which borders the edge of the Australian continent. Beyond this point, the species are predominantly Australian. The Aïr and Ténéré Line is another fascinating biogeographical boundary. It runs through the Sahara Desert in Niger, separating the Western Saharan flora from the Eastern Saharan flora. Despite the harsh conditions, the regions on either side of this line boast different plant species adapted to their specific environments. As the climate shifts at an alarming pace, it has perhaps never been more important to study these lines. Although their boundaries may be invisible, their impact is very much real. The findings appeared in the journal Science.
Ethics is an essential component of nursing practice. It guides nursing professionals in delivering compassionate, high-quality care, and maintaining professional integrity. Nursing ethics is a branch of applied ethics that focuses on the moral values, principles, and decision-making processes specific to the nursing profession. This blog post will explore the principles of nursing ethics, ethical theories, decision-making processes, and how to navigate ethical dilemmas in nursing practice. Various ethical theories can inform nursing practice. Some of the most relevant theories include: Deontology: Deontological ethics focus on duties, rules, and principles. It argues that some actions are inherently right or wrong, regardless of their consequences. In nursing, deontology can emphasize adhering to professional guidelines and respecting patient autonomy. Utilitarianism: Utilitarianism focuses on the consequences of actions, seeking to maximize overall happiness and minimize suffering. In nursing, utilitarianism may involve weighing the benefits and harms of treatment options to determine the best course of action for a patient. Virtue Ethics: Virtue ethics emphasizes the development of moral character and virtues, such as compassion, honesty, and courage. In nursing, this theory can encourage nurses to cultivate virtuous qualities to provide excellent care. Feminist Ethics: Feminist ethics critiques traditional ethical theories for perpetuating gender-based biases and emphasizes the importance of relationships, care, and empathy in ethical decision-making. Ethics of Care: Similar to feminist ethics, the ethics of care emphasizes the importance of relationships, empathy, and the responsibility to care for others. In nursing, this approach can encourage a holistic approach to patient care. 
Principles of Nursing Ethics Several ethical principles underpin nursing practice: Autonomy: Respecting patients’ rights to make informed decisions about their healthcare, including the right to refuse treatment. Beneficence: Acting in the best interests of patients by promoting their well-being and striving to provide the best possible care. Nonmaleficence: “Do no harm” by avoiding actions that may cause harm or suffering to patients. Justice: Ensuring fair distribution of resources and providing equal treatment to all patients, regardless of race, gender, socioeconomic status, or other factors. Fidelity: Maintaining trust and loyalty by honoring commitments and promises made to patients and colleagues. Veracity: Telling the truth and providing accurate information to patients, even when it is difficult. Privacy and Confidentiality: Protecting patients’ personal information and respecting their privacy. Ethical Decision-Making Process in Nursing Nurses frequently encounter ethical dilemmas in their practice. A systematic decision-making process can help navigate these challenges: - Identifying Ethical Issues: Recognize situations that involve moral values or principles. - Gathering Information: Collect relevant facts and perspectives from patients, families, and colleagues. - Evaluating Options: Analyze potential actions and their consequences based on ethical principles and theories. - Making and Implementing Decisions: Choose the most ethically appropriate option and carry it out. - Reflecting on the Outcome: Assess the results of the decision and consider potential improvements for future ethical dilemmas. Common Ethical Dilemmas in Nursing Nurses may face various ethical dilemmas, including: End-of-Life Decisions: Balancing the patient’s autonomy, quality of life, and potential for recovery in decisions regarding life-sustaining treatments. Informed Consent: Ensuring patients have the necessary information to make informed decisions about their care. 
Resource Allocation: Distributing limited resources fairly among patients. Patient Confidentiality: Navigating situations where maintaining confidentiality may conflict with the well-being of the patient or others. Cultural Competence: Respecting diverse cultural beliefs and practices while providing appropriate care. The Role of Nursing Codes of Ethics Professional nursing organizations have established codes of ethics to guide nursing practice: International Council of Nurses (ICN) Code of Ethics: A global code that outlines ethical standards and responsibilities for nurses worldwide. American Nurses Association (ANA) Code of Ethics: A comprehensive framework for ethical nursing practice in the United States. Other National Codes of Ethics: Many countries have their own nursing codes of ethics, reflecting local cultural values and legal frameworks. Developing Ethical Competence in Nursing Nurses can enhance their ethical competence through: Education and Training: Incorporating ethics courses and training into nursing education and continuing professional development. Ethics Committees and Consultations: Participating in hospital ethics committees or seeking guidance from ethics consultants to address complex ethical dilemmas. Role Modeling and Mentorship: Learning from experienced nurses who demonstrate ethical practice and providing guidance to less experienced colleagues. Case Studies and Simulations Case studies and simulations can be valuable tools for developing ethical competence. By examining real-life scenarios and discussing potential solutions, nurses can apply ethical principles and theories to practice. These activities encourage critical thinking, problem-solving, and collaboration, all essential skills for navigating ethical dilemmas in the nursing profession. Reflective Practice Reflective practice involves examining one’s actions, thoughts, and emotions to gain a deeper understanding of ethical issues and decision-making processes.
Nurses can engage in reflective practice by journaling, participating in peer discussions, or seeking feedback from mentors and supervisors. This process can help nurses identify areas for improvement, recognize personal biases, and develop strategies for addressing ethical challenges. Ethics Committees and Consultations Participating in hospital ethics committees or seeking guidance from ethics consultants can help nurses build their ethical competence. These committees often review complex ethical cases, develop institutional policies, and provide education and support to healthcare professionals. By engaging with ethics committees, nurses can gain exposure to diverse perspectives and learn from the experiences of colleagues and experts. Role Modeling and Mentorship Learning from experienced nurses who demonstrate ethical practice is an invaluable way to develop ethical competence. Role models can provide guidance, share insights, and offer constructive feedback to help less experienced nurses navigate ethical dilemmas. Similarly, mentoring less experienced colleagues can reinforce one’s understanding of ethical principles and promote a culture of ethical practice within the nursing profession. Professional Development Workshops and Conferences Attending workshops, conferences, and seminars focused on nursing ethics can help nurses expand their knowledge and skills. These events often feature expert speakers, panel discussions, and interactive workshops, providing opportunities for networking, collaboration, and learning from diverse perspectives. Self-Directed Learning Nurses can take the initiative to engage in self-directed learning to enhance their ethical competence. This can involve reading books, articles, and research papers on nursing ethics, as well as engaging in online courses, webinars, and podcasts. Self-directed learning enables nurses to explore topics of interest and stay current with emerging ethical issues and best practices.
The Importance of Advocacy in Nursing Ethics Nursing advocacy involves standing up for patients’ rights and well-being, ensuring their needs are met, and promoting social justice. Advocacy is a critical aspect of ethical nursing practice, as it empowers nurses to address systemic issues and support vulnerable populations. Addressing Moral Distress in Nursing Moral distress occurs when nurses are unable to act according to their ethical beliefs due to external constraints, such as institutional policies or resource limitations. Addressing moral distress involves recognizing its signs, seeking support from colleagues and supervisors, and advocating for changes that enable ethical practice. Nursing ethics is a crucial aspect of professional nursing practice, as it guides nurses in delivering high-quality, compassionate care and navigating complex ethical dilemmas. By understanding ethical theories, principles, and decision-making processes, nurses can strengthen their ethical competence and uphold the highest standards of integrity in their practice.
Limestones in the south of the Island were formed in the Carboniferous period, around 330 million years ago. When the limestones were formed, the Isle of Man was positioned close to the equator, with much of the Island submerged beneath a shallow, tropical sea. This sea was inhabited by organisms such as corals, crinoids (sea lilies, a type of echinoderm), numerous shellfish, primitive sharks and algae. The algae would grow together as clumps of slime on the sea bed. The slime would accrete calcium carbonate and trap mud, building up a solid mound of limestone representing an early form of reef.
This free ESL lesson plan on books has been designed for adults and young adults at an intermediate (B1/B2) to advanced (C1/C2) level and should last around 45 to 60 minutes for one student. Books have the power to change the world; some even have the power to destroy the world. They can take you wherever your imagination will let them. But these days, it appears many people have given up on reading, instead preferring to wait for the movie or series to come out. Perhaps at some point in the future, people will stop reading altogether? In this ESL lesson plan on books, students will have the opportunity to discuss and express their opinions on issues such as their favourite books and authors, and their thoughts on reading. This lesson plan could also be used with your students to debate these issues for World Book Day, which takes place in April. For more lesson plans on international days and important holidays, see the calendar of world days to plan your classes for these special occasions. Before the English class, send the following article to the students and ask them to read it while making a list of any new vocabulary or phrases they find (explain any the students don’t understand in the class): The article lists what it considers to be the most influential books in history, including On the Origin of Species by Charles Darwin, The Complete Works of Shakespeare and 1984 by George Orwell. At the start of the class, hold a brief discussion about what the students thought about the article. Do they agree with the list? Can they think of any entries they disagree with? Which other books should be on the list? To save time in class for the conversation activities, the English teacher can ask the students to watch the video below and answer the listening questions in Section 3 of the lesson plan at home. There are intermediate listening questions and advanced listening questions so teachers can decide which would be more appropriate for their students.
Check the answers in the class. The video for this class is called “The world’s most mysterious book” by TED Ed which explores some of the theories around the Voynich Manuscript, which nobody seems to be able to decipher. The focus in the class is on conversation in order to help improve students’ fluency and confidence when speaking in English as well as boosting their vocabulary. This lesson opens with a short discussion about the article the students read before the class. Next, the students can give their opinion on the quote at the beginning of the lesson plan – what they think the quote means and if they agree with it. This is followed by an initial discussion on the topic including what the students like to read, the books they read as a child, and their experience reading books in English. After this, students will learn some vocabulary connected with books such as bookworm, page turner and e-book. This vocabulary has been chosen to boost the students’ knowledge of less common vocabulary that could be useful for preparing for English exams like IELTS or TOEFL. The vocabulary is accompanied by a cloze activity and a speaking activity to test the students’ comprehension of these words. If the students didn’t watch the video before the class, they can watch it after the vocabulary section and answer the listening questions. Before checking the answers, ask the students to give a brief summary of the video and what they thought about the content. Finally, there is a more in-depth conversation about books. In this speaking activity, students will talk about issues such as whether governments should continue to fund local libraries, the difference between the book and the movie, and whether or not we will stop reading books altogether in the future. After the class, students will write about their opinion of books. This could be a short paragraph or a longer piece of writing depending on what level the student is at. 
The writing activity is designed to allow students to practise and improve their grammar with the feedback from their teacher. For students who intend to take an international English exam such as IELTS or TOEFL, there is an alternative essay question to practise their essay-writing skills.
Recently, unmanageable wildfires have destroyed forests in Australia and the United States, costing billions of dollars. Does a similar fate await B.C.? Many foresters say yes. The Growing Wildfire Threat in B.C.: How to Sustain Ecosystems, Economies, and Society Canadian Silviculture Magazine Summer 2002 by B.A. Blackwell and R.W. Gray Over the past two decades, millions of hectares have been damaged by wildfire in the United States due to conditions similar to those now occurring in B.C. The U.S. fires have resulted in significant human and economic losses, and have cost taxpayers billions of dollars to suppress. The U.S. put in place a $1.2 billion annual fuels and forest health management program. Without similar preventative intervention, a similar fate may await B.C. Since 1994, the U.S. has seen 15 million hectares burned in wildfire, thousands of homes lost to fire in the wildland-urban interface, forty firefighters' lives lost, and $4.5 billion spent on the direct suppression and immediate rehabilitation of wildfires (United States General Accounting Office 1999). The escalation of wildfire activity has been defined as a crisis, and has been associated with a loss of ecosystem health and stability. Between 1988 and 1999 the National Forest Health Monitoring Team inventoried over 9.5 million hectares of forestland mortality caused by insects, disease, air pollution, and abiotic factors (United States Department of Agriculture National Forest Health Monitoring Team 2000). In addition to current attack levels, the Team has documented that another 24 million hectares nationwide is at risk of insect and disease attack. These elevated levels of insect and disease attack, and increased wildfire activity over the past two decades, have partially been attributed to the impacts of long-term fire suppression; this has been well documented in the literature (Mutch 1994, Society of American Foresters 2000). The U.S.
has embarked on an ambitious program of fuels and forest health management in response to what researchers believe is a "brief window of opportunity", spanning 15 to 30 years, for effective and aggressive action before uncontrollable, catastrophic wildfires become widespread (Covington et al. 1994). The U.S. government plans to treat 1.25 million hectares per year, on National Forests alone, with mechanical, manual, and prescribed fire treatments. Congress has appropriated $12 billion over the next 10 years to fund the planning and implementation of treatments. As of June 3, 2002, natural resource managers in the U.S. had prescribed burned 500,000 ha. Are these problems applicable to British Columbia? That is the question many fire managers and foresters are currently asking. In B.C., many ecosystems in the southern half of the province have more in common with ecosystems of the Pacific Northwest than with the rest of Canada. As in the U.S., the dry Interior forests of B.C. have been negatively influenced by the interruption of historic fires. In many of these ecosystems, fire suppression has resulted in excessive tree ingrowth into forest stands and encroachment into areas that were historically grasslands. Associated with the processes of ingrowth and encroachment is a growing accumulation of both surface and overstory fuels. Increased fuel loadings and changes in forest structure have resulted in a shift away from forests that were previously influenced by low-severity surface fire toward forests where high-severity stand-replacement fires are now the norm. This shift in fire severity has many negative ecological consequences, including increased nutrient losses, altered soil properties, destruction of below-ground flora and fauna, and an overall long-term loss of site productivity.
The ecological outcomes of high-severity fires are in great contrast to the historic low-severity surface fires, which typically resulted in a nutrient flush, a vigorous plant response, and limited net change in soil properties and site productivity. Associated with fire-suppression-related changes in forest structure is an increased incidence of insects and disease. Higher stocking levels have resulted in increased competition for moisture and nutrients, which has typically increased tree stress and hence susceptibility to attack. Over the past decade we have seen a dramatic acceleration in the attack levels of Douglas-fir bark beetles, Spruce Budworm, and Mountain Pine Beetle throughout British Columbia. In many stands, higher insect attack incidence can be attributed directly to changes in stand conditions associated with fire suppression in combination with successive mild winters. The current Mountain Pine Beetle epidemic in the central interior of the Province has been linked to global climate change, but there has been limited discussion of the ecological changes associated with a changing fire regime. It would appear that protecting pine forests from fire has shifted the successional pathway of these forests so that they are potentially more susceptible to the Mountain Pine Beetle. Other important considerations include changes in wildlife habitats and species distributions. The open condition of many of the dry Interior forests has been significantly altered by the changes in composition and structure outlined so far. This has impacted both the quality and quantity of available habitat for those species that depend on these types of forest and has caused a species shift, allowing species that are adapted to closed forests to expand into what were historically open forests. The current fuels and fire management dilemma that we face in B.C.
can be attributed to a lack of public awareness and our poor understanding of the ecological change associated with resource management practices over the past 50 years. Awareness has been heightened by the Auditor General's report on "Managing Interface Fire Risk" and by Provincial government initiatives to create a Wildland Fire Act; however, the scale and level of effort required to address the complexity of these problems is daunting. It is apparent that the management of forests must shift to an ecosystem-based approach that integrates an improved understanding of historic disturbance regimes, structure, function, and forest composition. In addition to an improved understanding, large-scale application of fuel treatments is required to reduce the current landscape-level risk of catastrophic fire in many parts of the Province. Land managers in B.C. must re-introduce prescribed fire as a viable fuel treatment alternative. The re-introduction of prescribed fire will not be easy and will require a significant shift in public, private sector, and government attitudes toward prescribed fire. In addition to developing an improved understanding of the problem and the appropriate treatments, new tools and applications are required to assess risk and prioritize treatment. Tools such as the Wildfire Threat Rating System (Hawkes and Beck 1997) and improved fuel and fire behavior modeling are required. The cost of planning, designing, and implementing these treatments will be significant, but as demonstrated in the U.S., the cost of ignoring the problem will be significantly greater through the loss of human life, public and private property, and the opportunity cost of resources foregone. Covington, W.W., Everett, R.L., Steele, R., Irwin, L.L., Daer, T.A., and Auclair, A.N.D. 1994. Historical and anticipated changes in forest ecosystems of the Inland West of the United States. Journal of Sustainable Forestry 2(1/2):13-63. Hawkes, B.C., Beck, J., and Sahle, W. 1997.
A wildfire threat rating system for the McGregor Model Forest. Final report submitted to the McGregor Model Forest Association, Canadian Forest Service, Project 3015, Victoria, B.C. Mutch, R.W. 1994. Fighting fire with prescribed fire: a return to ecosystem health. Journal of Forestry 92:31-33. Society of American Foresters. 2000. United States General Accounting Office. 1999. Western National Forests: a cohesive strategy is needed to address catastrophic wildfire threats. Report to the Subcommittee on Forests and Forest Health, Committee on Resources, House of Representatives. Washington, D.C.
OBJECTIVES: At the end of this laboratory, you should be able to: 1. Identify the types of bone and the components of an osteon. 2. Identify osteoblasts, osteocytes and osteoclasts, and describe their relationships to each other and their role in bone remodeling. 3. Thoroughly describe the way in which bone develops and grows, including intramembranous versus endochondral ossification. 4. Understand how this mineralized tissue is vascularized. SLIDES FOR THIS LABORATORY: 11, 69, 70, 74, and Supplemental Slide 109 Bone is a specialized connective tissue with a calcified extracellular matrix (bone matrix) and 3 major cell types: the osteoblast, osteocyte, and osteoclast. The first type of bone formed developmentally is primary or woven bone (immature). This immature bone is later replaced by secondary or lamellar bone (mature). Secondary bone is further classified into two types: trabecular bone (also called cancellous or spongy bone) and compact bone (also called dense or cortical bone). Slide 70 Developing bone. Primary bone (or woven bone) is characterized by an irregular arrangement of collagen fibers, large cell number, and reduced mineral content. Note that the primary bone is deposited on hyaline cartilage. Primary bone is acidophilic while the hyaline cartilage is basophilic. Slide 69 Bone, femur. The trabecular bone present in this slide is found mostly within the epiphysis, with some in the bone marrow cavity. Osteoblasts are located immediately above the osteoid (newly formed bone matrix). Osteocytes are found within lacunae. Giant multinucleated osteoclasts, which break down bone, are occasionally found in lacunae termed Howship's lacunae. These are readily found in the ossification zone of the growth plate. The compact bone in this slide surrounds the marrow cavity and spongy bone. Locate the periosteum (external) and endosteum (internal) linings of the bone. Note that the separation of these linings is an artifact of slide preparation.
Slide 74 Bone, ground preparation. Observe the Haversian systems (or osteons) of compact bone in this slide. The lamellae are concentrically located around a central canal (haversian canal) which contained blood vessels, nerves, and loose connective tissue. Volkmann's canals may be seen connecting haversian canals. The other lamellae of compact bone are organized into inner circumferential, outer circumferential, and interstitial lamellae. Only interstitial lamellae are seen in this slide. Also in this section, note the empty lacunae and canaliculi that housed the osteocyte and its cell processes, respectively. Slide 11 Nasal mucosa. Intramembranous ossification is visible in the nasal conchae on this slide. Bone arises directly within mesenchymal condensations. This process can be identified by the appearance of bone spicules (islands of bone) among mesenchymal cells. Look for the eosinophilic bone matrix. The surrounding mesenchymal cells are stellate in appearance. Slide 69 Bone, femur. Endochondral bone formation is represented in this slide. Bone arises by replacement of a small hyaline cartilage model. Locate the epiphyseal plate; it is the site of bone elongation. First, find the hyaline cartilage and move toward the bone marrow. Identify the 5 overlapping zones: 1. Zone of Reserve or Resting Cartilage - young small cells evenly distributed; appears as typical hyaline cartilage. 2. Zone of Cell Proliferation - chondrocytes divide, forming parallel columns. 3. Zone of Cell Maturation and Hypertrophy - cells produce collagen and ground substance. 4. Zone of Cartilage Calcification - septa of cartilage matrix become calcified; cells die. 5. Zone of Ossification - osteoblasts invade cavities and deposit bone matrix. Supplemental Slide 109 Developing bone. Another example of endochondral bone formation.
How Learning a Musical Instrument Boosts Memory in Kids

“Music is an art that goes well beyond science. Proof can be found in the huge amount of studies that have been carried out throughout the world based on music therapy and the important results achieved.” - Andrea Bocelli

Boosting memory is big business. Big names such as Cogmed, Lumosity, and BrainHQ run multimillion-dollar businesses. But the question is: are their offerings really benefiting the brain? Many researchers do not give their nod to this. Researchers at the University of Illinois have found little evidence that these brain games improve anything beyond the specific tasks they train. Indeed, Lumosity's maker was fined $2 million for false claims.

So if these brain-training exercises do not work, what does? How can one keep the brain active and sharp? Learning a musical instrument is the answer. A study published in October 2020 found that kids with musical training show greater activation in the brain's cognitive-control areas and perform better on visual memory and auditory tasks than those without musical training.

“Learning and performing a musical instrument can affect every part of a child’s development. Multiple studies confirm that engaging in private or group music instruction can promote improved cognitive skills and academic performance,” says Bree Gordon, a board-certified music therapist (MT-BC) and the director of Creative Arts Therapies of the Palm Beaches in Florida.

Music therapy is also said to ease the mental challenges of hospitalization, including anxiety, stress, feeling overwhelmed, social withdrawal, and depression.

Table of Contents
- 1. What the Research Found?
- 2. Start Learning a Musical Instrument
- 3. How Playing Musical Instruments Boosts Memory in Children
- 4. How Online Music Composition Lessons Help Kids Improve Memory
- 5.
Wrapping Up

1. What the Research Found?

Children between the ages of 10 and 13 participated in the study; half had received music training and half had not. The musically trained kids had played an instrument for at least two years, regularly played in an ensemble or orchestra, and practiced around two hours a week. The children with no musical training confirmed that they could not read or write music and had no musical experience outside of conventional school teaching.

The researchers hypothesized that playing a musical instrument can develop visual and auditory attention and working memory, and that the neural networks supporting these skills would be enhanced in musically trained students. To test this, they used functional magnetic resonance imaging (fMRI) to follow neural activity while the children completed an encoding phase and memory-retrieval tasks. In the encoding phase, students were presented with visual and auditory stimuli, a melody and an evolving abstract figure, and were instructed to attend to the visuals only or to both the auditory and visual stimuli at the same time. The memory task that followed revealed where their attention had been directed.

The researchers found that the musically trained students performed better on the auditory and visual retrieval tasks. Furthermore, they showed more activation in the brain's cognitive-control regions than the control group.

“When children are making music and having musical experiences the auditory cortex of their brain is being stimulated. This is similar to working out a muscle and over time it gets bigger and stronger,” confirms Erin Layton, a Georgia-based music tutor with 16 years of experience in music technology and middle school chorus.

2.
Start Learning a Musical Instrument

When an online music tutor first connects with a student, they get to know the child personally: their particular abilities, needs, behaviors, interests, and so on. The tutor then makes an individualized plan to support the child's progress through writing songs, playing instruments, singing, listening to music, and more. Along the way, the private music tutor tracks and examines the student's progress, making changes and updates to the goals as required.

Learning to play an instrument not only has a positive influence on a kid's overall mental and physical well-being; it brings other important benefits as well. To learn more, keep reading; maybe you will pick out a flute or violin for your child to play soon after.

3. How Playing Musical Instruments Boosts Memory in Children

Neuroscience research has confirmed that music can improve brain function in kids. Musical activities such as listening to music, singing, or playing an instrument stimulate the brain, and this workout refines brain structure through the formation of new neural connections.

Music also aids the development of mathematical skills. Through music, kids learn problem-solving techniques, pattern recognition, and basic fractions. Kids who learn music additionally boost their spatial intelligence and the capacity to form mental images of an object, skills that are critical for studying advanced maths.

While playing a musical instrument, the brain works at high speed, transforming the music being read into the body movements of playing. The hand-eye coordination of people who play musical instruments is also better than that of people who do not.

Several instruments demand regular repair and maintenance.
This could be anything from cleaning to tuning to oiling. Keeping up with basic instrument maintenance makes students responsible; when they are accountable for something, they learn to deal with it on their own without the need for parental reminders.

Research has shown that youngsters who take one-to-one music lessons improve in speech development and read more proficiently. Learning to play a musical instrument develops the left side of the brain, which is associated with reasoning and language, teaches rhyme and rhythm, and supports sound recognition. Furthermore, songs help kids remember information.

The exceptional advantage of learning music, though, is that it lets students express themselves. By learning to express complicated feelings such as anxiety through music, kids discover an outlet that improves them overall.

Enhances Listening Skills

Learning to play a musical instrument demands that kids listen carefully to many things. They do not just listen to guidance from music therapists or tutors; they have to listen for speed, pitch, and rhythm. This level of concentration enhances skills in both life and music.

Creating music with other people, such as in a choir or band, enhances a kid's emotional and social skills. They learn to work collaboratively and develop a sense of compassion for others. Studies have found that as kids progress from easy rhythms to group performances, they become able to tune into other people's sentiments.

Discipline and achievement

Learning to play music teaches kids to work toward short-term goals, practice self-discipline, and build a routine. Setting a fixed time for practice can foster patience and commitment.
Understanding music brings a sense of satisfaction and accomplishment, and teaches kids the importance of self-discipline.

Attention, concentration, and memory

Researchers have shown that musically trained kids have better memory skills, helping them retain information even while their minds are busy with other things, an essential element of reading comprehension and mental arithmetic. Studying music also requires a high level of concentration, training kids to fix their attention for sustained periods.

4. How Online Music Composition Lessons Help Kids Improve Memory

Piano Learning Online

Learning to play the piano has been popular for many years. It improves motor control, memory, and listening, and the advantages reach beyond the online piano lessons themselves into daily life: they affect coordination, alertness, attention span, language skills, and more.

For many learners, getting connected with an experienced and skilled piano tutor is the hardest part of learning piano. With the growth and popularity of online teaching, however, hiring a piano tutor has become easy. Online learning platforms such as Easylore can help your kid learn piano with a holistic approach.

Online Guitar Lessons

Basic guitar lessons help manage a kid's stress and boost memory. In addition, they enhance motor skills and communication, helping kids manage life's challenges more easily. Early brain-scan research has demonstrated that playing the guitar, among other musical instruments, not only increases the volume of grey matter in different brain regions but also strengthens the long-range connections among them. With online tutoring, kids can take beginner guitar lessons in classical, rock, blues, R&B, jazz, pop, and more, studying the fundamentals of guitar at their preferred location and on their preferred schedule.

Online Clarinet Lessons

Choose the clarinet to smile more.
This is not just because it sounds pleasant; producing the right tone requires shaping your mouth into the position of a smile. And smiling really is good for your health: it releases endorphins, relieves pain, boosts the immune system, reduces blood pressure, and makes you happier. Learning with a clarinet instructor can also help you stay healthy and reinforce the core muscles. With online clarinet classes, you can learn the basics and various clarinet styles. Connect your kid with an edtech platform so they can study the fundamentals of the course or more advanced lessons. Start your musical journey with the clarinet now.

Online Trumpet Classes

Neuroscience has revealed that learning the trumpet from a trumpet teacher online is great for your brain. Kids who learn to play trumpet show better concentration, greater brain activation, stronger memory recall associated with executive functions, improved auditory encoding and attention control, greater creativity, higher resilience, improved reading, and better quality of life. Kids at any level, moderate or advanced, can learn trumpet from experienced and skilled tutors. Edtech platforms that follow a holistic approach to education link learners with the tutors of their choice, and classes can be carried out on a convenient schedule.

Ukulele Classes Online

When it comes to boosting memory in kids, the ukulele is no exception, and hiring a private tutor to learn it is worth it. Playing the ukulele helps kids enhance their concentration and stick with a creative task or process, and that enhanced focus gives rise to more brainstorming and stronger neurological pathways. Learning to play the ukulele from a ukulele teacher online is said to have a great influence on a kid's memory. These one-to-one classes encourage the mind to learn quickly and foster a positive attitude in kids, creating compassionate behavior towards others.
Therefore, if you are searching for a ukulele tutor for your child, connect them with an experienced and effective one. All of this is possible when you sign them up with a leading online learning platform.

5. Wrapping Up

Music therapy brings an abundance of benefits, including creativity and abstract reasoning. Research has revealed that children who learn to play music develop great musical skills, sharp brains, and exceptional problem-solving skills. Music employs both sides of the brain and challenges kids to focus on different tasks simultaneously. With the advent of technology, many online learning platforms have come to the forefront. They offer Music Theory Lessons Singapore according to the learner's age and ability. Sign up with one and make music a part of your kid's life!
Your Digestive System and How It Works

On this page:
- Why is digestion important?
- How is food digested?
- How is the digestive process controlled?
- For More Information

[Figure: The digestive system.]

The digestive system is made up of the digestive tract—a series of hollow organs joined in a long, twisting tube from the mouth to the anus—and other organs that help the body break down and absorb food (see figure). Organs that make up the digestive tract are the mouth, esophagus, stomach, small intestine, large intestine—also called the colon—rectum, and anus. Inside these hollow organs is a lining called the mucosa. In the mouth, stomach, and small intestine, the mucosa contains tiny glands that produce juices to help digest food. The digestive tract also contains a layer of smooth muscle that helps break down food and move it along the tract. Two “solid” digestive organs, the liver and the pancreas, produce digestive juices that reach the intestine through small tubes called ducts. The gallbladder stores the liver’s digestive juices until they are needed in the intestine. Parts of the nervous and circulatory systems also play major roles in the digestive system.

Why is digestion important?

When you eat foods—such as bread, meat, and vegetables—they are not in a form that the body can use as nourishment. Food and drink must be changed into smaller molecules of nutrients before they can be absorbed into the blood and carried to cells throughout the body. Digestion is the process by which food and drink are broken down into their smallest parts so the body can use them to build and nourish cells and to provide energy.

How is food digested?

Digestion involves mixing food with digestive juices, moving it through the digestive tract, and breaking down large molecules of food into smaller molecules. Digestion begins in the mouth, when you chew and swallow, and is completed in the small intestine.
Movement of Food Through the System The large, hollow organs of the digestive tract contain a layer of muscle that enables their walls to move. The movement of organ walls can propel food and liquid through the system and also can mix the contents within each organ. Food moves from one organ to the next through muscle action called peristalsis. Peristalsis looks like an ocean wave traveling through the muscle. The muscle of the organ contracts to create a narrowing and then propels the narrowed portion slowly down the length of the organ. These waves of narrowing push the food and fluid in front of them through each hollow organ. The first major muscle movement occurs when food or liquid is swallowed. Although you are able to start swallowing by choice, once the swallow begins, it becomes involuntary and proceeds under the control of the nerves. Swallowed food is pushed into the esophagus, which connects the throat above with the stomach below. At the junction of the esophagus and stomach, there is a ringlike muscle, called the lower esophageal sphincter, closing the passage between the two organs. As food approaches the closed sphincter, the sphincter relaxes and allows the food to pass through to the stomach. The stomach has three mechanical tasks. First, it stores the swallowed food and liquid. To do this, the muscle of the upper part of the stomach relaxes to accept large volumes of swallowed material. The second job is to mix up the food, liquid, and digestive juice produced by the stomach. The lower part of the stomach mixes these materials by its muscle action. The third task of the stomach is to empty its contents slowly into the small intestine. Several factors affect emptying of the stomach, including the kind of food and the degree of muscle action of the emptying stomach and the small intestine. Carbohydrates, for example, spend the least amount of time in the stomach, while protein stays in the stomach longer, and fats the longest. 
As the food dissolves into the juices from the pancreas, liver, and intestine, the contents of the intestine are mixed and pushed forward to allow further digestion. Finally, the digested nutrients are absorbed through the intestinal walls and transported throughout the body. The waste products of this process include undigested parts of the food, known as fiber, and older cells that have been shed from the mucosa. These materials are pushed into the colon, where they remain until the feces are expelled by a bowel movement. Production of Digestive Juices The digestive glands that act first are in the mouth—the salivary glands. Saliva produced by these glands contains an enzyme that begins to digest the starch from food into smaller molecules. An enzyme is a substance that speeds up chemical reactions in the body. The next set of digestive glands is in the stomach lining. They produce stomach acid and an enzyme that digests protein. A thick mucus layer coats the mucosa and helps keep the acidic digestive juice from dissolving the tissue of the stomach itself. In most people, the stomach mucosa is able to resist the juice, although food and other tissues of the body cannot. After the stomach empties the food and juice mixture into the small intestine, the juices of two other digestive organs mix with the food. One of these organs, the pancreas, produces a juice that contains a wide array of enzymes to break down the carbohydrate, fat, and protein in food. Other enzymes that are active in the process come from glands in the wall of the intestine. The second organ, the liver, produces yet another digestive juice—bile. Bile is stored between meals in the gallbladder. At mealtime, it is squeezed out of the gallbladder, through the bile ducts, and into the intestine to mix with the fat in food. The bile acids dissolve fat into the watery contents of the intestine, much like detergents that dissolve grease from a frying pan. 
After fat is dissolved, it is digested by enzymes from the pancreas and the lining of the intestine. Absorption and Transport of Nutrients Most digested molecules of food, as well as water and minerals, are absorbed through the small intestine. The mucosa of the small intestine contains many folds that are covered with tiny fingerlike projections called villi. In turn, the villi are covered with microscopic projections called microvilli. These structures create a vast surface area through which nutrients can be absorbed. Specialized cells allow absorbed materials to cross the mucosa into the blood, where they are carried off in the bloodstream to other parts of the body for storage or further chemical change. This part of the process varies with different types of nutrients. Carbohydrates. The Dietary Guidelines for Americans 2005 recommend that 45 to 65 percent of total daily calories be from carbohydrates. Foods rich in carbohydrates include bread, potatoes, dried peas and beans, rice, pasta, fruits, and vegetables. Many of these foods contain both starch and fiber. The digestible carbohydrates—starch and sugar—are broken into simpler molecules by enzymes in the saliva, in juice produced by the pancreas, and in the lining of the small intestine. Starch is digested in two steps. First, an enzyme in the saliva and pancreatic juice breaks the starch into molecules called maltose. Then an enzyme in the lining of the small intestine splits the maltose into glucose molecules that can be absorbed into the blood. Glucose is carried through the bloodstream to the liver, where it is stored or used to provide energy for the work of the body. Sugars are digested in one step. An enzyme in the lining of the small intestine digests sucrose, also known as table sugar, into glucose and fructose, which are absorbed through the intestine into the blood. 
Milk contains another type of sugar, lactose, which is changed into absorbable molecules by another enzyme in the intestinal lining. Fiber is undigestible and moves through the digestive tract without being broken down by enzymes. Many foods contain both soluble and insoluble fiber. Soluble fiber dissolves easily in water and takes on a soft, gel-like texture in the intestines. Insoluble fiber, on the other hand, passes essentially unchanged through the intestines. Protein. Foods such as meat, eggs, and beans consist of giant molecules of protein that must be digested by enzymes before they can be used to build and repair body tissues. An enzyme in the juice of the stomach starts the digestion of swallowed protein. Then in the small intestine, several enzymes from the pancreatic juice and the lining of the intestine complete the breakdown of huge protein molecules into small molecules called amino acids. These small molecules can be absorbed through the small intestine into the blood and then be carried to all parts of the body to build the walls and other parts of cells. Fats. Fat molecules are a rich source of energy for the body. The first step in digestion of a fat such as butter is to dissolve it into the watery content of the intestine. The bile acids produced by the liver dissolve fat into tiny droplets and allow pancreatic and intestinal enzymes to break the large fat molecules into smaller ones. Some of these small molecules are fatty acids and cholesterol. The bile acids combine with the fatty acids and cholesterol and help these molecules move into the cells of the mucosa. In these cells the small molecules are formed back into large ones, most of which pass into vessels called lymphatics near the intestine. These small vessels carry the reformed fat to the veins of the chest, and the blood carries the fat to storage depots in different parts of the body. Vitamins. Another vital part of food that is absorbed through the small intestine is vitamins.
The two types of vitamins are classified by the fluid in which they can be dissolved: water-soluble vitamins (all the B vitamins and vitamin C) and fat-soluble vitamins (vitamins A, D, E, and K). Fat-soluble vitamins are stored in the liver and fatty tissue of the body, whereas water-soluble vitamins are not easily stored and excess amounts are flushed out in the urine. Water and salt. Most of the material absorbed through the small intestine is water in which salt is dissolved. The salt and water come from the food and liquid you swallow and the juices secreted by the many digestive glands. How is the digestive process controlled? The major hormones that control the functions of the digestive system are produced and released by cells in the mucosa of the stomach and small intestine. These hormones are released into the blood of the digestive tract, travel back to the heart and through the arteries, and return to the digestive system where they stimulate digestive juices and cause organ movement. The main hormones that control digestion are gastrin, secretin, and cholecystokinin (CCK): - Gastrin causes the stomach to produce an acid for dissolving and digesting some foods. Gastrin is also necessary for normal cell growth in the lining of the stomach, small intestine, and colon. - Secretin causes the pancreas to send out a digestive juice that is rich in bicarbonate. The bicarbonate helps neutralize the acidic stomach contents as they enter the small intestine. Secretin also stimulates the stomach to produce pepsin, an enzyme that digests protein, and stimulates the liver to produce bile. - CCK causes the pancreas to produce the enzymes of pancreatic juice, and causes the gallbladder to empty. It also promotes normal cell growth of the pancreas. Additional hormones in the digestive system regulate appetite: - Ghrelin is produced in the stomach and upper intestine in the absence of food in the digestive system and stimulates appetite. 
- Peptide YY is produced in the digestive tract in response to a meal in the system and inhibits appetite. Both of these hormones work on the brain to help regulate the intake of food for energy. Researchers are studying other hormones that may play a part in inhibiting appetite, including glucagon-like peptide-1 (GLP-1), oxyntomodulin, and pancreatic polypeptide. Two types of nerves help control the action of the digestive system. Extrinsic, or outside, nerves come to the digestive organs from the brain or the spinal cord. They release two chemicals, acetylcholine and adrenaline. Acetylcholine causes the muscle layer of the digestive organs to squeeze with more force and increase the “push” of food and juice through the digestive tract. It also causes the stomach and pancreas to produce more digestive juice. Adrenaline has the opposite effect. It relaxes the muscle of the stomach and intestine and decreases the flow of blood to these organs, slowing or stopping digestion. The intrinsic, or inside, nerves make up a very dense network embedded in the walls of the esophagus, stomach, small intestine, and colon. The intrinsic nerves are triggered to act when the walls of the hollow organs are stretched by food. They release many different substances that speed up or delay the movement of food and the production of juices by the digestive organs. Together, nerves, hormones, the blood, and the organs of the digestive system conduct the complex tasks of digesting and absorbing nutrients from the foods and liquids you consume each day.
Teaching Students About Atalanta In an age where superheroes dominate popular culture, it’s essential to remind our students of the extraordinary heroes of the past. One such figure is Atalanta, a legendary Greek heroine who defied traditional gender roles and overcame formidable challenges. By incorporating her story into our teaching, we can inspire and empower students to question societal expectations and pursue their own path. Atalanta was a figure in Greek mythology renowned for her exceptional skills in hunting and athletics. Born to King Iasus of Arcadia, she was abandoned at birth due to her father’s disappointment with having a daughter. Nevertheless, Atalanta survived thanks to divine intervention and grew up in the wilderness raised by a bear. Despite her hardships, Atalanta emerged as a skilled huntress and combatant. When she reached adulthood, she joined the hunt for the dreaded Calydonian Boar and eventually played a vital role in slaying the beast. Her determination also extended to her love life: unwilling to settle down without a fair challenge, she demanded prospective suitors beat her in a footrace. 1. Begin with a story circle: Introduce Atalanta’s tale through storytelling or reading aloud from ancient texts such as Ovid’s Metamorphoses or Apollodorus’ Bibliotheca. Encourage students to actively listen, ask questions, and discuss their insights after hearing the story. 2. Encourage critical thinking: Ask students to identify key themes within Atalanta’s narrative that are still relevant today. Examples include gender roles, parental expectations, and individuality versus conformity. 3. Explore visual arts: Have students create illustrations depicting scenes from Atalanta’s story or research ancient art that includes portrayals of this fascinating heroine. 4. Compare and contrast: Challenge students to compare Atalanta’s tale with other myths. What similarities and differences can they discover among different heroes and heroines? 5. 
Modern-day parallels: Discuss the significance of using Atalanta as a role model in today’s world, emphasizing the importance of acknowledging our own strengths and seizing opportunities. 6. Analyze narratives: Invite students to create their own stories featuring Atalanta, exploring how her legend could have evolved if she had been born in contemporary times. The story of Atalanta offers valuable lessons on resilience, self-reliance, and defiance of societal expectations that resonate strongly with students today. By incorporating her inspiring narrative into your teaching, you can provide them with an empowering role model from ancient history who embodies the values of courage, conviction, and perseverance in the face of adversity.
About This Species

Tench are native to Europe and West Asia. They have been invasive in the United States since the early 1800s and have spread to British Columbia from Washington through the Columbia River. They are currently found only in the Interior of BC, in Osoyoos, Skaha, and Okanagan Lakes.

Tench can survive low-oxygen conditions and a wide water temperature range of 0-24 °C. They reproduce when water temperatures are between 10-16 °C. This makes them especially adapted to living in many BC waterways. They prefer slow-moving water with lots of vegetation. During the winter, they bury themselves in mud at the bottom of ponds and lakes.

Tench are omnivores and decrease water clarity through their feeding activity. This makes them a threat to many aquatic species: they compete for food and habitat, and they reduce the amount of sunlight reaching aquatic plants, slowing or preventing their growth.

How to Identify

Tench are a large, robust fish (20-84 cm long) with small scales that are dark green near the dorsal fin and fade to a light yellow near the belly. The fins are darker than the body and are rounded, except the tail fin, which is flat. They have a dark orange iris and a single pair of barbels (whiskers) at the edge of their mouth. Some Tench have been artificially bred to resemble Goldfish, with a light gold or red colour and black or red spots. These may be sold as stock fish for ponds and aquaria.

Report an Invasive Species

Use the app: help us track and defeat invasive species. Report through this website: use our form to tell us what you're seeing and where. Prevention is the best approach. See the Aquariums & Water Gardens Factsheet for more. If you need advice about invasive species on your property or you are concerned about reported invasives in your local area, contact your local municipality or regional invasive species organization. It is illegal to possess, breed, release, sell, or transport live Tench in BC.
The use of live finfish as bait is strictly prohibited in BC.

Don't Let It Loose

Learn about best practices. Invasive species are plants, animals, or other organisms that are not native to BC and have serious impacts on our environment, economy, and society. Never release your plants and animals into the wild or dump aquariums or water garden debris into rivers, streams, lakes, or storm sewers!
Our parathyroid glands are four tiny glands that lie in our neck, just to the sides of our thyroid gland. When normal, they are the size of a grain of rice or a small flat bean. These glands control calcium balance in our bodies. They do this by producing a hormone named parathyroid hormone (PTH). PTH acts on our bones, kidneys, and gut to keep the right amount of calcium in the right places. When one or more of these glands become abnormal, they produce too much of this hormone (PTH). This can cause our bones to be weak from calcium loss. This can also cause kidney stones and decreased kidney function. Often, a person will feel extremely tired and experience memory problems, decreased attention, or “brain fog.” Your calcium level is the simplest way to screen for parathyroid disease. Routine bloodwork often includes checking the level of calcium in our bloodstream. People with parathyroid disease often have high or high-normal calcium levels. The step after that is to have your PTH level checked.
Change Collision Shapes

A collision shape is the surface area around an object that gets affected when two objects from different collision groups collide. For example, even though it looks like the ball in the image on the left can clearly make it over the cone obstacle, it will actually collide with it. If you click the Collision Shape Editor button in the scene, you will see in the image on the right that the cubic collision shape of the ball will graze the top of the conic collision shape of the obstacle. If not, see Viewing Collision Shapes.

Here’s what you need to know about collision shapes:
- A collision shape may have the same physical shape as the object or a different one; for example, a cone can have a cylindrical collision shape.
- You can expand a collision shape in size, as if to add padding to the object, typically to increase the collision surface of the object.
- You can also move a collision shape away from the object.

Thus, when two assets collide, for example, when a character runs into an obstacle, it is their collision shapes that get affected, not the objects themselves.

To change the collision shape of an asset:
- On the Mind Map, double-click the World node where you want to change the collision shape of an asset. The Scene Editor is displayed.
- On the toolbar, click the Collision Shape Editor button.
- Select the asset in the scene.
- In the Options panel on the right, in the Collision Shape field, select the shape you need.
- Mesh—A sophisticated shape, with outer and inner vertices created to reflect the shape details, such as a hole in a torus, for example, which would let objects go through it during a collision.
- Hull—A “solid” shape that reflects only the outer frame of the selected shape, without details such as a hole in a torus. No objects can go through a Hull collision shape.
- If you select Mesh or Hull, in the Collision Mesh field that appears below, click the Edit icon, select the appropriate shape from the Mesh Manager, and click Save.
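The "padding" and "offset" ideas above can be sketched with simple axis-aligned boxes. This is a hypothetical illustration of the geometry only, not the editor's actual API or shape types:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """A stand-in collision shape: center (x, y) plus half-width/half-height."""
    x: float
    y: float
    hw: float
    hh: float

def padded(box, pad):
    """Expand a collision shape, like adding padding around the object."""
    return Box(box.x, box.y, box.hw + pad, box.hh + pad)

def offset(box, dx, dy):
    """Move a collision shape away from the object it belongs to."""
    return Box(box.x + dx, box.y + dy, box.hw, box.hh)

def collides(a, b):
    """Two axis-aligned shapes collide when they overlap on both axes."""
    return abs(a.x - b.x) <= a.hw + b.hw and abs(a.y - b.y) <= a.hh + b.hh

ball = Box(0.0, 5.0, 1.0, 1.0)
cone = Box(0.0, 2.0, 1.0, 1.5)

# The unpadded shapes clear each other, but a padded ball shape
# grazes the cone, just like the grazing described above.
hit = collides(padded(ball, 0.5), cone)
```

The same check explains why a visually clear jump can still register a hit: the test runs on the (possibly padded or offset) collision shapes, never on the rendered meshes.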
This package will lead your students through the inquiry process with a specific focus on child labour. PLEASE NOTE: I am aware that the QR codes on page six are no longer working (sad, because the site was an amazing resource). I do have a few videos that can be used but am working on finding the…

Child labour occurs when children are paid to do work. Although children have always had to work in some way, child labour started to become a major problem when children began working in factories and mines in England during the Industrial Revolution of the 18th and 19th centuries. Contents: 9

- This is a series of lesson activities on children during the Industrial Revolution. The zip folder includes an online article worksheet on Britain's child labour during the Industrial Revolution by Annabel Venning, including review questions, and Children during the Industrial Revolution images wo…
- This kit includes everything you need not only to prepare students for DRA and reading testing but also to make use of comprehension, summary, inference, and writing strategies. Students will use the included non-fiction text, complete with text features, to help them answer the attached quest…
- This is a reading passage with many text features that will help students prepare for comprehension and reading tests. Students can become engaged in the story of the newsies and how they were able to form a labour union.
- First, students explore the meaning and examples of child exploitation. Then it's time to take action and make a difference in their own way. Some things you could do are: make a role play and present it to the year level; make a poster for the classroom and other selected locations at school; m…
- This reading comprehension worksheet is suitable for higher elementary to proficient ESL learners or native English speakers. The text explores the deep-rooted problem of modern child labour. After carefully reading the text, students are required to complete some comprehension exercises including…
- Students are issued with a news article that discusses how young, poor children are forced to work in appalling, dangerous, and illegal conditions to make footballs for the Australian market. Students read through the article and complete a series of activities in groups, including discussion of a given scenario.
- Welcome to an inquiry resource that has been refined through much classroom field work. With this inquiry model, students experience the excitement of taking more agency over their own learning, helping students to develop a 'relationship with learning' that: a) connects innate curiosity with learnin…
- A PowerPoint presentation with two different worksheets about the rules of the first factory system that appeared during the Industrial Revolution and about the working conditions of the children working in the factories. Lesson plan and sources are included. Useful pieces of information can also be acqu…
- Children can use this template to compare and contrast working conditions in their country (answer sheet based on UK 2019) with the working conditions for children in Victorian times. Also included are job description templates for Victorian children working: as chimney sweeps; in coal mines; in f…
- Each Curiosity Lab is designed for: a) extremely high student engagement; b) thinking sequences that lend themselves to group conversations; c) inquiry in which the whole class builds knowledge together and where students play a major role in the learning process. The inquiry model presented in this downl…
- This is a PowerPoint introducing child labour for an Industrial Revolution unit. It can also be used in conjunction with a novel study on 'In the Sea There Are Crocodiles'. It is designed to increase students' knowledge and encourage critical and ethical thinking.
- This is a photo essay assignment for middle schoolers about child labour (or labor in the U.S.). My grade 6 Core English class really enjoyed this assignment. I assigned this project while we were reading the novel Iqbal by Francesco D'Adamo, which deals with child labour issues. This file include…
- There are children across the world who want to go to school, but circumstances mean that they don't. The free download uses two poems and photographs to introduce the topic of modern-day child labour, provoke discussion, and instigate research. They can be used as an introduction to the questions rega…
- This bundle covers the ENTIRE British Columbia Social Studies Curriculum - Grade 6 Unit: Global Issues and Governance. It has 2 types of products: Reading Comprehension Passages and Activity Worksheets. There are 9 products included and over 150 pages total in this bundle. The following topics/elabo…
- This Canada's Interactions in the Global Community resource is a 191-page unit intended to support the revised 2018 Ontario Grade 6 Social Studies Curriculum. This unit supports an inquiry-based approach as students develop guiding questions and work in "Expert Groups" to investigate and develop the…
- FULL YEAR SPLIT GRADE BUNDLE - GRADES 5 & 6! This product contains 274 worksheets - your entire year planned! NO PREP - JUST PRINT! This bundle covers all expectations in the British Columbia Social Studies Grades 5 and 6 Curriculums. Students will practice literacy skills when demonstra…
- FULL YEAR SPLIT GRADE BUNDLE - GRADES 6 & 7! This product contains 234 worksheets - your entire year planned! NO PREP - JUST PRINT! This bundle covers all expectations in the British Columbia Social Studies Grades 6 and 7 Curriculums. Students will practice literacy skills when demonstra…
- MANITOBA SOCIAL STUDIES FULL UNIT! This product was created to cover the Manitoba Social Studies Grade 7 curriculum - People and Places in the World. The 4 clusters have been covered from the curriculum to save time for busy teachers! The worksheets are cross-curricular as students will work on read…
- FULL YEAR SPLIT GRADE BUNDLE - GRADES 7 & 8! This product contains 225 worksheets - your entire year planned! NO PREP - JUST PRINT! This bundle covers all expectations in the Manitoba Social Studies Grades 7 and 8 Curriculums. Students will practice literacy skills when demonstrating the…
A catastrophic explosion takes place during the final evolutionary stage of a massive star. This stellar explosion is known as a supernova and appears in the sky as a sudden illumination of a star with unusual brightness. Although the galaxy is made up of billions of stars, the occurrence of a supernova in the Milky Way is quite rare: astronomical data suggests the Milky Way experiences about three supernovas each century, yet only a handful have actually been observed in the galaxy in recorded history. The shockwaves from a supernova have the potential to trigger the formation of a new star. Supernovas expel almost all the material of a star at speeds as high as 10% of the speed of light, and some are so luminous that they outshine their home galaxies. However, only a few supernovas have been observed with the naked eye, the majority being observable only with powerful telescopes. The term is drawn from another astronomical event, the "nova," which is itself derived from the Latin word "nova," meaning "new," since it was once believed that a nova represented the birth of a new star. The prefix "super" signals the supernova's superiority in both luminosity and energy relative to a nova. The term was introduced in 1931 by two scientists, Walter Baade and Fritz Zwicky.
The SN 1604 Supernova
The SN 1604 supernova was a stellar explosion first witnessed between October 8th and 9th, 1604. The astronomical event, commonly known as Kepler's Supernova (named after the renowned 17th-century astronomer Johannes Kepler), occurred in the Ophiuchus constellation in the Milky Way. The supernova remained visible to the naked eye for 18 months, and records of its observation are found in Arabic, European, Chinese, and Korean sources. SN 1604 is also recognized as the most recent supernova known to have taken place in the Milky Way galaxy itself, with all subsequent observations being of supernovas in other galaxies.
The ASASSN-15lh Supernova
The most powerful supernova on record is the recently observed ASASSN-15lh. The stellar explosion was so powerful that its peak brightness was about 20 times that of all the stars in the Milky Way combined. The event's name is derived from the acronym of the telescope survey that discovered it in 2015, the All-Sky Automated Survey for Supernovae (ASAS-SN). Astronomers calculate that the supernova was about 3.82 billion light-years away and stated that if the explosion had occurred 10,000 light-years away, it would have shone with the brightness of the crescent moon.
The SN 2005ap Supernova
One of the most distant supernovas ever observed was SN 2005ap, detected through a telescope at the McDonald Observatory in Texas. The supernova occurred 4.7 billion light-years from Earth, a mind-boggling distance that made it invisible to the naked eye. It was observed in the direction of the Coma Berenices constellation. Its brightness was surpassed only by that of ASASSN-15lh; the peak absolute magnitude of SN 2005ap was calculated to be -22.7.
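To put the magnitude figure in context: the astronomical magnitude scale is logarithmic, with a difference of 5 magnitudes corresponding to a factor of exactly 100 in brightness. A short illustrative sketch (the -22.7 value is from the article; the Sun's absolute magnitude of roughly +4.83 is a standard reference value):

```python
def luminosity_ratio(m1, m2):
    """Brightness ratio implied by two magnitudes (lower magnitude = brighter).
    Each 5-magnitude step corresponds to a factor of 100."""
    return 10 ** (0.4 * (m2 - m1))

# How much brighter was SN 2005ap (peak absolute magnitude -22.7)
# than the Sun (absolute magnitude about +4.83)?
ratio = luminosity_ratio(-22.7, 4.83)  # roughly 10^11, i.e. ~100 billion Suns
```

The same formula is why a magnitude of -22.7 is extraordinary: every single magnitude step is a factor of about 2.512 in brightness.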
I have been working on a seminar presentation for my latest class. The topic is close reading. I have done a little bit of reading on the topic, and it just makes sense that teachers would use this idea for teaching children to read deeply and express their thoughts, both verbally and in writing, about the different types of texts they encounter as they move through each grade level. Although close reading has primarily been used in secondary education, it can be and has been adapted for elementary school classrooms. I will spend the rest of my time writing about what I have learned about close reading.
What is Close Reading?
According to Fisher & Frey (2012), close reading is an instructional routine in which students critically examine a text, especially through repeated readings. It is basically a way for readers to take a deeper look at what they are reading. The idea is that they discover a deeper meaning of the text with each repeated read. Close reading is not a new concept; it began in the mid-twentieth century. It is not a stand-alone routine; it is meant to be embedded in your literacy practices (e.g., interactive read-alouds and shared readings, teacher modeling and think-alouds, and guided reading with leveled texts).
What are the Features of a Close Reading?
To implement a close reading effectively, there are some key features you need to have: short passages, complex text, limited frontloading, repeated readings, text-dependent questions, and annotation. Each of these features is an important part of an effective close reading. Some modifications need to be made to make close reading effective for the elementary school setting, and I will discuss them as necessary. For close reading to be effective, students need a condensed version of a text to work with. This text can come from a longer piece of text or can be a stand-alone reading.
The idea is that they do a close reading of a text that is anywhere from one paragraph to no more than three or four (depending on the grade and reading levels of your students). Students should be able to spend time reading and rereading the text without stamina being a problem. The amount of text is also determined by the age and grade level of your students. The texts chosen for close reading tend to be more difficult for students, at the instructional level of most of the class. An adaptation for elementary classrooms would be for the text to be at the independent level of most students, and for that text to first be read as a shared reading by the teacher and the students, to help with the complexity of the text for students who might have a lower reading level. The same texts teachers might choose as good read-alouds, the ones with rich vocabulary, a true story structure, and a complex plot, as well as informational texts, are good for close reading. Frontloading is, basically, setting up the text. Teachers, especially elementary teachers, do this before reading to get students ready to read the text they are given. With close reading, secondary teachers just let students read through the text and then have an initial discussion, which leads to the setup for the second and third readings of the text. For elementary students, there can be some limited frontloading. Frontloading may be necessary when students need to know the meaning of words and phrases to understand and follow the flow of the reading. Multiple-meaning words may confuse students if they know one meaning of a word but not the one used in the text they are reading. Burkins & Yaris (2016) say that repeated readings are rereading for the purpose of recognizing details and nuances of a text that may go unnoticed during a cursory first read, so that new understandings and insights may reveal themselves.
This is more than just rereading for the sake of increasing stamina and building fluency, although that is also accomplished here. With close reading, each repeated reading has a specific purpose, and children are reading again to look for something specific they may not have seen in the first and subsequent readings of the text.
Text Dependent Questions
To address text-dependent questions, we use the QAR (Question-Answer Relationship) strategy (Raphael & Au, 2005). This strategy uses four different types of questions to direct student thinking, starting with literal "Right There" questions and moving through "Think and Search" and "Author and Me" to "On My Own" questions.
Annotation
Secondary and college students have been taught to "take notes" while reading. They may circle, underline, or highlight words and/or phrases in the text that stand out to them as important. They also make notes in the margins or with graphic organizers. Elementary-aged students can be taught to annotate text as early as kindergarten. It would start as a shared or interactive experience. One suggestion was to use Wikki Stix to underline key ideas in shared texts read from big books. As students get older, they learn to do that themselves in their own texts and then move to using pencils, colored pencils, highlighters, or crayons to underline text. Eventually students will be underlining and circling key ideas that have been modeled for them, with a gradual release of responsibility happening by 4th and 5th grade.
What is the close reading process?
Close reading is repeated reading, with each repeated reading getting more specific in the types of questions being asked. The repeated readings can happen over a few days or at separate times in one day (if the passage lends itself to that). The initial reading for elementary students can be a shared reading with a basic introduction of the text, remembering not to do too much setting up of the text; at the secondary level, students begin the initial reading with minimal frontloading by the teacher.
The next reading is done after a specific purpose for reading, in the form of a question, is posed. Finally, another setup with a specific question begins a final reading of the text. As students read, they make notes, either on a separate document or directly on the text, to help them focus on the purpose for that reading. Close reading helps give students the skills necessary to synthesize new information with information already in their schema. It helps build the habits good readers need to engage with a complex piece of text. The key features of a close reading lend themselves to a deeper understanding of text. Students learn to read for specific purposes while practicing the reading skills all good readers have. Close reading is not a standalone reading practice; it goes along with several instructional practices. Starting in kindergarten, students can begin practicing reading for specific purposes and begin to understand that some texts need to be read and reread to be fully comprehended. Thank you for reading...
Fisher, D., & Frey, N. (2012). Close reading in elementary schools. The Reading Teacher, 66, 179-188. DOI: 10.1002/TRTR.01117
Raphael, T.E., & Au, K.H. (2005). QAR: Enhancing comprehension and test taking across grades and content areas. The Reading Teacher, 59, 206-221.
Pearl Garden is a doctoral candidate at Texas A&M-Commerce. Follow along as she drops "pearls" of literacy and chronicles her pursuit of her Ed.D. in Supervision, Curriculum, and Instruction - Elementary Education. Just know that these are the ramblings of a doc student, and a lot of what you read is a first draft that will go through some rewrites.
Soil – The Root of All Nutrition
When we think of soil, we picture it as a large sponge soaking up water and nutrients for the plants that grow in it. Soil is a defining factor in what types of crops grow, how well they thrive, and how bountiful the produce of a region can be. Collectively, this can be called 'soil health.' Let's spend this World Soil Day learning about this natural reservoir of nutrition.
What is it made of?
Just as buildings are made up of bricks and concrete, soil can be made up of gravel, sand, silt, and clay. These are found in varying proportions around the world to make up all the different types of soil we see. They're held together by old roots, bacteria, fungi, and other biological matter to form a solid, aggregate structure. Good soil structure can retain air pockets, nutrients, organic carbon, and water – all of which are essential for growing plants.
What affects soil health?
Several unhealthy farming practices, like excessive tilling and over-irrigation, destroy this soil structure and reduce the yield of farmland. Soil erosion is the primary effect of tilling: nutrient-rich topsoil is blown away by wind, leaving the soil barren and infertile. Tilled soil also loses the ability to retain water effectively, and the life cycles of useful soil bacteria are disrupted. Apart from this, much of the useful soil carbon is lost to the atmosphere, where it acts as a pollutant. Over-irrigation can clog up the pores in the soil and create a hazardous environment for plants. Excess water can also wash away essential nutrients and leave little for the plants themselves.
How can we improve soil health?
As an alternative, there are several sustainable practices, such as conservation tilling, growing cover crops, and more. Drip irrigation is one such practice, in which measured quantities of water and fertilizer are directed right to the roots of the plant, minimizing the effects of over-irrigation.
At Monsanto, we work with farmers to increase adoption of such techniques. We promote sustainable practices by spreading awareness among farmers and helping them procure the equipment they would need to employ these techniques. Learn more about conservation tilling here:
Practices such as planting cover crops between primary crop cycles prevent soil erosion and keep the fertile topsoil in place. Highly productive crops absorb more carbon and convert it into more plant matter – keeping the soil healthy and the atmosphere clean.
Why should we do it?
Along with the growing world population comes a growing need to meet the nutritional demands of billions in a sustainable manner. Our CEO, Hugh Grant, explains the need for sustainable agriculture by carving an apple: if that tiny sliver of apple skin is all we have, we must make the most of it. Let's employ more sustainable agricultural practices and carbon-neutral crop production to increase the yield of our farms and ensure everyone on the planet has access to a healthy, fulfilling meal. If you have any questions about Monsanto and what we do, join the conversation here:
A pandemic is a disease prevalent over a whole country or the world. Humanity has suffered such pandemics from 165 AD, with the Antonine Plague, which took 5 million lives, to the current pandemic, Covid-19, which has already crossed 40 thousand fatalities. The biggest pandemics in history struck Europe, which can be related to poor hygiene and sanitation. Here, we will talk about the 5 biggest pandemics in the history of the world.
1. Black Death / Bubonic Plague [1347-1351 – 200 Million]
This was the most devastating pandemic recorded in human history. It struck Europe and Asia in the mid-1300s. This outbreak wiped out 30-50% of Europe's population, which took more than 200 years to recover. It originated in rats and spread to humans via infected fleas.
2. Smallpox [1520 – 56 Million]
Smallpox was brought to the Americas by European invaders. An estimated 90% of Native Americans were killed in this pandemic. During the 18th century, an estimated 400,000 people died of smallpox each year in Europe alone. The first ever vaccine was created for smallpox, and the disease is considered eradicated today after successful vaccination campaigns in the 19th and 20th centuries.
3. Spanish Flu [1918 – 40-50 Million]
This was a deadly influenza pandemic which occurred in 1918, infecting 500 million people, about a quarter of the world's population at the time. It affected young adults disproportionately and was caused by the H1N1 influenza virus. Among the worst-affected countries were Germany, the UK, France, and the United States.
4. Plague of Justinian [541-542 AD – 30-50 Million]
This plague affected the Eastern Roman Empire as well as the Sasanian Empire and port cities around the entire Mediterranean Sea. It spread from merchant ships harbouring rats that carried infected fleas. Many believe it may have helped hasten the fall of the Roman Empire. The death toll is still under debate.
5. HIV/AIDS [1981-present – 25-35 Million]
HIV originated in west-central Africa during the late 19th or early 20th century.
To date there is no cure or vaccine for HIV, and the death toll continues to increase worldwide. AIDS interferes with a person's immune system, increasing the risk of developing common infections like TB and tumours, often accompanied by unintended weight loss. Over 90% of the pandemic viruses of the last 100 years originated in Africa or China, but none in India.
Did you know that almost half of all children develop a cavity between the ages of 2 and 11? Cavities occur as a result of tooth decay, which is damage caused by bacteria eating away at the teeth. Your child doesn't have to be one of those children who develops a cavity. There are steps you can take to ensure that your child's teeth stay strong and protected against harmful bacteria with a preventative procedure called dental sealants. Here's how to prevent tooth decay with dental sealants.
What Are Dental Sealants?
Dental sealants are protective layers that are custom-fit to cover the chewing surfaces of the teeth. When your child's permanent teeth grow in, these teeth have pits and fissures that make it easy for food to get stuck. If food gets into these pits, the bacteria that cause tooth decay will feed on it. How can you prevent cavities in your child's new permanent teeth? Talk to your child's dentist about dental sealants. These sealants are plastic-like coverings that are fitted over the chewing surfaces of your child's teeth. They fill in all the tiny spaces and crevices so that no food can get caught, and they're impervious to bacteria, saliva, and food particles. Along with regular dental cleanings, dental sealants are the best answer to how to prevent tooth decay.
When to Consider Dental Sealants?
Your child's eligibility for dental sealants depends on when their adult teeth erupt. Usually, children get their first set of adult molars at around age 6, and they get another set around age 12. Due to the nature of dental sealants, they work best when applied to teeth with deep grooves and large chewing surfaces, such as molars. To provide the maximum level of protection against tooth decay, dental sealants should be applied as soon as possible after the molars erupt. Dental sealants provide protection as soon as they're applied, so it behooves you to make your child's appointment early.
In order to protect both sets of molars, you'll need to make two appointments: one after they get their first set at age 6, and another when they get their second set around age 12. Your child may be very thorough when it comes to brushing and flossing their teeth, but they may still not be properly cleaning all of the fissures in their new teeth, which puts them at risk for tooth decay. Don't take a chance on your child developing a cavity. Your child only gets one set of permanent teeth to last their whole life, so you should take every precaution to protect them.
The Application Process of Dental Sealants:
Dental sealants can normally be applied in just one visit to the dentist, and they don't require any drilling or anesthetic. For children who are particularly anxious or have trouble sitting still, sedation dentistry is an option that you can discuss. However, most children have no trouble with the procedure since it is entirely painless. In order to apply the sealants, your child's dentist will:
- Clean and polish your child's teeth to remove any debris or plaque.
- Isolate and dry the teeth that are receiving dental sealants.
- Roughen the surface of the teeth using an acid etch.
- Apply bonding material to each tooth to ensure that the sealant adheres correctly.
- Apply the dental sealant to each tooth.
- Use a special light to cure the sealant and bond it to the tooth.
After all of the dental sealants have been placed and cured, your dentist will check each tooth to ensure the sealants were applied properly. The dentist can tell your child how to prevent tooth decay with their new sealants. While the dental sealants will protect against bacteria for a long time, they aren't a permanent solution. After about 10 years, the sealants will wear away on their own. By then, your child's oral health habits should be adequate to prevent tooth decay without the help of sealants.
There are virtually no risks associated with dental sealants aside from the possibility of a slightly unpleasant taste right after the sealants are applied. Your child can eat and drink normally right after the procedure. If your child loses or damages a dental sealant, be sure to make an appointment with their dentist soon. The dentist should be able to replace any broken or lost sealant with a new one.
Other Tips for Preventing Tooth Decay:
In addition to having dental sealants applied to your child's teeth, there are other steps you can take to keep your child's teeth free from tooth decay. These include:
- Brushing twice a day with fluoride toothpaste
- Flossing between the teeth daily
- Eating a nutritious, balanced diet
- Staying away from sticky or starchy snacks
- Avoiding sugary fruit juices and soda
- Getting supplemental fluoride treatments from your child's dentist
- Scheduling regular dental appointments to get your child's teeth cleaned
When paired with dental sealants, these tips can help keep your child's teeth cavity-free for years to come. Make sure to encourage good brushing and flossing habits. The oral health habits that are developed in childhood will carry over into adulthood, so set your child up for success. Once you learn how to stop tooth decay with dental sealants, the procedure seems like a simple choice. The downsides are minimal, and the benefits are healthy teeth that will last a lifetime. So much of parenting is spent preparing your child for the rest of their lives, and their oral care is no different. Dental sealants will help ensure that their adult teeth stay healthy until your child is old enough to care for them on their own.
Venn diagrams have made the leap out of math and logic classes and into the world of memes. Venn diagrams often include a series of circles to show where elements overlap or don't. If you want an easy explanation of Venn diagrams and sets, take a look at this page. The popular Venn diagram on concerns about COVID-19: Lately a Venn diagram has been making the rounds that shows three equally sized circles that intersect. One circle is about taking COVID-19 seriously, one about economic devastation, and the third about expansion of "authoritarian" government policies. Here's a look at it: Feelings: First, all feelings about a situation are valid. People feel a wide range of emotions in a pandemic and their concerns land in different places. The diagram is a clear and compelling appeal to feelings and is not an assessment of the scale of the issues or even a diagram that shows the cause-and-effect relationship among the three circles. How do we know the diagram is about feelings and not an analysis? It's all in the language. With a header that affirms "It's OK," we are tipped off that it is we who are being validated, which can give the illusion that the merits of our beliefs are also being validated. In addition, each circle uses feeling language such as "taking seriously," "very concerned," and "worried about." There's nothing wrong with that per se. In fact, it's always important to attend to people's feelings, especially during a pandemic. It's important for mental health reasons and it's important for policymakers and advocates to understand people's feelings in order to persuade the public. But we should ask critical questions so that we don't allow others to manipulate our feelings to the point of distorting the reality of the situation and the need for specific policy interventions. Study Questions: Here are a few study questions to consider that might help you explore whether this diagram is a manipulation or whether it is accurate in important ways. 1. 
What is implied when the circles are equal in size? Does it mean all three concerns are equal in their harm? Does it mean all three threats are equally likely to take place? Based on your information, is there one circle or are there two circles that are more of a real threat? If so, which one or ones? How big would you make the different circles if you drew your own diagram? 2. Who or what sectors of our society benefit if we treat the three circles as equal? Who benefits if you resized the circles based on what you think the greatest threat is? 3. If the circles are all the same size, what effect does it have on people who wish to take action? Does it stall or spur action? If the circles are resized with the greatest concern represented in the largest circle, what effect would that have as a call to action? 4. By showing how the three concerns overlap, does the diagram hide ways that one circle causes another? In other words, is COVID-19 a concern in its own right AND a cause of the other two circles? Or are the three circles simply three different sets of concerns? 5. Who is the target of the lower right circle discussing "authoritarian" policies? This is an important point. In public debates, some are calling governors and mayors who issue safer-at-home orders authoritarian, while others view the President's seizure of PPE as authoritarian. Many view the extension of the Patriot Act during the pandemic as authoritarian. In Tennessee, questions have arisen about sharing health information with law enforcement. So it's important to be clear when talking about authoritarian policies whose policies you mean. Memes are here to stay. So are our feelings. They will be part of our public debates and the way we come to terms with all the challenges we face. But it is wise to raise questions about both when we're making decisions. What questions would you add to improve your understanding of the diagram and where it leads people?
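For readers who want a refresher on the set logic behind a three-circle diagram, here is a minimal Python sketch. The respondents and their groupings are entirely made up for illustration:

```python
# Hypothetical survey respondents sorted into the diagram's three circles.
serious = {"Ana", "Ben", "Caro", "Dev"}   # takes COVID-19 seriously
economy = {"Ben", "Caro", "Eli", "Gus"}   # very concerned about economic devastation
liberty = {"Caro", "Dev", "Eli", "Fay"}   # worried about authoritarian policies

# The center of the diagram: people who sit in all three circles at once.
all_three = serious & economy & liberty

# A region belonging to exactly one circle, e.g. the economy circle alone.
only_economy = economy - serious - liberty
```

The point the study questions make is that the diagram's geometry (equal circles, symmetric overlaps) is a rhetorical choice; the set operations themselves say nothing about how large each region really is.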
The set of 32 permanent teeth (also known as adult teeth or secondary teeth) doesn't begin to develop until after the baby is born. In some cases, the last set of molars, the wisdom teeth, never develops, so a set of 28 permanent teeth is considered normal too.

Losing the baby teeth
After your child turns six, they will start to get regular visits from the tooth fairy! The baby teeth will begin to get wobbly and then fall out (usually in the same order in which they first appeared) as each root is resorbed and the permanent tooth moves into position to erupt through the gum.

Eruption of permanent molars
Just as your child's baby teeth begin to fall out, the first permanent teeth erupt. These four molars (not surprisingly, often called the six-year-old molars), two in the upper jaw and two in the lower jaw, erupt through the gum at the back of the mouth behind the existing baby teeth. Like baby teeth, the timing of the appearance of permanent teeth is different for every child and cannot be hurried. Generally, though, the order of eruption and rough timeline for each type of permanent tooth is:
- First molars – between six and seven years
- Central incisors – between six and eight years
- Lateral incisors – between seven and eight years
- Canine teeth – between nine and 13 years
- Premolars – between nine and 13 years
- Second molars – between 11 and 13 years
- Third molars (wisdom teeth) – between 17 and 21 years
This article was written by Ella Walsh for Kidspot New Zealand. Sources include the Vic. Govt's Better Health Channel.
Drought conditions have improved markedly across much of the western United States over the past several months, as a succession of storms has dumped heavy rain and mountain snow. In fact, the coverage of drought conditions in California is at its lowest point since 2011. However, it doesn't take very long for drought conditions to return in the West, or indeed in any part of the country. That's why a new tool to monitor droughts could prove invaluable going into the warm season. This new method, developed at Duke University, uses satellite remote sensing to alert scientists and land management officials to potential drought conditions in near real time, compared to the month or longer such monitoring typically takes today. The satellite method involves measuring the surface temperature of the plant and tree canopy at a particular location and comparing it to the air temperature. This difference is called the thermal stress. When trees and other plants have adequate moisture, the small pores in their leaves, called stomata, remain open. Water evaporating into the air from these pores helps cool areas near the canopy. However, in abnormally dry or drought conditions, the plants close their stomata to conserve moisture, and the plant canopy heats up. Thus, when a region shows an increase in thermal stress, abnormally dry or drought conditions may follow. In evaluating this thermal stress metric against other drought predictors over the past 15 years, the team from Duke University found that thermal stress was the most accurate predictor of drought conditions in that historical data. One caveat with the thermal stress method, though, is that it is less accurate in winter, especially in snow-covered regions. The current national thermal stress map can be found at the Drought Eye website.
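The thermal-stress idea described above (canopy temperature minus air temperature) is simple enough to sketch in code. The following Python example is a hypothetical illustration of the concept only; the function names, data, and the 2.0 degree threshold are assumptions for demonstration, not values from the Duke study.

```python
# Sketch of the thermal-stress concept: canopy temperature minus air
# temperature. A persistently positive difference suggests stomata
# have closed and the canopy is heating up.

def thermal_stress(canopy_temp_c, air_temp_c):
    """Return the canopy-minus-air temperature difference in deg C."""
    return canopy_temp_c - air_temp_c

def flag_drought_risk(readings, threshold_c=2.0):
    """Flag locations whose mean thermal stress exceeds a threshold.

    `readings` maps a location name to a list of (canopy_temp, air_temp)
    pairs in deg C. The 2.0 deg C default is an illustrative assumption.
    """
    flagged = []
    for location, pairs in readings.items():
        stresses = [thermal_stress(c, a) for c, a in pairs]
        if sum(stresses) / len(stresses) > threshold_c:
            flagged.append(location)
    return flagged

readings = {
    "well_watered_site": [(24.0, 25.0), (23.5, 25.0)],  # canopy cooler
    "dry_site": [(31.0, 26.0), (32.0, 27.0)],           # canopy hotter
}
print(flag_drought_risk(readings))  # ['dry_site']
```

The real method works from satellite radiometry rather than point readings, but the core comparison is the same: a canopy running hotter than the surrounding air is a plant under water stress.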
What Is Atherosclerosis? Atherosclerosis is a hardening and narrowing of the arteries caused by a buildup of plaque. You might hear it called arteriosclerosis or atherosclerotic cardiovascular disease. It's the usual cause of heart attacks, strokes, and peripheral vascular disease -- what together are called cardiovascular disease. You can prevent and treat this process.

Atherosclerosis Signs and Symptoms
You might not have symptoms until your artery is nearly closed or until you have a heart attack or stroke. Signs can also depend on which artery is narrowed or blocked.

Symptoms related to your coronary arteries include:
- Arrhythmia, an unusual heartbeat
- Pain or pressure in your upper body, including your chest, arms, neck, or jaw. This is known as angina.
- Shortness of breath

Symptoms related to the arteries that supply blood to your brain include:
- Numbness or weakness in your arms or legs
- A hard time speaking or understanding someone who's talking
- Drooping facial muscles
- Severe headache
- Trouble seeing in one or both eyes

Symptoms related to the arteries of your arms, legs, and pelvis include:
- Leg pain when walking

Symptoms related to the arteries that deliver blood to your kidneys include:
- High blood pressure
- Kidney failure

Your doctor will start with a physical exam. They'll listen to your arteries and check for weak or absent pulses.
You might need tests, including: - Angiogram, in which your doctor puts dye into your arteries so they’ll be visible on an X-ray - Ankle-brachial index, a test to compare blood pressures in your lower leg and arm - Blood tests to look for things that raise your risk of having atherosclerosis, like high cholesterol or blood sugar - Chest X-ray to check for signs of heart failure - CT scan or magnetic resonance angiography (MRA) to look for hardened or narrowed arteries - EKG, a record of your heart’s electrical activity - Stress test, in which you exercise while health care professionals watch your heart rate, blood pressure, and breathing You might also need to see doctors who specialize in certain parts of your body, like cardiologists or vascular specialists, depending on your condition. Arteries are blood vessels that carry blood from your heart throughout your body. They're lined by a thin layer of cells called the endothelium. It keeps the inside of your arteries in shape and smooth, which keeps blood flowing. Atherosclerosis begins with damage to the endothelium. Common causes include: - High cholesterol - High blood pressure - Inflammation, like from arthritis or lupus - Obesity or diabetes That damage causes plaque to build up along the walls of your arteries. When bad cholesterol, or LDL, crosses a damaged endothelium, it enters the wall of your artery. Your white blood cells stream in to digest the LDL. Over the years, cholesterol and cells become plaque in the wall of your artery. Plaque creates a bump on your artery wall. As atherosclerosis gets worse, that bump gets bigger. When it gets big enough, it can create a blockage. That process goes on throughout your entire body. It’s not only your heart at risk. You’re also at risk for stroke and other health problems. Atherosclerosis usually doesn’t cause symptoms until you’re middle-age or older. As the narrowing becomes severe, it can choke off blood flow and cause pain. 
Blockages can also rupture suddenly. That causes blood to clot inside an artery at the site of the rupture. Atherosclerosis Risk Factors Atherosclerosis starts when you’re young. Research has found that even teenagers can have signs. If you’re 40 and generally healthy, you have about a 50% chance of getting serious atherosclerosis in your lifetime. The risk goes up as you get older. Most adults older than 60 have some atherosclerosis, but most don’t have noticeable symptoms. These risk factors are behind more than 90% of all heart attacks: - Abdominal obesity ("spare tire") - High alcohol intake (more than one drink for women, one or two drinks for men, per day) - High blood pressure - High cholesterol - Not eating fruits and vegetables - Not exercising regularly Rates of death from atherosclerosis have fallen 25% in the past 3 decades. This is because of better lifestyles and improved treatments. Plaques from atherosclerosis can behave in different ways. They can stay in your artery wall. There, the plaque grows to a certain size and then stops. Since this plaque doesn't block blood flow, it may never cause symptoms. Plaque can grow in a slow, controlled way into the path of blood flow. Over time, it causes significant blockages. Pain in your chest or legs when you exert yourself is the usual symptom. The worst happens when plaques suddenly rupture, allowing blood to clot inside an artery. In your brain, this causes a stroke; in your heart, a heart attack. The plaques of atherosclerosis cause the three main kinds of cardiovascular disease: - Coronary artery disease: Stable plaques in your heart's arteries cause angina (chest pain). Sudden plaque rupture and clotting cause heart muscle to die. This is a heart attack. - Cerebrovascular disease: Ruptured plaques in your brain's arteries cause strokes with the potential for permanent brain damage. 
Temporary blockages in an artery can also cause transient ischemic attacks (TIAs), which are warning signs of a stroke; they don't cause lasting brain injury.
- Peripheral artery disease: When the arteries in your legs narrow, it can lead to poor circulation. This makes it painful for you to walk. Wounds also won't heal as well. If you have a severe form of the disease, you might need to have a limb removed (amputation).
Complications of atherosclerosis include heart attack, stroke, and peripheral artery disease, the conditions described above.

Once you have a blockage, it's generally there to stay. But with medication and lifestyle changes, you can slow or stop the growth of plaques. They may even shrink slightly with aggressive treatment.

Lifestyle changes: You can slow or stop atherosclerosis by taking care of the risk factors. That means a healthy diet, exercise, and no smoking. These changes won't remove blockages, but they're proven to lower the risk of heart attacks and strokes.

Medication: Drugs for high cholesterol and high blood pressure will slow and may even halt atherosclerosis. They can also lower your risk of heart attacks and strokes.

Your doctor can use more invasive techniques to open blockages from atherosclerosis or go around them:
- Angiography and stenting: Your doctor puts a thin tube into an artery in your leg or arm to reach diseased arteries. Blockages are visible on a live X-ray screen. Angioplasty (using a catheter with a balloon tip) and stenting can often open a blocked artery. Stenting helps ease symptoms, but it does not prevent heart attacks.
- Bypass surgery: Your doctor takes a healthy blood vessel, often from your leg or chest, and uses it to go around a blocked segment.
- Endarterectomy: Your doctor goes into the arteries in your neck to remove plaque and restore blood flow.
- Fibrinolytic therapy: A drug dissolves a blood clot that's blocking your artery.
These procedures can have complications. They're usually done on people with major symptoms or limitations.
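As a concrete illustration of one of the diagnostic tests mentioned earlier, the ankle-brachial index is simply a ratio of systolic blood pressures. The Python sketch below uses commonly cited clinical cutoffs; treat both the cutoffs and the wording as illustrative assumptions, not medical guidance.

```python
def ankle_brachial_index(ankle_systolic, arm_systolic):
    """ABI = ankle systolic pressure divided by arm systolic pressure."""
    return ankle_systolic / arm_systolic

def interpret_abi(abi):
    # Commonly cited cutoffs (illustrative, not a diagnosis): below 0.9
    # suggests narrowed leg arteries; above 1.4 suggests stiff,
    # calcified vessels that resist compression.
    if abi < 0.9:
        return "possible peripheral artery disease"
    if abi > 1.4:
        return "possibly stiff, calcified arteries"
    return "within the usual range"

abi = ankle_brachial_index(110, 125)  # pressures in mmHg
print(round(abi, 2), interpret_abi(abi))  # 0.88 possible peripheral artery disease
```

The point of the test is exactly this comparison: narrowed arteries in the legs lower the ankle pressure relative to the arm.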
1. A Saxon was a member of a Germanic tribe living on the North Sea coastline of Germany which in the 5th and 6th centuries AD migrated to what later became southern England. They are often also known as 'Anglo-Saxons', as the migration involved three peoples: the Angles, who originated in Schleswig-Holstein in Germany/Denmark, and who settled in what is now northern England and southern Scotland; the Saxons; and the Jutes, from Jutland in modern Denmark, who settled on the south coast of Britain, principally on the Isle of Wight. 2. A Saxon is a modern-day German from the Saxony region (Sachsen).
WHERE CAN MOLDS BE FOUND INDOORS? Molds can grow on wood, on insulation, in carpet, and even behind walls, where they can continue to grow undetected. When excessive moisture accumulates in the home, mold growth will often occur. This moisture build-up can stem from plumbing leaks, from condensation in air conditioning and heating systems, or from ground water penetration. If drywall becomes damp or wet and is not dried out within two days, mold may be growing within the walls, even if it is not visible. Also, when investigating hidden mold problems, disturbing potential sites of mold growth can create even more issues! For example, if mold is growing behind wallpaper or drywall, removing the wall covering or drilling into the walls to look for visible mold could lead to a massive release of spores. With the high sensitivity of the Home Air Check™ test, mold hidden and growing behind walls, ceilings, and flooring can be detected throughout the house without disturbing or spreading any mold growth.

WHAT ARE THE HEALTH EFFECTS OF MOLDS ON PEOPLE? When mold is present in large quantities, it can become a health hazard, potentially causing allergic reactions and respiratory problems in people who have sensitivities. Molds produce allergens that cause hay fever-type symptoms such as a runny nose, itchy eyes, sneezing, and skin rashes. These allergic reactions can happen immediately upon exposure or they can be delayed. More severe reactions, which may include fever and shortness of breath, can occur in people who have mold allergies. In addition, molds can trigger asthma attacks in people who have asthma and are allergic to mold. Some people with chronic lung illnesses can develop lung infections with prolonged exposure to mold in the home.

HOW DOES INSIGHT TEST FOR MOLD? The mold testing method most widely performed today generally measures only mold spores, not the chemicals they release into the air.
Since mold can grow behind walls (due to small plumbing leaks or condensation build-up), it can be difficult to tell from a visual inspection whether there is a problem. Insight uses Home Air Check, which monitors for Mold VOCs that are present only when mold is actively growing, not for individual mold spores. Mold VOCs cannot be detected or measured by traditional mold spore traps. Insight also now performs tests for mold spores (identifying their types and concentrations):
Spore traps - This is the most common method of trapping spores (hence the name). A controlled amount of air passes over a sticky surface; any spores in the air are caught on the adhesive plate and identified in later analysis.
Tape lifts - Used if you can actually see the mold. Tape is placed onto the mold, then removed and placed onto a microscope slide.
Bulk sampling - Again, this is used if you can see the mold. We take a piece of it and send it in for analysis.
Swab sample - A cotton swab is dragged across the mold/sample area.
Unfortunately, there are no government or EPA regulations, standards, or threshold limits for airborne concentrations of mold. Mold VOCs detected above 30 nanograms per liter (ng/L - air is measured in liters) indicate significant active mold growth, and most occupants of the home will be affected, particularly those with mold allergies or respiratory illnesses like asthma or COPD. Therefore, it is imperative that a homeowner test for the presence of any visible or hidden mold in their home -- so let us help you!
Molds are microscopic fungi that can be found almost anywhere - both indoors and outdoors! Mold growth occurs mainly in warm, damp, and humid conditions. Molds reproduce by making spores that are released into the air and transported to other places, where they can germinate and grow.
When mold is in an active growth phase, it releases gases into the air called Mold Volatile Organic Compounds (MVOCs or Mold VOCs). Some MVOCs you can smell; however, not all of these gases can be detected by smell.
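As a rough sketch of how the 30 ng/L action level mentioned earlier could be applied to a reading, here is a short Python example. The function name and the category wording are illustrative assumptions; only the 30 ng/L figure comes from the text above.

```python
def classify_mold_voc_reading(ng_per_liter):
    """Compare a Mold VOC reading (in ng/L) against the 30 ng/L level
    the article associates with significant active mold growth."""
    if ng_per_liter > 30:
        return "significant active mold growth indicated"
    return "below the significant-growth level"

# Two hypothetical readings, one on each side of the action level.
for reading in (8.5, 42.0):
    print(reading, "ng/L:", classify_mold_voc_reading(reading))
```

Since there are no official regulatory thresholds, a real report would pair a number like this with context about the home and its occupants.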
Squint is a misalignment of the eyes in which the two eyes point in different directions. The misalignment may not be constant and can be intermittent for some. If not diagnosed and treated at an appropriate time, a condition called amblyopia (lazy eye) develops, which can lead to permanent loss of vision. Once the cause of the squint is known, a treatment plan can be suggested accordingly. If there is a refractive error, prescribing glasses for the child helps reduce the deviation of the eyes and automatically corrects the squint. In cases of amblyopia, treating the child with occlusion (patching) therapy may be beneficial. If there is a weakness of the muscles, surgical treatment may be needed to correct the deviation. If the squint is treated as early as possible (before the age of 2 years), loss of vision can be prevented in children.
Seizures are the result of abnormal electrical activity in the brain that may sometimes go unnoticed. In some cases, the condition can also lead to unconsciousness and convulsions. Epilepsy is a group of related brain disorders characterized by a tendency toward recurrent seizures. A seizure can come on suddenly, and how long it lasts depends largely on its severity. Seizures can occur just once or recurrently; if they keep coming back, that is a sign of epilepsy.
What causes Seizures? The cause of seizures is often unknown, but they can occur in the following cases:
- Brain tumors
- Head injuries
- Low blood sugar levels
Types of Seizures
Seizures can be classified into generalized seizures and partial seizures. Generalized seizures are the most common type and involve stiffening of the limbs followed by jerking of the limbs and face. Partial seizures begin in specific areas of the brain and can spread to the rest of the brain.
What causes Epilepsy? Epilepsy occurs as a result of abnormal electrical activity originating in the brain, for example after:
- Brain infection
Types of Epilepsy
All types of epilepsy have seizures as a symptom, which come as surges of electrical activity in the patient's brain. Generalized epilepsy is experienced on both sides of the brain, while in focal epilepsy seizures develop in a particular area on one side of the brain.
Treating Seizures and Epilepsy
Most epileptic seizures are contained and controlled through medication along with dietary restrictions. For cases of seizures and epilepsy that don't subside with changes in diet and medication, surgery is often advised. The type of treatment prescribed depends on several factors: severity, age, overall health, and medical history. We help diagnose conditions of seizures and epilepsy right from the first signs and symptoms. To understand more about your condition, visit our practice for further assistance.
Call us today to request an appointment at (669) 235-4188.
Francis Darwin was a keen botanist who explored phototropism - how plants grow towards sunlight - with his father Charles Darwin. Their work led to the discovery of the important plant hormone auxin. Francis Darwin was the seventh child born to Charles Darwin and Emma Darwin (née Wedgwood) in 1848. Darwin inherited his father's passion for biology, and was described by his sister Henrietta Litchfield as "the only one of my father's children to have a strong taste for natural history". After a grammar school education in Clapham he was accepted into Trinity College, Cambridge University, where he initially studied mathematics. However, Darwin soon changed his specialisation and graduated with a degree in Natural Sciences in 1870. During his time at Cambridge Darwin is said to have dissected a dead porpoise, sent to him by his brother, in his college bedroom. After his undergraduate degree, Darwin studied medicine at St George's Medical School in London, and although he initially hoped to become a practising physician, "happily for me the Fates willed otherwise". Upon being awarded a Bachelor of Medicine (MB) in 1875, Darwin moved back to his family home at Down House, where his father took him on as secretary and assistant. At Down House, Francis worked alongside his father to investigate plant physiology, in particular the factors that dictate the direction of plant growth. They used simple experiments to investigate touch and light sensitivity in grass seedlings. Their key finding was that light was detected by the tip of the plant, which causes bending in the seedling's hypocotyl, or stem. From this work Francis and Charles concluded that light sensed at the shoot tip causes a 'messenger' chemical to be transmitted down the stem to cause a response: these early studies formed the basis of research that led to the discovery of the growth hormone auxin.
Charles Darwin published "The Power of Movement in Plants" in 1880, a landmark book for botanical studies in which the inscription reads "by Charles Darwin assisted by Francis Darwin". Francis Darwin also worked in the laboratory of the eminent plant physiologist Julius Sachs, in Würzburg from 1878 to 1879. These years contributed to his botanical studies with his father and, after Charles' death in 1882, to his independent career. Elected a Fellow of the Royal Society in 1882, Darwin worked at Cambridge as a Lecturer of Botany and was later to become Reader. His research into tropism in plants, and plant water loss, earned him prestige amongst fellow botanists, and, among other honours, Francis Darwin was awarded the Royal Society's Darwin Medal in 1912. He was knighted a year later. A kindly man known for his straightforward nature and love of dogs, Darwin suffered great personal tragedy. He married his first wife, Amy Ruck, in 1874; she died in 1876 giving birth to their only son, Bernard Darwin. He remarried in 1883 to Ellen Wordsworth Crofts, who died in 1903 leaving one daughter, Frances, with whom he was later buried. In 1913 he married for a third time to Lady Florence Henrietta Darwin. As well as his remarkable scientific accomplishments, Francis Darwin is known for his contribution to literature. His famous work "Life and Letters of Charles Darwin" gave a poignant insight into the life of one of biology's greatest leaders. Francis died in 1925 and was buried near Cambridge with his daughter Frances Cornford.
Cassini has discovered flooded canyons on Saturn's moon Titan that are filled with liquid hydrocarbons. The spacecraft has been orbiting Saturn since it arrived at the planet in 2004, and its mission will end in 2017. The probe has already sent back enough data to Earth to keep scientists busy for years. The new research focuses on Titan's canyons, which are flooded with liquid hydrocarbons, and draws comparisons with what happens at Lake Powell and Arizona's Grand Canyon. The data was collected during a 2013 flyby of the moon and covers the channels that spread out from the large Ligeia Mare. The lake lies in the north polar region and contains liquid methane. The channels are narrow, with widths of less than half a mile and slopes of 40 degrees, and they reach up to 1,870 feet in depth. In the images, the channels appear dark, which led scientists to believe they are also filled with liquid, but no one knew whether the material was saturated sediment made of ice or an actual fluid. During this particular flyby, Cassini used its radar as an altimeter, measuring surface heights on Titan. The researchers combined these measurements with radar images and discovered that the channels contain fluids. The scientists think the deep cuts were created either by a long, gradual process or by a violent one, and proposed several scenarios for their formation. "It's likely that a combination of these forces contributed to the formation of the deep canyons, but at present it's not clear to what degree each was involved," said Valerio Poggiali of the University of Rome, one of the researchers who studied the Cassini data. To discover what produced the flooded canyons on Titan, the scientists drew parallels to processes on Earth that carve channels into the surface. For example, Arizona's Grand Canyon was created by powerful erosion combined with surface uplift.
The rising terrain made the river cut deeper into the landscape, and over several million years the canyon was fully formed. The Lake Powell artificial reservoir, on the other hand, is shaped by frequent water-level variations: as the water level in the lake drops, the erosion rate of the river increases. The scientists are amazed to find formations on Titan that are similar to those on Earth, given the vast differences between the compositions of the two worlds. Earth is warm, with a rocky surface; Titan is frigid, with rivers of methane. Yet canyons appear in both worlds, making researchers wonder how these forces really work on other planets. Image Source: Wikipedia
Bigger black holes may be the usual attention-getters, but the smaller ones may be at least as important. A team using the Hubble Space Telescope has found a concentration of small black holes in the NGC 6397 globular star cluster (pictured above), 7,800 light-years away, and it is the first such concentration to have its mass and extent recorded. While the researchers had hoped to find an elusive intermediate-mass black hole, this represents a breakthrough of its own. Part of the challenge came from determining the mass. Scientists used the velocities of stars in the cluster, gathered over several years from both Hubble and the ESA's Gaia observatory, to find the masses of the black holes. The normally invisible bodies tugged stars around in "near random" orbits rather than the neatly circular or elongated paths you'd typically see around black holes. The group likely formed as the black holes fell toward the cluster's center through gravitational interactions with smaller stars; heavier stars tend to sink toward the middle even when they haven't collapsed into black holes. The findings could expand our understanding of black holes and the phenomena they create. A group like this may be a key source of gravitational waves, for instance. As long as researchers can gather more data, this surprise discovery could pay plenty of dividends.
In 1789 elites in the captaincy of Minas Gerais revolted, protesting the reassertion of imperial control and the imposition of new taxes. An early sign of Brazilian nationalism, the Minas Conspiracy involved prominent figures as well as military officers. The revolt failed and royal courts sentenced most of the conspirators to prison or exile. The only nonaristocratic member of the conspiracy, a military officer by the name of Joaquim Jose da Silva Xavier, became the scapegoat. Best known by his nickname, Tiradentes (Toothpuller)—one of his many professions was dentistry—he was hanged in 1793 and became a martyr for the cause of Brazilian independence. The connection between Portugal and Brazil was severed when Napoleon I and his armies invaded Portugal and Spain in 1807 and 1808. Napoleon, who had become emperor of France following the French Revolution (1789-1799), deposed and imprisoned the Spanish king Ferdinand VII in 1808. This left the Spanish American colonies isolated from royal control and set off a chain reaction that led to a series of long and bloody wars for independence. Brazil avoided a similar fate when the monarchy fled Lisbon shortly before French troops entered the city in 1807. With the help of their British allies, who were fighting Napoleon’s forces, the royal family and 10,000 Portuguese followers made an unprecedented voyage across the Atlantic to Brazil, transferring the center of the empire to Rio de Janeiro. For the first and last time in Western history, a European monarch would rule his empire from the colonies. Portugal’s prince regent, the future King John VI, arrived in Brazil in early 1808 and for the next 13 years ruled Portugal’s Asian, African, and American colonies from Rio de Janeiro. In 1815 John VI elevated Brazil to the status of a kingdom, placing it on an equal footing with Portugal. 
The presence of the monarchy and court in Rio brought Brazilian and Portuguese elites together and paved the way for a gradual transition to independence. By 1815 Napoleon had been defeated in Europe, opening the way for the monarchy to return to Lisbon. John VI, however, decided to remain in Brazil, but in 1820 the Portuguese army headed a revolution designed to bring about a constitutional government. The revolutionaries agreed that John VI would serve as constitutional monarch of the empire, but only on the condition that he return to Portugal. Threatened with the loss of his crown, John reluctantly left for Portugal in 1821. His 23-year-old son Pedro remained in the colony as prince regent of Brazil. Pedro and his advisers realized that revolutions in other Latin American countries were encouraging a movement for national independence in Brazil. A new and aggressive Cortes (parliament) in Portugal contributed to the demand for independence through a series of inept actions that offended many influential Brazilians. Portuguese members of the Cortes showed open hostility toward the Brazilian representatives, whom they regarded as unsophisticated residents of a backward province. The Cortes further alienated Brazilians by attempting to restore Brazil to colonial status. Rather than trying to resist the growing momentum for independence, Pedro and his advisers decided to take control of this movement. On September 7, 1822, after receiving orders from the Portuguese Cortes curtailing his authority in Brazil, Pedro declared Brazil’s independence. Thus Brazil became one of the few Latin American colonies to make a peaceful transition to independence. Pedro became Brazil’s first emperor as Pedro I. His greatest challenge was to keep this new nation of continental dimensions from fragmenting into several countries, as had happened in Spanish America. 
He hired Lord Thomas Cochrane, an admiral who had been thrown out of the British navy, to enforce his authority in Brazil. Cochrane defeated the small Portuguese fleet and crushed separatist revolts in the major regional centers along the coast. With a small, hired navy and very few battles, Brazil retained its unity after gaining its independence. Portugal recognized Brazil's independence in 1825. Despite his role in leading Brazil to independence, Pedro soon lost much of his support. He had been a resident of Brazil since the age of ten, but he was still Portuguese. Although Pedro abdicated the Portuguese throne, which he inherited in 1826, many Brazilians remained suspicious of his continued involvement in the affairs of his native Portugal. Members of the Brazilian elite were dissatisfied with Pedro for a number of reasons. Many of them opposed the new constitution written under his supervision and enacted in 1824. They were also displeased when he overrode the decision of the newly created Brazilian parliament and surrounded himself with Portuguese-born cabinet ministers. In the 1820s Pedro chose to renew a longstanding struggle with Argentina over the southern border of Brazil. The struggle erupted into the Cisplatine War (1825-1828). The war was unpopular with many Brazilians, especially after Brazil suffered a major military defeat at the hands of the Argentines in 1827. Faced with widespread opposition to his rule, Pedro abdicated his Brazilian throne in 1831 and returned to Portugal.
- This article is about flavor, the sensory impression. There is another article on Flavor (particle physics) for the particle property.
Flavor (or flavour) is the sensory impression of a food or other substance. It is determined by the three chemical senses of taste, olfaction (smell), and the so-called trigeminal senses, which detect chemical irritants in the mouth and throat.
Recycling is the recovery and reprocessing of waste materials for use in new products. The basic phases in recycling are the collection of waste materials, their processing or manufacture into new products, and the purchase of those products, which may then themselves be recycled. Typical materials that are recycled include iron and steel scrap, aluminum cans, glass bottles, paper, wood, and plastics. The materials reused in recycling serve as substitutes for raw materials obtained from such increasingly scarce natural resources as petroleum, natural gas, coal, mineral ores, and trees. Recycling can help reduce the quantities of solid waste deposited in landfills, which have become increasingly expensive. Recycling also reduces the pollution of air, water, and land resulting from waste disposal. There are two broad types of recycling operations: internal and external. Internal recycling is the reuse in a manufacturing process of materials that are a waste product of that process. Internal recycling is common in the metals industry, for example. The manufacture of copper tubing results in a certain amount of waste in the form of tube ends and trimmings; this material is remelted and recast. Another form of internal recycling is seen in the distilling industry, in which, after the distillation, spent grain mash is dried and processed into an edible foodstuff for cattle. External recycling is the reclaiming of materials from a product that has been worn out or rendered obsolete. An example of external recycling is the collection of old newspapers and magazines for repulping and their manufacture into new paper products. Aluminum cans and glass bottles are other examples of everyday objects that are externally recycled on a wide scale.
These materials can be collected by any of three main methods: buy-back centres, which purchase waste materials that have been sorted and brought in by consumers; drop-off centres, where consumers can deposit waste materials but are not paid for them; and curbside collection, in which homes and businesses sort their waste materials and deposit them by the curb for collection by a central agency. Society’s choice of whether and how much to recycle depends basically on economic factors. Conditions of affluence and the presence of cheap raw materials encourage human beings’ tendency to simply discard used materials. Recycling becomes economically attractive when the cost of reprocessing waste or recycled material is less than the cost of treating and disposing of the materials or of processing new raw materials. Ferrous products (i.e., iron and steel) can be recycled by both internal and external methods. Some internal recycling methods are obvious. Metal cuttings or imperfect products are recycled by remelting, recasting, and redrawing entirely within the steel mill. The process is much cheaper than producing new metal from the basic ore. Most iron and steel manufacturers produce their own coke. By-products from the coke oven include many organic compounds, hydrogen sulfide, and ammonia. The organic compounds are purified and sold. The ammonia is sold as an aqueous solution or combined with sulfuric acid to form ammonium sulfate, which is subsequently dried and sold as fertilizer. In the ferrous metals industry there are also many applications of external recycling. Scrap steel makes up a significant percentage of the feed to electric arc and basic oxygen furnaces. The scrap comes from a variety of manufacturing operations that use steel as a basic material and from discarded or obsolete goods made from iron and steel. One of the largest sources of scrap steel is the reprocessing of old automobile bodies.
Salvage operations on automobiles actually begin before they reach the reprocessor. Parts such as transmissions and electrical components can be rebuilt and resold, and the engine block is removed and melted down for recasting. After being crushed and flattened, the automobile body is shredded into small pieces by hammer mills. Ferrous metals are separated from the shredder residue by powerful magnets, while other materials are sorted out by hand or by jets of air. Only the plastics, textiles, and rubber from the residue are not reused. The same basic recovery procedures apply to washing machines, refrigerators, and other large, bulky steel or iron items. Lighter items such as steel cans are also recycled in large numbers. Secondary aluminum reprocessing is a large industry, involving the recycling of machine turnings, rejected castings, siding, and even aluminum covered with decorative plastic. The items are thrown into a reverberatory furnace (in which heat is radiated from the roof into the material treated) and melted while the impurities are burned off. The resulting material is cast into ingots and resold for drawing or forming operations. Beverage cans are another major source of recycled aluminum; in some countries, as many as two-thirds of all such cans are recycled. The primary source of used lead is discarded electric storage batteries. Battery plates may be smelted to produce antimonial lead (a lead-antimony alloy) for manufacture of new batteries or to produce pure lead and antimony as separate products. Though much used rubber was formerly burned, this practice has been greatly curtailed in most countries in order to prevent air pollution. Internal recycling is common in most rubber plants; the reprocessed product can be used wherever premium-grade rubber is not needed. External recycling has proved a problem over the years, as the cost of recycling old or worn-out tires has far exceeded the value of the reclaimed material. 
Shredded rubber can be used as an additive in asphalt pavements, and discarded tires may be used as components of swings and other assorted recreational climbing equipment in “tire playgrounds” for children. One of the most readily available materials for recycling is paper, which alone accounts for more than one-third by weight of all the material deposited in landfills in the United States. The stream of wastepaper consists principally of newspaper; office, copying, and writing paper; computer paper; coloured paper; paper tissues and towels; boxboard (used for cereal and other small boxes); corrugated cardboard; and kraft paper (used for paper bags). These papers must usually be sorted before recycling. Newsprint and cardboard can be repulped to make the same materials, while other types of scrap paper are recycled for use in low-quality papers such as boxboard, tissues, and towels. Paper intended for printing-grade products must be de-inked (often using caustic soda) after pulping; for some uses the stock is bleached before pressing into sheets. Smaller amounts of recycled paper are made into cellulose insulation and other building products. Bark, wood chips, and lignin from sawmills, pulp mills, and paper mills are returned to the soil as fertilizers and soil conditioners. The kraft process of papermaking produces a variety of liquid wastes that are sources of such valuable chemicals as turpentine, methyl alcohol, dimethyl sulfide, ethyl alcohol, and acetone. Sludges from pulp and paper manufacture and phosphate slime from fertilizer manufacture can be made into wallboard. Glass makes up about 6 percent by weight of the material in municipal waste streams. Glass is an easily salvageable material but one that is difficult to recover economically. Though enormous numbers of glass containers are used throughout the world, much of this glass is still not recycled, because the raw materials are so inexpensive that there is scant economic motive to reuse them. 
Even those glass containers that are returned by consumers in their original form sooner or later become damaged or broken. One problem in recycling glass is separating it from other refuse. Another problem is that waste glass must be separated by colour (i.e., clear, green, and brown) before it can be reused to make new glass containers. Despite these difficulties, anywhere from 35 to 90 percent of cullet (broken or refuse glass) is currently used in new-glass production, depending on the country. Plastics account for almost 10 percent by weight of the content of municipal garbage. Plastic containers and other household products are increasingly recycled, and, like paper, these must be sorted at the source before processing. Various thermoplastics may be remelted and reformed into new products. Thermoplastics must be sorted by type before they can be remelted. Thermosetting plastics such as polyurethane and epoxy resins, by contrast, cannot be remelted; these are usually ground or shredded for use as fillers or insulating materials. So-called biodegradable plastics include starches that degrade upon exposure to sunlight (photodegradation), but a fine plastic residue remains, and the degradable additives preclude recycling of these products. Construction and demolition (C&D) debris (e.g., wood, brick, portland cement concrete, asphalt concrete and metals) can be reclaimed and reused to help reduce the volume taken up by such materials in landfills. Concrete debris consists mostly of sand and gravel that can be crushed and reused for road subbase gravel. Clean wood from C&D debris can be chipped and used as mulch, animal bedding, and fuel. Asphalt can be reused in cold-mix paving products and roofing shingles. Recovered wallboard can be used as cat litter. As landfill space becomes more expensive, more of these materials are being recycled. Domestic refuse (municipal solid waste) includes garbage and rubbish. 
Garbage contains highly decomposable food waste (e.g., kitchen scraps), while rubbish is the dry, nonputrescible component of refuse. Once glass, plastics, paper products, and metals have been removed from domestic refuse, what remains is essentially organic waste. This waste can be biologically decomposed and turned into humus, which is a useful soil conditioner, and kitchen scraps, when decomposed with leaves and grass in a compost mound, make an especially useful soil amendment. These practices help reduce the amount of material contributed by households to landfills. Treated wastewater (domestic sewage) can be reclaimed and reused for a variety of purposes, including golf course and landscape irrigation. With achievement of appropriate (secondary) treatment levels, it may be reused for the irrigation of certain agricultural crops. After very high levels of advanced (or tertiary) treatment and purification, it may even be used to supplement drinking water supplies. However, because of public resistance to the direct reuse of treated sewage for domestic purposes, recovered water must be recycled indirectly. This is done by injecting it into the ground or storing it in ponds and allowing it to seep into naturally occurring aquifers so that it is further purified as it slowly moves through the geologic strata. In some regions of the world where water supplies are inadequate because of recurring drought and rapidly expanding populations, the recycling and reuse of treated wastewater is a virtual necessity. See wastewater treatment.
“Everyone” knows that 3×3 + 4×4 = 5×5. This little factoid, and other Pythagorean triplets, can be the basis of a nice set of puzzles. Here’s the first. If you draw a 5×5 square on graph paper, how can you cut it up (following the lines on the graph paper) so that the pieces can be rearranged to form a 3×3 square and a 4×4 square? This is not so hard to do. Here’s one possible solution: there’s the 5×5 square, and its four pieces can be rearranged into the two smaller squares. As it turns out, you can’t do it with fewer pieces than this – each corner of the 5×5 square must be in a different piece. But there are other pairs of square numbers that add up to a square number. For example, 5×5 + 12×12 = 13×13. I managed to find a way to cut a 13×13 square into five pieces that could make a 5×5 square and a 12×12 square. Can it be done with only four? I don’t know. And what about 7×7 + 24×24 = 25×25? How few pieces can you cut a 25×25 square into, so the pieces can be rearranged into a 7×7 square and a 24×24 square? This is already enough, by way of puzzles, to keep some schoolkids occupied for a loooooong time – but there are infinitely many Pythagorean triplets, so there are infinitely many puzzles of this type. To find more Pythagorean triplets, use these steps:
- choose two numbers, call them M and N. Make sure M is bigger than N.
- work out 2×M×N, and call this P
- work out M×M − N×N, and call this Q
- work out M×M + N×N, and call this R.
- You’ll notice that P×P + Q×Q = R×R.
Then the puzzle is this: how can you chop up an R×R square, respecting the lines on the graph paper, so that the pieces can be rearranged into a P×P square and a Q×Q square, using as few pieces as possible? Note that if you allow the squares to be cut into pieces of any shape, or with straight cuts in any direction (not just parallel to the sides of the squares), then at most 7 pieces are enough to solve this puzzle.
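The triplet-generating steps above can be sketched as a short program (a minimal illustration of the M, N recipe; the function name is my own):

```python
def pythagorean_triplet(m, n):
    """Make a Pythagorean triplet from whole numbers m > n > 0,
    following the recipe in the text: P = 2*M*N, Q = M*M - N*N,
    R = M*M + N*N, so that P*P + Q*Q = R*R."""
    if not m > n > 0:
        raise ValueError("need m > n > 0")
    return 2 * m * n, m * m - n * n, m * m + n * n

print(pythagorean_triplet(2, 1))  # → (4, 3, 5), the familiar 3-4-5 triangle
print(pythagorean_triplet(3, 2))  # → (12, 5, 13)
```

Any pair m > n gives a triplet, so the supply of puzzles really is endless.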
This picture (public domain, from Wikipedia) shows how. Some other formulae that give interesting puzzles:
- 1×1 + 7×7 = 5×5 + 5×5: how can you chop up two 5×5 squares, so the pieces make a 7×7 square and a 1×1 square?
- 3×3×3 + 4×4×4 + 5×5×5 = 6×6×6: how can you chop up a 6×6×6 cube, so that the pieces can be rearranged to make the three smaller cubes?
- 1×1×1 + 12×12×12 = 9×9×9 + 10×10×10: how can you chop up two cubes, 9 and 10 units on their sides, and make up two cubes, 1 and 12 units on their sides?
Numbers like the last one (that can be written as the sum of two cubes in two different ways) are called “taxicab” numbers (due to an interesting incident involving two mathematicians, a hospital and taxicab number 1729). There are infinitely many taxicab numbers, but I don’t have a neat formula for them handy. If you have students who’ve exhausted the puzzles on this page, ask them to find one for you!
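There may be no neat formula for taxicab numbers, but a brute-force search turns them up quickly. This sketch (my own addition, not part of the original puzzle set) finds every number below a limit that is a sum of two positive cubes in two different ways:

```python
from collections import defaultdict

def taxicab_numbers(limit):
    """Find numbers below `limit` expressible as a sum of two
    positive cubes in at least two different ways."""
    sums = defaultdict(list)
    a = 1
    while a ** 3 < limit:
        b = a  # keep b >= a so each pair is counted once
        while a ** 3 + b ** 3 < limit:
            sums[a ** 3 + b ** 3].append((a, b))
            b += 1
        a += 1
    return {n: ways for n, ways in sums.items() if len(ways) >= 2}

result = taxicab_numbers(20000)
print(min(result))   # → 1729, the number from the taxicab anecdote
print(result[1729])  # → [(1, 12), (9, 10)]
```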
If you’ve ever tried to untangle a pair of earbuds, you’ll understand how loops and cords can get twisted up. DNA can get tangled in the same way, and in some cases, has to be cut and reconnected to resolve the knots. Now a team of mathematicians, biologists and computer scientists has unraveled how E. coli bacteria can unlink tangled DNA by a local reconnection process. The math behind the research, recently published in Scientific Reports, could have implications far beyond biology. E. coli bacteria can cause intestinal disease, but they are also laboratory workhorses. E. coli’s genome is a single circle of double-stranded DNA. Before an E. coli cell divides, that circle is copied. Opening up the double helix to copy it throws twisting strains elsewhere down the molecule — just as uncoiling a cord in one place will make it over-coil somewhere else. The process results in two twisted loops of DNA that pass through each other like a “magic rings” trick. To separate the rings, E. coli uses an enzyme called topoisomerase IV, which precisely cuts a DNA segment, allows the loops to pass through the break and then reseals the break. Because topoisomerase IV is so important to bacteria, it’s a tempting target for antibiotics such as ciprofloxacin. But when topoisomerase IV is absent, another enzyme complex can step in to carry out this unlinking, although less efficiently. This complex introduces two breaks and unlinks by reconnecting the four loose ends. “There are other ways to unlink the rings, but how do they do it?” said Mariel Vazquez, professor of mathematics and of microbiology and molecular genetics at the University of California, Davis. One pathway, Vazquez said, is that the reconnection enzymes remove one link at a time until they get to zero. That solution was favored by the biologists. But mathematicians look at the problem slightly differently. They understand the DNA as a flexible curve in three-dimensional space. 
Certain points on the curve can be broken and reconnected. To a mathematician, there are many potential routes for reconnection processes to work — including some where the number of links actually goes up before going back down. “These are all the same to a mathematician, but not to a biologist,” Vazquez said. To determine the most likely route and resolve the problem, they turned to computational modeling. Vazquez and colleagues developed computer software with DNA represented as flexible chains to model the possible locations where reconnection enzymes could cut and reconnect the chains. Overall, they modeled millions of configurations representing 881 different topologies, or mathematical shapes, and identified hundreds of minimal pathways to get two DNA circles linked in up to nine places down to two separate circles. The computer model confirmed the biologists’ hunch: Undoing one link at a time is the preferred route to separate the circles of DNA. The results could have implications far beyond DNA biology, Vazquez said. There are other examples in nature of objects that collide, break and reconnect — like the dynamics of linked fluid vortices, or the patterns formed by smoke rings, for example. When solar flares are ejected from the sun, powerful magnetic field lines cross and reconnect. “The math is not DNA specific, and the computation can be adapted,” Vazquez said. Co-authors on the paper are: at UC Davis, Robert Stolz and Michelle Flanner; Masaaki Yoshida and Koya Shimokawa, Saitama University, Japan; Reuben Brasher, Microsoft, San Francisco; Kai Ishihara, Yamaguchi University, Japan; and David Sherratt, University of Oxford, U.K. The work at UC Davis was supported by the National Science Foundation, National Institutes of Health, Japan Society for the Promotion of Science and The Wellcome Trust. This story originally appeared on UC Davis News. Listen to this story on our podcast, Three Minute Egghead
Check here frequently for updates on Appalachian Fire Science and summaries written in plain language! Author: Waldrop, T. Source: Fire Science Brief No. 117, July 2010. More than 3 million acres have burned EVERY YEAR since 1999. From 2002 to 2013, 3.5% of the total acreage of the 50 states burned. (That's bigger than the size of Wyoming.) Controlled burns are a powerful tool in the fight against non-native species, but are they the best way to promote oak regeneration? In short — it's complicated. There is a general consensus that we don’t know enough about how fire affects Appalachian amphibians and reptiles. In a 2013 paper, Clemson University researchers took a step in the right direction by tracking American toads (Anaxyrus americanus) to explore how fuel reduction treatments affect toad breeding, mortality, and movement. Wild pig populations are expanding across the US and recently pushed into the Appalachian Mountains. Most studies on pigs are concerned with their destructive behavior, but researchers are struggling to keep up with the range expansion. The fire science world has only caught a glimpse of what might happen if pigs move in after a controlled burn.
Around 252 million years ago, life on Earth collapsed in spectacular and unprecedented fashion, as more than 96 percent of marine species and 70 percent of land species disappeared in a geological instant. The so-called end-Permian mass extinction—or more commonly, the “Great Dying”—remains the most severe extinction event in Earth’s history. Scientists suspect that massive volcanic activity, in a large igneous province called the Siberian Traps, may have had a role in the global die-off, raising air and sea temperatures and releasing toxic amounts of greenhouse gases into the atmosphere over a very short period of time. However, it’s unclear whether magmatism was the main culprit, or simply an accessory to the mass extinction. MIT researchers have now pinned down the timing of the magmatism, and determined that the Siberian Traps erupted at the right time, and for the right duration, to have been a likely trigger for the end-Permian extinction. According to the group’s timeline, explosive eruptions began around 300,000 years before the start of the end-Permian extinction. Enormous amounts of lava both erupted over land and flowed beneath the surface, creating immense sheets of igneous rock in the shallow crust. The total volume of eruptions and intrusions was enough to cover a region the size of the United States in kilometer-deep magma. About two-thirds of this magma likely erupted prior to and during the period of mass extinction; the last third erupted in the 500,000 years following the end of the extinction event. This new timeline, the researchers say, establishes the Siberian Traps as the main suspect in killing off a majority of the planet’s species. “We now can say it’s plausible,” says Seth Burgess, who received his PhD last year from MIT’s Department of Earth, Atmospheric, and Planetary Sciences and is now a postdoc at the U.S. Geological Survey. 
“The connection is unavoidable, because it’s clear these two things were happening at the same time.” Burgess and Sam Bowring, the Robert R. Shrock Professor of Earth and Planetary Science at MIT, have published their results in the journal Science Advances. A singular event Around the time of the end-Permian extinction, scientists have found that the Earth was likely experiencing a sudden and massive disruption to the carbon cycle, abnormally high air and sea temperatures, and an increasingly acidic ocean—all signs of a huge and rapid addition of greenhouse gases to the atmosphere. Whatever triggered the mass extinction, scientists reasoned, must have been powerful enough to generate enormous amounts of greenhouse gases in a short period of time. The Siberian Traps have long been a likely contender: The large igneous province bears the remains of the largest continental volcanic event in Earth’s history. “It’s literally a singular event in Earth history—it’s a monster,” Burgess says. “It makes Yellowstone … look like the head of a pin.” It’s thought that as the region erupted, magma rose up through the Earth’s crust, essentially cooking sediments along the way and releasing enormous amounts of greenhouse gases like carbon dioxide and methane into the atmosphere. “The question we tried to answer is, ‘Which came first, mass extinction or the Siberian Traps? What is their overall tempo, and does the timing permit magmatism to be a trigger for mass extinction?'” Burgess says. For the answer, Burgess, Bowring, and colleagues traveled to Siberia on multiple occasions, beginning in 2008, to sample rocks from the Siberian Traps. For each expedition, the team traveled by boat or plane to a small Siberian village, then boarded a helicopter to the Siberian Traps. From there, they paddled on inflatable boats down a wide river, chiseling out samples of volcanic rock along the way. 
“We’d have a couple of hundred kilos of rocks, and would go to the market in Moscow and buy 15 sport duffle bags, and in each we’d put 10 kilos of rocks … and hope we could get them all on the plane and back to the lab,” Burgess recalls. Back at MIT, Burgess and Bowring dated select samples using uranium/lead geochronology, in which Bowring’s lab specializes. The team looked for tiny crystals of either zircon or perovskite, each of which contain uranium and lead, the ratios of which they can measure to calculate the rock’s age. The team dated various layers of rock to determine the beginning and end of the eruptions. They then compared the timing of the Siberian Traps to that of the end-Permian extinction, which they had previously determined using identical techniques. “That’s important, because we can compare green apples to green apples. If everything is done the same, there’s no bias,” Burgess says. “Now we’re able to say magmatism definitely preceded mass extinction, and we can resolve those two things outside of uncertainty.” Richard Ernst, a scientist-in-residence at Carleton University in Ottawa, Ontario, says the new timeline establishes a definitive, causal link between the Siberian Traps and the end-Permian extinction. “This paper nails it,” says Ernst, who was not involved in the study. “Given that they have dated a portion of the Siberian Traps occurring just before, during, and only for a short time after the extinction, this is the ‘smoking gun’ for this large igneous province being fully correlated with the extinction. At this point, additional dating and other studies will simply provide more details on the link.” Now that the team has resolved the beginning and end of the Siberian Traps eruptions, Burgess hopes others will take an even finer lens to the event, to determine the tempo of magmatism in the 300,000 years prior to the mass extinction. 
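The age calculation behind uranium/lead geochronology rests on the standard radioactive-decay law. A simplified sketch for the uranium-238 to lead-206 system, assuming no initial lead and using illustrative numbers rather than the study's actual measurements:

```python
import math

LAMBDA_U238 = 1.55125e-10  # decay constant of uranium-238, per year

def u_pb_age(pb206_u238_ratio):
    """Age in years from a measured 206Pb/238U atomic ratio.

    From the decay law N = N0 * exp(-lambda * t), the daughter/parent
    ratio grows as exp(lambda * t) - 1, so t = ln(1 + ratio) / lambda.
    """
    return math.log(1.0 + pb206_u238_ratio) / LAMBDA_U238

# A ratio near 0.0399 corresponds to roughly 252 million years,
# about the age of the end-Permian boundary.
print(f"{u_pb_age(0.0399) / 1e6:.1f} Myr")
```

In practice the lab cross-checks the 238U and 235U decay systems and corrects for common lead; this sketch shows only the core arithmetic.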
“We don’t know if a little erupted for 250,000 years, and right before the extinction, boom, a vast amount did, or if it was more slow and steady, where the atmosphere reaches a tipping point, and across that point you have mass extinction, but before that you just have critically stressed biospheres,” Burgess says. “Now we’ve pinned it down in time, and others can go in with other techniques to get a more fully fleshed out timeline. But we need it to start someplace, and that’s what we’ve got.” This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.
Tumors of the skull base were at one time linked to a poor prognosis. Advances in microsurgical techniques, an increased understanding of both the skull base anatomy and behavior of these disease processes, and improvements in neuroimaging have allowed such lesions to be successfully treated. The skull base is a complex irregular bone surface on which the brain rests. Contained within this region are major blood vessels that supply the brain with essential nutrients and important nerves with their exiting pathways. The floor of the skull is divided into three regions from front to back: the anterior, the middle, and the posterior compartments. The anterior compartment is the region above a person’s eyes; the middle compartment is the region behind the eyes, centered around the pituitary gland, an organ required for proper hormonal function. The posterior compartment contains the brainstem and the cerebellum. The brainstem is the connection between the brain and spinal cord, containing the origin of nerves involved in the control of breathing, blood pressure, eye movements, swallowing, etc. This connection occurs through the large hole, known as the foramen magnum, within the center of the posterior compartment. The cerebellum, lying behind the foramen magnum, is involved with coordination and balance. The roof of the skull base is composed of the brain itself and a thick sheet of tissue on which the brain rests, called the tentorium. Adding to the complexity of this region is the fact that each compartment of the skull base is at a different level. The anterior compartment is highest and the posterior compartment lowest, when a person is standing and looking forward.
The anti-American sentiment that figured significantly in Spain's relations with NATO had its roots in the historical rivalry between the two countries for control of the territories of the New World. The Spanish-American War ended this rivalry, stripping Spain of its remaining colonies and leaving a residue of bitterness toward the United States. In the years following the Spanish-American War, economic issues dominated relations between Spain and the United States, as Spain sought to enhance its trading position by developing closer commercial ties with the United States as well as with Latin America. A series of trade agreements signed between Spain and the United States in 1902, 1906, and 1910 led to an increased exchange of manufactured goods and agricultural products that benefited Spain's domestic economy. Cultural contacts and tourism also increased. The emotions of the American public were stirred profoundly by the outbreak of the Civil War in Spain, and approximately 3,000 United States citizens volunteered to serve in the Spanish Republican Army, although the United States government remained adamantly neutral. Following the Nationalist victory, much of public opinion in the United States condemned Franco's regime as a fascist dictatorship, but the United States government participated in various Allied agreements with Spain, aimed at ensuring that Franco would not permit the Iberian Peninsula to be used by Adolf Hitler against Allied forces. The 1953 Pact of Madrid between Spain and the United States provided for mutual defense as well as for United States military aid, and it brought to an end Spain's postwar isolation. It did not end anti-Americanism in Spain, however. Francoist leaders resented having to accept what they considered to be insufficient military supplies in return for basing rights.
They also chafed at United States restrictions against the use of American equipment in defending Spain's North African territories in 1957. This anti-American sentiment was bipartisan in Spain. Whereas Francoists resented the United States for its democratic form of government, the opposition parties in Spain perceived the United States as the primary supporter of the Franco regime and therefore as a major obstacle to the democratization of Spain. Following the death of Franco in 1975, the United States welcomed the liberalization of the Spanish regime under King Juan Carlos and sought to bring Spain further into Western military arrangements. In 1976 the bilateral agreement between Spain and the United States was transformed into a Treaty of Friendship and Cooperation. In addition to renewing United States basing rights in return for United States military and economic aid, this treaty provided for a United States-Spanish Council intended to serve as a bridge to eventual Spanish membership in NATO. During the early years of democratic rule, the government's focus was on consolidating the parliamentary system, and foreign policy issues received less attention. However, a point of contention persisted between the governing UCD and the Socialist opposition over Spain's relations with NATO and with the United States. When Calvo replaced Suarez as prime minister in 1981, he made vigorous efforts to gain approval for Spanish membership in NATO, and shortly after this was accomplished a new executive agreement on the use of bases in Spain was signed with the United States in July 1982. This agreement was one of a series of renewals of the basic 1953 arrangement, providing for United States use of strategic naval and air bases on Spanish soil in exchange for United States military and economic assistance. Many Spaniards resented the presence of these bases in Spain, recalling the widely publicized photograph of United States president Dwight D. 
Eisenhower, throwing his arms around Franco when the first agreement on bases was signed. There were occasional popular protests against these reminders of United States support for the dictatorship, including a demonstration during United States president Ronald Reagan's 1985 visit to Spain. The Socialists had consistently advocated a more neutralist, independent stance for Spain, and when they came to power in October 1982, Gonzalez pledged a close examination of the defense and cooperation agreements with the United States. A reduction in the United States military presence in Spain was one of the stipulations contained in the referendum, held in 1986, on continued NATO membership. In keeping with this, the prime minister announced in December 1987 that the United States would have to remove its seventy-two F-16 fighter-bombers from Spanish bases by mid-1991. Spain also had informed the United States in November that the bilateral defense agreement, which opinion polls indicated was rejected overwhelmingly by the Spanish population, would not be renewed. Nevertheless, in January 1988 Spain and the United States did reach agreement in principle on a new base agreement to last eight years. The new military arrangements called for a marked reduction of the United States presence in Spain and terminated the United States military and economic aid that had been tied to the defense treaty. Source: U.S. Library of Congress
ISCII and Unicode Representation
ISCII:
- It was proposed in the eighties.
- It is a single representation code for all the scripts of the Indian languages.
- It assigns codes for the matras (vowel extensions).
- Devanagari was kept as the base for the development of the code.
- ISCII is an 8-bit code: the lower 128 positions carry ordinary ASCII, and the characters of the Indian scripts are assigned to the upper 128 positions.
- The ISCII code is well suited to representing the syllables of Indian languages.
Unicode:
- It is a standard for multilingual documents.
- Its first 128 code points coincide with ASCII, so plain ASCII text is already valid Unicode.
- More than 65,000 different characters are available in its basic plane alone.
- The code space can be viewed as a stack of planes, subdivided into chunks of 128 consecutive codes.
- Data-processing software uses Unicode to identify the script of text values.
[The sample ISCII and Unicode code tables are not reproduced here.]
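In place of the sample tables, here is a small substitute (my own example) showing where Devanagari characters sit in Unicode, using Python's built-in unicodedata module:

```python
import unicodedata

# The Devanagari block occupies U+0900 through U+097F in Unicode's
# basic plane; a few letters and their code points:
for ch in "अकख":
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# U+0905  DEVANAGARI LETTER A
# U+0915  DEVANAGARI LETTER KA
# U+0916  DEVANAGARI LETTER KHA
```

The matras (dependent vowel signs) live in the same block, e.g. U+093E DEVANAGARI VOWEL SIGN AA.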
Multiple Representations Sums, Differences and Products Lesson 7 of 9 Objective: SWBAT use multiple representations to express the relationships in the constant sum and difference problems. It has been both a surprise and a delight for me to realize that simply by providing students with different representations of the same problem they have already been working on, you can get them to think about the problem in different ways. At first, I would have thought that this warm-up would be too easy. To me creating one representation from the others is very straight-forward. After trying this approach in my class, I now realize that for my students it provides a new cognitive challenge and a new way for them to think. During the class, my students initially thought that each of the three representations I gave them related to the same problem. After I clarified that these were three different problems, the lesson got much more interesting. I gave the students thirty minutes to figure out as much missing information about each function as they could. On this day, the transition from the warm-up to the investigation was fluid. Once students accomplished the warm-up, they transitioned directly to the main investigation. This is a great chance for reflection: - What did you learn today? - How does looking at the multiple representations of each problem help you understand the problem? - What is still confusing for you? - Out of all the representations, which one helps you understand the most: data tables, graphs, or equations? Choose some questions that you think will help your students reflect (or let them choose) and give them time to write about their answers.
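The three representations mentioned in the reflection questions (data table, graph, equation) can be generated mechanically. A minimal sketch for a constant-sum problem; the total of 20 and the function name are my own illustrative choices:

```python
def constant_sum_table(total, xs):
    """Data-table representation of the constant-sum relationship
    x + y = total, i.e. the equation y = total - x."""
    return [(x, total - x) for x in xs]

# Three representations of one problem:
#   equation: y = 20 - x
#   table:    the (x, y) pairs printed below
#   graph:    plotting the pairs gives a line with slope -1
for x, y in constant_sum_table(20, range(0, 21, 5)):
    print(x, y)
```

A constant-difference table is the same idea with y = x - d, and a constant-product table with y = p / x.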
Vocabulary Review: Making Comparisons
In this vocabulary practice learning exercise, students rephrase 11 sentences by replacing the boldfaced word with a word or phrase that has the same meaning.

Island of the Blue Dolphins: Vocabulary and Spelling
Accompany a classroom novel study of Island of the Blue Dolphins by Scott O'Dell with a spelling and vocabulary packet. Scholars define several terms and choose three of nine spelling activities to complete over the course of a week. 4th - 6th English Language Arts CCSS: Adaptable

Language Focus and Vocabulary Unit 4.1: Present Continuous and Animals
Combine a lesson about animals and the present continuous (progressive) tense with a grammar and vocabulary instructional activity. Kids complete sentences with the correct -ing verb, then finish a crossword puzzle about certain animals.... 3rd - 8th English Language Arts CCSS: Adaptable

Smart Solutions: Challenge Activities (Theme 6)
Smart Solutions is the theme of a unit comprised of activities designed to challenge minds and stretch writing muscles. Enthusiastic writers create their own comic strip, write a problem-solution essay, an opinion letter, a funny story, a... 3rd English Language Arts CCSS: Adaptable

5-Day Vocabulary Teaching Plan
Reinforce important reading skills with a set of vocabulary lesson plans. Middle schoolers complete sentences, play word games, finish analogies, and build their growing vocabulary with a packet of helpful and applicable graphic organizers. 5th - 8th English Language Arts CCSS: Adaptable

¡Celebramos Kwanzaa!
Celebrate Kwanzaa through the fictional story Celebra Kwanzaa con Botitas y sus gatitos to delightfully explain the seven principles of Kwanzaa. Dual language learners participate in reading and vocabulary activities... 3rd - 6th English Language Arts CCSS: Adaptable

Vocabulary Games for Middle School
Spice up your vocabulary instruction with some games!
Here is a collection of various game descriptions, organized into three different groups. You will definitely be able to find a game or two to assist your pupils with that tricky task... 3rd - 9th English Language Arts CCSS: Adaptable

Who's Got Game? The Lion or the Mouse?
Discuss bullying, folk tales, and more using this resource. Learners read the story The Lion and the Mouse by Toni and Slade Morrison, engage in cause and effect activities, make predictions, and discuss bullying. This is a motivating... 3rd - 5th English Language Arts
From the snail darter and the Tellico Dam to the spotted owls and old-growth forest loggers, the Endangered Species Act has seen its share of controversy. Since it was enacted on Dec. 28, 1973, the ESA has often been called the world's most powerful law for species preservation, and with that title comes a lot of attention. While the ESA has been credited with saving 99 percent of listed species from extinction and has put hundreds more on the road to recovery, it is still misunderstood by many. So, as the act celebrates its 40th anniversary in 2013, let's set the record straight. Today, the ESA protects 1,400 domestic species and 614 foreign species. The story behind the decline of these species is much the same: one of habitat loss and degradation. The inevitable development pressures that accompanied our growth into a nation of more than 300 million people have threatened the health and well-being of native fish, wildlife and plants. Among other things, the ESA is trying to protect habitat and ecosystems that formed over eons and to reverse species declines that, in some instances, have been 200 years in the making. One of the most important things the U.S. Fish and Wildlife Service does for endangered species is to conserve their habitat. That's where the National Wildlife Refuge System comes in. Fifty-eight refuges were established to protect endangered species; 248 refuges are home to more than 280 endangered or threatened species. Conserving the Future makes clear that endangered species recovery is central to the Service's vision for planning and strategic growth of the Refuge System. The vision recognizes refuges' key role in the recovery of several species, including the bald eagle, Aleutian Canada goose and brown pelican.
This issue of Refuge Update celebrates ongoing endangered species conservation work that refuges are doing, such as for the namesake species at Mississippi Sandhill Crane Refuge, for Moapa dace at Moapa Valley Refuge in Nevada, for the Delmarva fox squirrel at East Coast refuges and for the blunt-nosed leopard lizard at Pixley Refuge in California. Without the Refuge System, many endangered species would not be recovering. Endangered species recovery is complex and difficult work, often requiring substantial time and resources. Many of the species that have fully recovered were those originally listed under federal protection 40 years ago, including the American alligator, bald eagle and gray wolf. But the number of species that have recovered and can be delisted is not a complete measure of the ESA's success. Stabilizing a species is also a success. So is preventing a species from going extinct. Since 1973, 25 species have recovered to the point that they no longer need ESA protection, two more have been proposed for delisting, and nine more have recommendations for delisting. In addition, 25 species have been reclassified, three are currently proposed and 39 have recommendations for reclassification from endangered to the less critical category of threatened. Plus, 647 species are considered stable or improving. Hundreds of others have been prevented from going extinct. Each one of these outcomes is a real measure of success. Successes will continue because, as recommended in Conserving the Future, the strategic growth of the Refuge System will be guided by priorities identified in threatened and endangered species recovery plans that have identified land acquisition as a conservation component. Valerie Fellows is a communications specialist in the Endangered Species Program. A state-by-state listing of endangered species success stories is at http://www.fws.gov/endangered/map/index.html.
Angela has taught college Microbiology and has a doctoral degree in Microbiology. Reach down and put your hand on the back of your lower leg. What you should be feeling is the largest of your calf muscles, the gastrocnemius muscle. Now, start sliding your hand down your calf. The muscle should begin to narrow until all you can feel is a thin, cord-like structure that runs from the narrow point of your calf and seems to dead end on your heel bone, the calcaneus bone. You probably already know that this is your Achilles tendon. But what exactly is a tendon, anyway? The simplest answer is that a tendon connects skeletal muscles to bones. As is generally the case in science, however, the story of the tendon requires a few more chapters than that. Every structure in your body can be broken down into four basic types of tissues. Epithelial tissue covers surfaces and lines cavities. Muscle tissue generates force and movement. Nervous tissue detects bodily changes and relays messages. And connective tissue protects and supports organs and other tissues. Tendons fall into the connective tissue category. A complete tendon is built by building up and combining multiple layers of connective tissue. Let's examine the building process, beginning at the microscopic level. The primary building blocks of tendons are collagen fibers. These fibers are very strong, flexible, and resistant to damage from pulling stresses. Collagen fibers are usually arranged in parallel bundles, which helps multiply the strength of the individual fibers. Now, do you remember that the function of a tendon is to connect muscle to bone? Well, the structure of the tendon and the muscle are literally connected and intertwined. Deep inside a muscle are individual muscle fibers.
Collagen, in conjunction with other types of connective tissue, forms very thin sheaths that keep the individual muscle fibers separate from each other. This layer is called the endomysium. 'Endo-' translates to 'within', and '-mysium' translates to 'muscle'. Groups of 10 to 100 muscle fibers securely wrapped in endomysium sheaths form fascicles. Collagen from the endomysium layers extends out and combines with a larger layer of collagen that covers each fascicle. This layer is called the perimysium, 'peri-' meaning 'around.' By combining the many individual muscle fascicles, you get an entire muscle, such as the gastrocnemius, or calf muscle, from the introduction. Surrounding each muscle is another collagen layer called the epimysium ('epi-' means 'upon'). This layer is also composed of lengths of collagen fibers from the layers beneath it, the perimysium and endomysium. Now, we have one more layer to look at before we circle back to tendons. Often there is more than one muscle responsible for a specific movement. The muscle of your upper arm that bends your elbow is generally known as the biceps muscle. The bending of your elbow, however, requires two major muscles in your upper arm, the well-known biceps brachii and the lesser-known brachialis. Each of these muscles is wrapped in its own epimysium, but they are also held to each other by another layer of collagen called deep fascia. This layer holds the muscles together, allows for free movement of those muscles, and provides the blood supply. The collagen of the deep fascia is also connected to the collagen from the lower muscle layers. Finally, we can get back to the tendon. Each of the four layers from above is composed primarily of collagen. Collagen from the deepest endomysium layer all the way up to the collagen of the deep fascia combines to form the tendon.
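The stack of collagen layers described above, from innermost to outermost, can be summarized in a small data-structure sketch (illustrative only; the fiber counts are those given in the text):

```python
# Collagen layers of a muscle, listed innermost to outermost,
# paired with the structure each layer wraps.
layers = [
    ("endomysium", "an individual muscle fiber"),
    ("perimysium", "a fascicle of 10 to 100 fibers"),
    ("epimysium", "an entire muscle"),
    ("deep fascia", "a group of muscles working together"),
]

for name, wraps in layers:
    print(f"{name}: wraps {wraps}")
```

Reading the list top to bottom retraces the build-up from single fiber to muscle group; the tendon is the cord into which collagen from all four layers converges.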
So, you can imagine that where the cord-like structure of your Achilles tendon meets the calf muscle, it begins branching into the many collagen layers that infiltrate the muscle. When you flex or move your lower leg, you engage the Achilles tendon and the calf muscle, since they are inextricably linked. This ensures that the force of the muscle contraction is spread out throughout the entire length and depth of the muscle. It also ensures that no portion of the muscle experiences more stress than the others, protecting the muscle from tearing. We have examined the connection between muscle and tendon, but for the body to move, the bones must also move. There is a crucial final connection between muscle, tendon, and bone. The cord-like bundle of tendon collagen extends out of the muscle and attaches to the layer of connective tissue that surrounds the bones, the periosteum. Each muscle now has a strong, flexible attachment to bone, allowing for motion with minimal damage to the muscle fibers. There is one additional structural element that is found on specific tendons. In the wrists and ankles, many large and crucial tendons come together in a small space. These tendons are packed together tightly and are required to shift and move rhythmically during activities like walking and running. These tendons are encased in a layer of connective tissue called a tendon sheath. The sheaths contain a slippery film of synovial fluid that acts to smooth movement and reduce friction. Without this layer, the tendons would quickly be damaged by the high friction of movement. The cord-like structure of the tendon is not the only conformation a tendon can take. The muscles of the skull are held together by tendons with the same basic building blocks (collagen), but they take a different shape. Go ahead and feel the top of your head; you won't locate a dense tendon cord like the Achilles. Instead of forming a tight cord, the collagen fibers spread out into a flat, fan shape.
This type of tendon is called an aponeurosis. Specifically, the top of your skull is covered by the galea aponeurotica. We already stated that the primary function of a tendon is to connect muscles to bones. They also have another important function that is more sensory than structural. In the junction between the tendon and bone, a series of nerve cells wind around a layer of collagen fibers, creating a tendon organ. Flexing the muscle creates tension on the tendon. These nerve cells detect and measure this level of tension. If the tension gets too high and threatens to tear the muscle fibers, the tendon organ sends nervous impulses to the muscle, causing a reflexive relaxation of the muscle, thus protecting it from damage. You almost certainly have heard of tendonitis. If I told you that the '-itis' means 'inflammation,' could you figure out what tendonitis is? If you said 'inflammation of a tendon,' you are correct! Excessive use or damage can result in inflammation of a tendon, causing stiff and painful joints. By learning about the structure of tendons and the links between muscles, tendons, and bones, you can easily see how an inflamed tendon could limit movement and cause pain. Let's review. The main function of a tendon is to connect skeletal muscles to bones. Tendons are a type of connective tissue, and the primary building blocks of tendons are collagen fibers. These fibers build up to create a tendon through multiple layers, including the endomysium, the fascicles, the perimysium, the epimysium, and deep fascia. Small joints, like wrists and ankles, have a tendon sheath, which contains synovial fluid, and the muscles of your skull are connected via a type of tendon called an aponeurosis. In addition to connecting skeletal muscles to bone, tendons can sense when muscles are under too much strain. The tendon organ will then induce a reflexive relaxation of the muscle to protect it. 
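The tendon organ's feedback loop just described can be sketched as a simple threshold check; the threshold value and names below are invented for illustration, not physiological constants:

```python
# Sketch of the tendon organ's protective reflex described above: nerve
# cells measure tension, and excessive tension triggers reflexive relaxation.
TENSION_THRESHOLD = 100.0  # arbitrary illustrative units

def tendon_organ_response(tension):
    """Return the muscle's reflexive response to the measured tension."""
    if tension > TENSION_THRESHOLD:
        return "reflexive relaxation"  # protects fibers from tearing
    return "maintain contraction"
```

The key design of the reflex is that it is negative feedback: rising tension produces a response (relaxation) that lowers tension.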
Finally, a common tendon injury is tendonitis, which means inflammation of the tendon.
Teaching J.D. Salinger's The Catcher in the Rye? Keep your students accountable and engaged with this in-depth, comprehensive unit study guide that includes Common Core-aligned activities and questions on each chapter. Literary activities include creating a Holden Caulfield Facebook Profile page, creating an I-Am Poem, and writing a postcard from Holden to Phoebe. You and your students will love these activities, which enhance Salinger's classic story of adolescence and identity. Answer key included. Terrific Teacher Tip: Everyone likes a surprise! Make a surprise book for your students by covering it with butcher paper. Slowly tear the paper away, revealing pictures while students predict the topic/theme. Need more terrific teacher tips, classroom ideas, or just words of inspiration? Visit my author/teacher blog at Kimberly Dana - The Blog Zone
Vaginitis is usually characterized by a vaginal discharge or by vulvar itching and irritation; a vaginal odor may be present. The three diseases most frequently associated with vaginal discharge are trichomoniasis (caused by T. vaginalis), bacterial vaginosis (BV) (caused by a replacement of the normal vaginal flora by an overgrowth of anaerobic microorganisms and Gardnerella vaginalis), and candidiasis (usually caused by Candida albicans). Vulvovaginal candidiasis usually is not transmitted sexually, but it is included in this section because it is often diagnosed in women being evaluated for sexually transmitted diseases. Vaginitis is diagnosed by pH and microscopic examination of fresh samples of the discharge. The pH of the vaginal secretions can be determined with narrow-range pH paper; a pH greater than 4.5 is typical of BV or trichomoniasis. One way to examine the discharge is to dilute a sample in one or two drops of 0.9% normal saline solution on one slide and 10% potassium hydroxide (KOH) solution on a second slide. An amine odor detected immediately after applying KOH suggests BV. A cover slip is placed on each slide, and each is examined under a microscope at low and high dry power. The motile T. vaginalis or the clue cells of BV usually are identified easily in the saline specimen. The yeast or pseudohyphae of Candida species are more easily identified in the KOH specimen. The presence of objective signs of vulvar inflammation in the absence of vaginal pathogens, along with a minimal amount of discharge, suggests the possibility of mechanical, chemical, allergic, or other noninfectious irritation of the vulva. Culture for T. vaginalis is more sensitive than microscopic examination. Laboratory testing fails to identify the cause of vaginitis in a substantial minority of women. Bacterial vaginosis (BV) is the most common cause of vaginitis symptoms among women of childbearing age.
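As an illustration only (not a diagnostic tool), the wet-mount logic just described maps each characteristic microscopy finding to one of the three common causes; the mapping simply restates the text:

```python
# Illustrative restatement of the wet-mount findings described above.
FINDINGS = {
    "motile trichomonads (saline slide)": "trichomoniasis",
    "clue cells (saline slide)": "bacterial vaginosis",
    "yeast or pseudohyphae (KOH slide)": "candidiasis",
}

def suggested_cause(finding):
    # Returns None when no listed finding matches; as the text notes, lab
    # testing fails to identify a cause in a substantial minority of women.
    return FINDINGS.get(finding)
```

A negative wet mount with vulvar inflammation would instead point toward the noninfectious irritants mentioned above, which this sketch deliberately does not model.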
BV (previously called nonspecific vaginitis or Gardnerella-associated vaginitis) can be transmitted through sexual activity, although the organisms responsible have been found in young women who are not sexually active as well. BV is due to a change in the balance among different types of bacteria in the vagina. Instead of the normal predominance of Lactobacillus bacteria, increased numbers of organisms such as Gardnerella vaginalis, Bacteroides, Mobiluncus, and Mycoplasma hominis are found in the vagina in women with BV. Investigators are studying the role that each of these microbes may play in causing BV. The role of sexual activity in the development of BV is not understood. Additionally, intrauterine devices (IUDs) may increase the risk of acquiring bacterial vaginosis. BV can be diagnosed by the use of clinical or Gram stain criteria. Clinical criteria require three of the following four symptoms or signs:
- A homogeneous, white, non-inflammatory discharge that smoothly coats the vaginal walls;
- The presence of clue cells on microscopic examination;
- A pH of vaginal fluid greater than 4.5;
- A fishy odor of vaginal discharge before or after addition of 10% KOH (i.e., the whiff test).
When a Gram stain is used, determining the relative concentration of the bacterial morphotypes characteristic of the altered flora of BV is an acceptable laboratory method for diagnosing BV. Culture of G. vaginalis is not recommended as a diagnostic tool because it is not specific. The principal goal of therapy for BV is to relieve vaginal symptoms and signs of infection. All women who have symptomatic disease require treatment, regardless of pregnancy status. BV during pregnancy is associated with adverse pregnancy outcomes. The results of several investigations indicate that treatment of pregnant women who have BV and who are at high risk for pre-term delivery (i.e., those who previously delivered a premature infant) might reduce the risk for pre-maturity.
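The clinical criteria above require at least three of four findings. As an illustration of that decision rule only (not clinical software; the parameter names are invented for the example), the logic can be sketched as:

```python
# Illustrative "three of four" check for the clinical criteria listed above.
def meets_clinical_criteria(homogeneous_discharge, clue_cells, ph, whiff_test):
    findings = [
        bool(homogeneous_discharge),  # homogeneous, white discharge
        bool(clue_cells),             # clue cells on microscopy
        ph > 4.5,                     # elevated vaginal fluid pH
        bool(whiff_test),             # fishy odor with 10% KOH
    ]
    return sum(findings) >= 3
```

For example, a homogeneous discharge, clue cells, and a pH of 5.0 satisfy the rule even with a negative whiff test.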
Therefore, high-risk pregnant women who do not have symptoms of BV may be evaluated for treatment. Although some experts recommend treatment for high-risk pregnant women who have asymptomatic BV, others believe more information is needed before such a recommendation is made. A large, randomized clinical trial is underway to assess treatment for asymptomatic BV in pregnant women; the results of this investigation should clarify the benefits of therapy for BV in women at both low and high risk for pre-term delivery. The bacterial flora that characterizes BV has been recovered from the endometria and salpinges of women who have PID. BV has been associated with endometritis, PID, and vaginal cuff cellulitis after invasive procedures such as endometrial biopsy, hysterectomy, hysterosalpingography, placement of an intrauterine device, cesarean section, and uterine curettage. The results of one randomized controlled trial indicated that treatment of BV with metronidazole substantially reduced post-abortion PID. On the basis of these data, consideration should be given to treating women who have symptomatic or asymptomatic BV before surgical abortion procedures are performed. However, more information is needed before recommending whether patients who have asymptomatic BV should be treated before other invasive procedures are performed.
Recommended Regimens for Non-Pregnant Women
For treatment of pregnant women, see Bacterial Vaginosis, Special Considerations, Pregnancy.
- Metronidazole 500 mg orally twice a day for 7 days, or
- Clindamycin cream 2%, one full applicator (5 g) intra-vaginally at bedtime for 7 days, or
- Metronidazole gel 0.75%, one full applicator (5 g) intra-vaginally twice a day for 5 days.
NOTE: Patients should be advised to avoid consuming alcohol during treatment with metronidazole and for 24 hours thereafter. Clindamycin cream is oil-based and might weaken latex condoms and diaphragms.
Refer to condom product labeling for additional information.
- Metronidazole 2 g orally in a single dose, or
- Clindamycin 300 mg orally twice a day for 7 days.
Metronidazole 2-g single-dose therapy is an alternative regimen because of its lower efficacy for BV. Oral metronidazole (500 mg twice a day) is efficacious for the treatment of BV, resulting in relief of symptoms and improvement in clinical course and flora disturbances. Based on efficacy data from four randomized controlled trials, overall cure rates 4 weeks after completion of treatment did not differ significantly between the 7-day regimen of oral metronidazole and the clindamycin vaginal cream (78% vs. 82%, respectively). Similarly, the results of another randomized controlled trial indicated that cure rates 7 to 10 days after completion of treatment did not differ significantly between the 7-day regimen of oral metronidazole and the metronidazole vaginal gel (84% vs. 75%, respectively). Flagyl ER (TM) (750 mg) once daily for 7 days, which FDA has approved for treatment of BV, is also an option. Some healthcare providers remain concerned about the possible teratogenicity of metronidazole, which has been suggested by experiments using extremely high and prolonged doses in animals. However, a recent meta-analysis does not indicate teratogenicity in humans. Some healthcare providers prefer the intra-vaginal route because of a lack of systemic side effects (e.g., mild-to-moderate gastrointestinal disturbance and unpleasant taste). Mean peak serum concentrations of metronidazole after intra-vaginal administration are less than 2% of the levels achieved with standard 500-mg oral doses, and the mean bioavailability of clindamycin cream is approximately 4%. Follow-up visits are unnecessary if symptoms resolve. Recurrence of BV is not unusual.
Because treatment of BV in high-risk pregnant women who are asymptomatic might prevent adverse pregnancy outcomes, a follow-up evaluation, at 1 month after completion of treatment, should be considered to evaluate whether or not therapy was successful. The alternative BV treatment regimens may be used to treat recurrent disease. No long-term maintenance regimen with any therapeutic agent is recommended.
Management of Sex Partners
The results of clinical trials indicate that a woman's response to therapy and the likelihood of relapse or recurrence are not affected by treatment of her sex partner(s). Therefore, routine treatment of sex partners is not recommended. Clindamycin cream is preferred in case of allergy or intolerance to metronidazole. Metronidazole gel can be considered for patients who do not tolerate systemic metronidazole. Patients allergic to oral metronidazole should not be administered metronidazole vaginally. BV has been associated with adverse pregnancy outcomes (e.g., premature rupture of the membranes, pre-term labor, and pre-term birth). The organisms found in increased concentration in BV also are frequently present in postpartum or post-cesarean endometritis. Because treatment of BV in high-risk pregnant women (i.e., those who have previously delivered a premature infant) who are asymptomatic might reduce pre-term delivery, such women may be screened, and those with BV can be treated. The screening and treatment should be conducted at the earliest part of the second trimester of pregnancy. The recommended regimen is metronidazole 250 mg orally three times a day for 7 days. The alternative regimens are metronidazole 2 g orally in a single dose or clindamycin 300 mg orally twice a day for 7 days. Low-risk pregnant women (i.e., women who previously have not had a premature delivery) who have symptomatic BV should be treated to relieve symptoms. The recommended regimen is metronidazole 250 mg orally three times a day for 7 days.
The alternative regimens are metronidazole 2 g orally in a single dose; clindamycin 300 mg orally twice a day for 7 days; or metronidazole gel, 0.75%, one full applicator (5 g) intra-vaginally, twice a day for 5 days. Some experts prefer the use of systemic therapy for low-risk pregnant women to treat possible sub-clinical upper genital tract infections. Lower doses of medication are recommended for pregnant women to minimize exposure to the fetus. Data are limited concerning the use of metronidazole vaginal gel during pregnancy. The use of clindamycin vaginal cream during pregnancy is not recommended, because two randomized trials indicated an increase in the number of pre-term deliveries among pregnant women who were treated with this medication.
Thomas G. Stovall, M.D.
Dr. Stovall is a Clinical Professor of Obstetrics and Gynecology at the University of Tennessee Health Science Center in Memphis, Tennessee, and a partner of Women's Health Specialists, Inc.
The Feminists and Evolution
She was Lithuanian, Jewish and unmarried when she entered the United States in 1886 as an immigrant looking for work. The 17-year-old Emma Goldman married reluctantly at 18, but before her 20th birthday she had been divorced twice from the same man. Vowing never to be trapped by matrimony again, she argued that love in marriage was a contradiction; love should be free, not bonded, and she practiced her conviction with a succession of lovers until her 65th year. A century ago this was a radical lifestyle, but then Emma Goldman was not the average woman. When she finally found her niche in life, it was as a professional anarchist. Known as "Red Emma," she was deported from the United States in 1919 for her radical activities and sent to Russia, but there she was totally disillusioned. In her search for a milieu where she could lead her chosen lifestyle, she wandered Europe for the next two decades before gravitating to Toronto, Canada, where she died unloved and unwanted in 1940 (Goldman 1970). Emma Goldman was one of the more notable in a long line of women who, beginning shortly after the French Revolution, formed what is known as the Feminist Movement. Their motivation was, and still is, based upon property rights. But "property" occupies a broad spectrum ranging from children to the State, and individual feminists are usually concerned with either one end of this spectrum or the other. There were, of course, gross injustices not just in Christian societies but also in Jewish and particularly Muslim societies as a result of living by the letter rather than by the spirit of the Law. This brief history introduces a few of the principal characters and shows how the theory of evolution has provided justification to reverse the traditional male and female roles.
History has always been funneled through the minds of those who wrote it, and for practically the whole of the Middle Ages we find that many of the writers were clerics who had little interest in, and even less sympathy for, the female half of the population. It is not surprising, then, that history leaves an impression of women having always been second-class citizens. Modern historians are beginning to correct this impression, and it is becoming evident that, at least until the Industrial Revolution, women played a far more significant part in society than we have been led to believe. In some Christian communities, the Quakers for example, women always had an equal role (Labarge 1990). However, the coming of the Industrial Revolution, beginning in the early 1700s, brought with it progress and prosperity for some at the expense of much misery for most. Wives and children suffered as much in grinding poverty as did the heads of families. At first it was the men who were employed outside the home, and eventually most of their exploitation was corrected by labor laws and trade unions. Women joined the work force much later, and the facts show that their lot has improved in a similar manner. The French Revolution of 1789 was an engineered affair from the start, justified by the hardships imposed by the rulers on the ruled; the injustices were often done in the name of Christianity. Proposals for the emancipation of women were first published in Paris in the year of the Revolution by Olympe de Gouges. Her Declaration of the Rights of Women influenced the English writer Mary Wollstonecraft, who was the wife of the libertarian and anarchist William Godwin. Mary had been well schooled in a skeptical view of the Bible, having been part of the congregation of the Rev. Dr. Richard Price, a leading Dissenter. Later she was employed as a proofreader and translator for the left-wing publisher Joseph Johnson.
Johnson was also a Dissenter, and the works of others of his kind, such as Joseph Priestley, William Blake, Thomas Paine and the young Wordsworth, all passed through his print shop and thus through the mind of Mary Wollstonecraft. Mary's best-known book was titled A Vindication of the Rights of Woman, first published in 1792; a second edition appeared the same year bearing her name on the title page. The book was popular, causing her to become a kind of matriarch of the female revolutionary tradition. A generation later and a continent away in America, 26-year-old Frances (Fanny) Wright wrote Views of Society and Manners in America and inspired the revolutionary-minded throughout Europe and America. The year was 1821. Tall, tart, free-thinking and free-loving, the statuesque Fanny Wright would often appear in a white toga, which inspired in her followers the perception of a classical goddess. She was opposed to marriage, the institution of the family and all organized religion. She worked tirelessly for the freedom of black slaves, but when she advocated a rapid mixing of the races to produce a uniformly mulatto population, she alienated both whites and blacks from the Feminist cause. Her humanist ideals confounded any real social progress she might have made, and she died largely forgotten by her own generation in 1852. Little realized today is the fact that in the 1800s a woman entering marriage forfeited to her husband all property rights. Regardless of how much wealth and property she may have brought into the marriage, should she seek divorce, it was with the understanding that not one penny of her own could be retrieved. Even the children from the marriage were by law the property of her husband. Not surprisingly, the divorce rate in England prior to the passing of the Reform of the Marriage and Divorce Laws bill in 1857 was a mere one or two cases per year.
Caroline Norton was an intelligent woman married to a man who turned out to be a drunken brute. With her very life at stake, she left penniless and spent her remaining years writing on behalf of herself and other women in her position to get the laws of England changed. Although the divorce bill of 1857 was largely the result of her work, it was not until 1882 that the Married Women's Property Act finally secured a woman's right to property. It was, however, too late for Caroline Norton. She never divorced, and she died in 1877. It is clear, then, that women had genuine grievances; and while there were some extremists like Fanny Wright, and later Emma Goldman, there were others, such as Caroline Norton, who took a more reasoned approach and patiently worked to change unfair laws. When Charles Darwin's Origin of Species appeared in 1859, his theory of evolution gave justification to ideas that had long been guardedly entertained by some of the free-thinkers of Victorian society. Certain of these ideas have since been adopted by today's feminists and have given a new and insidious twist to the Feminist Movement. Although Darwin was careful not to discuss human origins in his Origin of Species, it was clear to everyone that if the theory was correct, then humans must have emerged from the animal kingdom. The Swiss jurist Johann Jakob Bachofen had studied legal history at five universities and was appointed to the chair of Roman Law at Basel when only 29. In 1861 he published his most controversial work, Das Mutterrecht . . . . Never yet published in English, the translated full title is: Mother Right: An Investigation of the Religious and Judicial Character of Matriarchy in the Ancient World. Here Bachofen posits that all societies began with cave-dwelling females and their offspring; the males roamed promiscuously from cave to cave. Later, family groups developed and were organized strictly by the mother.
As families coalesced, the group became a matriarchy and, with the acquisition of property, laws were introduced; the matriarch was finally replaced by a patriarch. His theory rested on the presumption that evolution was true and that Darwin had provided the world with convincing evidence. Bachofen’s ideas were challenged in his own day and have been since, but he influenced those who wanted to believe, including Lewis Henry Morgan in America, Friedrich Engels in England and Friedrich Nietzsche in Germany.

Morgan developed Bachofen’s thesis further in his Ancient Society, published in 1877, and this inspired Engels to popularize the idea in Der Ursprung der Familie . . ., published in German in 1884 and later in English as The Origin of the Family. Engels’s objective was to justify Karl Marx’s 1848 Communist Manifesto by the argument that, since it was the acquisition of private property that necessitated laws, the abolition of private property would allow many of those laws to be abolished as well. The abolition of private property is central to the Communist Manifesto, while Engels’s justification stands in complete contradiction to the traditional understanding that from the beginning the man was the head of the family and the laws were God-given. Many scholars, such as law professor Sir Henry Maine (1883, 149) and, more recently, sociology professor Steven Goldberg (1973, 57), have pointed out that there is no evidence that the earliest societies were matriarchal; all the evidence shows that kinships and relationships have always been traced through the males.

One of the more interesting justifications for the claim of original matriarchy is the myth of the Amazon woman. It was the Greek historian Herodotus, writing in the 5th century BC, who first described a tribe of warrior women living completely in the male roles, including the capture of male love slaves (Book 4:110-117).
Four centuries later, the Sicilian historian Diodorus added the charming detail that, because these women lived by hunting with the bow, their right breast had to be seared in infancy to prevent its later growth and interference with the bowstring! The name Amazon is Greek, meaning “without breast” (Book 2:45). Tyrrell (1984) and many other writers have pointed out that there is no evidence whatsoever to support this story; it is a myth that fascinates by the reversal of every detail in male/female roles. Nevertheless, our textbooks continue to promote the early cave-dwellers myth, while the extreme element within the Feminist Movement insists that society once more be matriarchal.

In this century, through the efforts of writers such as Simone de Beauvoir, with her 1949 world best-seller The Second Sex (in English 1953), and then Betty Friedan, with her 1963 The Feminine Mystique, the feminist objective of producing a matriarchy is now, at least in North America, well on the way to completion. Friedan’s message was that women had allowed themselves to become intellectually unfulfilled, and it was received principally by women in the home. Intentional or not, its effect was to make mothers and housewives discontent with their lot, and they began to leave the home by the thousands to seek “fulfillment” in the workplace. From 1966 onward Friedan’s message was conveyed more forcefully through the American media by the National Organization for Women (NOW). By the early 1970s lesbian extremists had taken over, and Betty Friedan, divorced and exhausted, had retired.

The feminist objectives for power in the workplace have been more successful in Canada than in the United States, but a penalty is being paid. There has been a rise in housing prices geared to double-income families, a sharp rise in the divorce rate, a sinister aggression toward the male now being voiced by some of the women in power, and a growing number of females in the violent crime statistics (Steele 1987, 104-106).
There is one final interesting facet to this rise and even dominance of the influence of women in today’s society: the reemergence of the cult of the pagan goddess. It was the American suffragette Elizabeth Cady Stanton who, with the help of a Revising Committee, produced The Woman’s Bible in 1898; the result was to emasculate the image of God. Certain church organizations have taken this process a step further in referring to God as “He/She,” while in 1975 Edwina Sandys took the ultimate step and produced a four-foot-tall bronze crucifix with the Christ figure replaced by a naked woman. Called Christa, it has been on an exhibition tour of most of the major liberal churches.

In retrospect, we can see that such an effective reversal of male/female roles took place by first emphasizing unfair exploitation and discrimination. While this was being corrected, the second stage was introduced and stressed equality. However, sociological equality has carried with it a subtle denial of biological differences by assuming that tradition, and not sex, has forced women into their roles. Plain common sense and experience tell us that sex does make a difference and that normal men are better suited to leadership roles while normal women are better suited to supportive roles. This observation fully supports the creation account, in which women were never intended to be rivals but rather helpers in partnership with men.
DURHAM, N.C. -- Researchers have found that how well a male songbird learns his song affects the female's mating response – the first evidence that female birds use song-learning ability as an indicator of male quality. The study goes beyond previous such studies, which have only demonstrated that very poor or absent male songs affect female mating response. According to the scientists, the finding offers broader insight into the role that traits learned by males play in sexual success. In an article in the September 22, 2002, Proceedings of the Royal Society of London: Biological Sciences (now online), biologists led by Duke University Professor of Biology Stephen Nowicki reported studies in which they tested the mating response of female song sparrows to songs of captive-raised males. Importantly, the scientists had analyzed the males' songs in detail to determine the degree of accuracy with which the males copied songs they attempted to learn. They found that the females preferred those songs that came closest to wild-type songs they heard when young and presumably learned as models. The scientists' research was sponsored by the National Science Foundation. According to Nowicki, he and his colleagues in the field have long theorized that female songbirds pay attention to male song as an indicator of fitness. "We've developed experimental evidence that there is a link between early stress, male brain development and song-learning," he said. "But until now, experimental and field observations showing that females were interested in song only contrasted the presence or absence of song, or relatively gross features of song, like the size of the repertoire. This is the first study to explicitly demonstrate that females care about song-learning quality," he said. To test the effects of fine differences in song quality on female response, the researchers trained captive-reared male song sparrows to sing by exposing them to the recorded songs of wild birds. 
To induce variation in stress among the birds, some were placed on a restricted diet during development. Using spectrographic analysis, the researchers rated the captive-reared birds on two measures of song quality: how much of the wild-bird song they copied versus how much they invented (a practice common among song sparrows), and how closely the males had come to actually matching the wild-bird song elements they were attempting to copy. Those birds that invented more song elements also tended to copy poorly those elements they did copy. To determine the effects of song quality, the researchers exposed wild-caught adult females -- presumably experienced in listening to male songs -- to the captive-reared males' songs. The scientists measured female response to the songs by how much the females performed the characteristic and distinctive mating presentation display -- which includes a shivering of the wings, a lifting of the tail and a characteristic call. As a control, the scientists exposed the wild-caught females both to what the scientists had judged as well-learned male songs and to the digitally recorded wild songs. The female birds responded equally to both. However, when the scientists exposed the females to the captive-reared males' songs, they found the females responded more strongly to male songs that had been better learned by both of the scientists' measures. "The females showed a strong preference for songs that had been copied well, as opposed to songs that had been copied poorly," said Nowicki. "And by our measures, the males got points taken off for originality. That seems to make sense because we would argue that males that deviate from original song haven't learned the song as well." In addition to insight into bird song, said Nowicki, such studies can give basic insight into the evolution of animal signals in general.
"We know sexual selection is a very powerful evolutionary force that has led to phenomena such as the evolution of extravagant displays and the evolution of size differences between sexes. I believe that this work demonstrates that sexual selection might not be acting directly on the obvious trait that is expressed, but on the mechanisms that underlie the expression of that trait. In the case of bird song, a male's song reflects the birds' developmental history, and song expression is only the trait that the female can gain access to for information about -- in this case -- brain mechanisms." Also, said Nowicki, the discovery that females assess song quality emphasizes the importance of studying the neurobiology of song expression and placing it in an evolutionary context. While the current studies show clearly that females prefer well-learned songs, among the next research steps, said Nowicki, will be to determine how females learn to judge song quality. "There is only very thin evidence that females learn song, so it's a major scientific question whether females are learning something about the population that they're living in, and using that as a way of assessing males," he said. Such female studies also will reveal whether the female's ability to distinguish good songs from bad reflects the birds' fitness and influences evolution, said Nowicki. Materials provided by Duke University.
Why were the modern police created? It is generally assumed, among people who think about it at all, that the police were created to deal with rising levels of crime caused by urbanization and increasing numbers of immigrants. John Schneider describes the typical accounts: The first studies were legal and administrative in their focus, confined mostly to narrative descriptions of the step-by-step demise of the old constabulary and the steady, but often controversial evolution of the professionals. Scholars seemed preoccupied with the politics of police reform. Its causes, on the other hand, were considered only in cursory fashion, more often assumed than proved. Cities, it would seem, moved inevitably toward modern policing as a consequence of soaring levels of crime and disorder in an era of phenomenal growth and profound social change.1

I will refer to this as the “crime-and-disorder” theory. Despite its initial plausibility, the idea that the police were invented in response to an epidemic of crime is, to be blunt, exactly wrong. Furthermore, it is not much of an explanation. It assumes that “when crime reaches a certain level, the ‘natural’ social response is to create a uniformed police force. This, of course, is not an explanation but an assertion of a natural law for which there is little evidence.”2 We cannot rule out the possibility that slave revolts, riots, and other instances of collective violence precipitated the creation of modern police, but we should remember that neither crime nor disorder was unique to nineteenth-century cities, and therefore neither can on its own account for a change like the rise of a new institution. Riotous mobs controlled much of London during the summer of 1780, but the Metropolitan Police did not appear until 1829—almost fifty years later.
Public drunkenness was a serious problem in Boston as early as 1775, but a modern police force was not created there until 1838.3 So the crime-and-disorder theory fails to explain why earlier crime waves didn’t produce modern police; it also fails to explain why crime in the nineteenth century led to policing, and not to some other system.4 Furthermore, it is not at all clear that crime was on the rise prior to the creation of the modern police. In Boston, for example, crime went down between 1820 and 1830,5 and continued to drop for the rest of the nineteenth century.6 In fact, crime was such a minor concern that it was not even mentioned in the City Marshal’s report of 1824.7 And the city suffered only a single murder between 1822 and 1834.8 Whether or not crime was on the rise, after the introduction of modern policing the number of arrests increased.9 The majority of these were for misdemeanors, and most related to victimless crimes, or crimes against the public order. They did not generally involve violence or the loss of property, but instead were related to public drunkenness, vagrancy, loitering, disorderly conduct, or being a “suspicious person.”10 In other words, the greatest portion of the actual business of law enforcement did not concern the protection of life and property, but the controlling of poor people, their habits and their manners. Sidney Harring wryly notes: “The criminologist’s definition of ‘public order crimes’ comes perilously close to the historian’s description of ‘working-class leisure-time activity.’”11 The suppression of such disorderly conduct was only made possible by the introduction of modern police. For the first time, more arrests were made on the initiative of the officer than in response to specific complaints.12 Though the charges were generally minor, the implications were not: the change from privately-initiated to police-initiated prosecutions greatly shifted the balance of power between the citizenry and the state. 
A critic of this view might suggest that the rise in public order arrests reflected an increase in public order offenses, rather than a shift in official priorities. Unfortunately, there is no way to verify this claim. (The increase in arrests does not provide very good evidence, since it is precisely the fact which the hypothesis seeks to explain.) However, if the tolerance for disorder was in decline, this fact, coupled with the emergence of the new police, would be sufficient to explain the increase in arrests of this type.13 The Cleveland police offered a limited test of this hypothesis. In December 1907, they adopted a “Golden Rule” policy. Rather than arrest drunks and other public order offenders, the police walked them home or issued a warning. In the year before the policy was established, they made 30,418 arrests, only 938 of which were for felonies. In the year after the Golden Rule was instituted, the police made 10,095 arrests, one thousand of which were for felonies.14 Other cities implemented similar policies—in some cases, reducing the number of arrests by 75 percent.15 Cleveland’s example demonstrates that official tolerance can reduce arrest rates. This suggests an explanation for the sudden rise in misdemeanor arrests during the previous century: if official tolerance can reduce arrest rates, it makes sense that official intolerance could increase the number of arrests. In other words, during the nineteenth century crime was down, but the demand for order was up—at least among those people who could influence the administration of the law. Although the problems of the streets—the fights, the crowds, the crime, the children—were nothing new, the ‘problem’ itself represented altered bourgeois perceptions and a broadened political initiative. 
An area of social life that had been taken for granted, an accepted feature of city life, became visible, subject to scrutiny and intervention.16 New York City’s campaign against prostitution certainly followed this pattern. During the first half of the nineteenth century, the official attitude concerning prostitution transformed from one of complacency to one of moral panic. Beginning in the 1830s, when reform societies took an interest in the issue, it was widely claimed that prostitution was approaching epidemic proportions. Probably the number of prostitutes did increase: the night watch estimated that there were 600 prostitutes working in 1806, and 1,200 in 1818. In 1856, Police Chief George Matsell set the figure at 5,000. But given that the population of the city increased by more than six times between 1820 and 1860, the official estimates actually showed a decrease in the number of prostitutes relative to the population.17 Enforcement activities, however, increased markedly during the same period. In 1860, ninety people were committed to the First District Prison for keeping a “disorderly house.” This figure was five times that of 1849, when seventeen people were imprisoned for the offense. Likewise, prison sentences for vagrancy rose from 3,173 for the entire decade covering 1820–1830, to 3,552 in 1850 and 6,552 in 1860. As prostitutes were generally cited for vagrancy (since prostitution itself was not a statutory offense), the proportion of female “vagrants” steadily rose: women comprised 62 percent of those imprisoned for vagrancy in 1850 and 72 percent in 1860.18 This analysis does not solve the problem, but merely relocates it. If it was not crime but the standards of order that were rising, what caused the higher standards of public order? For one thing, the relative absence of serious crime may have facilitated the rise in social standards and the demand for order.
“A fall in the real crime rate allows officially accepted standards of conduct to rise; as standards rise, the penal machinery is extended and refined; the result is that an increase in the total number of cases brought in accompanies a decrease in their relative severity.”19 Once established, the police themselves may have helped to raise expectations. In New York, Chief Matsell actively promoted the panic over public disorder, in part to quiet criticism of the new police.20 More subtly, the very existence of the police may have suggested the possibility of urban peace and made it seem feasible that most laws would be enforced—not indirectly by the citizenry, but directly by the state.21 And the new emphasis on public order corresponded with the religious perspective of the dominant class and the demands of the new industrialized economy, ensuring elite support for policing. This intersection of class bias and rigid moralism was particularly clear concerning, and had special implications for, the status of women. In many ways, the sudden furor over prostitution was typical. As Victorian social mores came to define legal notions of “public order” and “vice,” the role of women was redefined and increasingly restricted. “Fond paternalistic indulgence of women who conformed to domestic ideals was intimately connected with extreme condemnation of those who were outside the bonds of patronage and dependence on which the relations of men and women were based.”22 As a result, women were held to higher standards and subject to harsher treatment when they stepped outside the bounds of their role. Women were arrested less frequently than men, but were more likely to be jailed and served longer sentences than men convicted of the same crimes.23 Enforcement practices surrounding the demand for order thus weighed doubly on working-class women, who faced gender-based as well as class-based restrictions on their public behavior. 
At the same time, the increased demand for order came to shape not only the enforcement of the law, but the law itself. In the early nineteenth century, Boston’s laws prohibited only habitual drunkenness, but in 1835 public drunkenness was also banned. Alcohol-related arrests increased from a few hundred each year to several thousand.24 In 1878, police powers were extended even further, as they were authorized to arrest people for loitering or using profanity.25 In Philadelphia, meanwhile, “after the new police law took effect, the doctrine of arrest on suspicion was tacitly extended to the arrest and surveillance of people in advance of a crime.”26 Police scrutiny of the dangerous classes was at least partly an outgrowth of the preventive orientation of the new police. Built into the idea that the cops could prevent crime was the notion that they could predict criminal behavior. This preventive focus shifted their attention from actual to potential crimes, and then from the crime to the criminal, and finally to the potential criminal.27 Profiling became an inherent element of modern policing. So, contrary to the crime-and-disorder explanation, the new police system was not created in response to spiraling crime rates, but developed as a means of social control by which an emerging dominant class could impose their values on the larger population. This shift can only be understood against a backdrop of much broader social changes. Industrialization and urbanization produced a new class of workers and, with it, new challenges for social control. They also provided opportunities for social control at a level previously unknown. The police represented one aspect of this growing apparatus, as did the prison, and sometime later, the public school. Moreover, the police, by forming a major source of power for city governments, also contributed to the development of other bureaucracies and increased the possibility for rational administration. 
In sum, the development of modern police facilitated further industrialization; it led to the creation of other bureaucracies and advances in municipal government; it consolidated the influence of political machines; and it made possible the imposition of Victorian moral values on the urban population. Also, and more basically, it allowed the state to impose on the lives of individuals in an unprecedented manner. Sovereignty, and even states, are older than the police. “European kingdoms in the Middle Ages became ‘law states’ before they became ‘police states,’”28 meaning that they made laws and adjudicated claims before they established an independent mechanism for enforcing them. Organized police forces arose specifically when traditional, informal, or community-maintained means of social control broke down. This breakdown was always prompted by a larger social change, often by a change which some part of the community resisted with violence, such as the creation of a state, colonization, or the enslavement of a subject people.29 In other words, it was at the point where authority was met with resistance that the organized application of force became necessary. The aims and means of social control always approximately reflect the anxieties of elites. In times of crisis or pronounced social change, as the concerns of elites shift, the mechanisms of social control are adapted accordingly. So, in the South, following real or rumored slave revolts, the institution of the slave patrol emerged. White men were required to take shifts riding between plantations, apprehending runaways and breaking up slave gatherings. Later, complex factors conspired to produce the modern police force.
Industrialization changed the system of social stratification and added a new set of threats, subsumed under the title of the “dangerous classes.” Moreover, while serious crime was on the decline, the demand for order was on the rise owing to the needs of the new economic regime and the ideology that supported it. In response to these conditions, American cities created a distinctive brand of police. They borrowed heavily from the English model already in place, but also took ideas from the office of the constable, the militia, and the semi-professional, part-time enforcement bodies like the night watch and the slave patrols. At the same time, the drift toward modern policing fit nicely with the larger movement toward modern municipal government—best understood in terms of the emerging political machines, and later tied to the rise of bureaucracies. The extensive inter-relation between these various factors—industrialization, increasing demands for order, fear of the dangerous classes, pre-existing models of policing, and the development of political machines—makes it obvious that no single item can be identified as the sole cause for the move toward policing. History isn’t propelled by a single engine, though historical accounts often are. Scholars have generally relied on one, or a set, of these factors in crafting their explanations, with most emphasizing those surrounding the sudden and rapid expansion of the urban population, especially immigrant communities. Urbanization certainly had a role, but it is not the role it is usually assumed to have had. Rather than producing widespread criminality, cities actually promoted widespread civility; as the population rose, the rate of serious crime dropped. The crisis of the time was not one of law, but of order—specifically the order required by the new industrial economy and the religious moralism that supplied, in large part, its ideological expression. 
The police provided a mechanism by which the power of the state, and eventually that of the emerging ruling class, could be brought to bear on the lives and habits of individual members of society. The new organization of police made it possible for the first time in generations to attempt a wide enforcement of the criminal code, especially the vice laws. But while the earlier lack of execution was largely the result of weakness, it had served a useful function also, as part of the system of compromise which made the law tolerable.30 In other words, the much-decried inefficiency and inadequacy of the night watch in fact corresponded with the practical limitations on the power of the state. With these limits removed or overcome, the state at once cast itself in a more active role. Public safety was no longer in the hands of amateur nightwatchmen, but had been transferred to a full-time professional body, directed by and accountable to the city authorities. The enforcement of the law no longer relied on the complaints of aggrieved citizens, but on the initiative of officers whose mission was to prevent offenses. Hence, crimes without victims needn’t be ignored, and potential offenders needn’t be given the opportunity to act. In both instances the new police were there doing what would have been nearly inconceivable just a few years before. It was in this way that the United States became what Allan Silver calls “a policed society.” A policed society is unique in that central power exercises potentially violent supervision over the population by bureaucratic means widely diffused throughout civil society in small and discretionary operations that are capable of rapid concentration.31 The police organization allowed the state to establish a constant presence in a wide geographic area and exercise routinized control by the use of patrols and other surveillance. 
Through the same organization, the state retained the ability to concentrate its power in the event of a riot or other emergency, without having to resort to the use of troops or the maintenance of a military presence. Silver argues that the significance of this advance “lay not only in its narrow application to crime and violence. In a broader sense, it represented the penetration and continual presence of central political authority throughout daily life.”32 The populace as a whole, even if not every individual person, was to be put under constant surveillance. With the birth of modern policing, the state acquired a new means of controlling the citizenry—one based on its experiences, not only with crime and domestic disorder, but with colonialism and slavery as well. If policing was not in its inception a totalitarian pursuit, the modern development of the institution has at least been a major step in that direction.
- ↩ John C. Schneider, Detroit and the Problem of Order, 1830–1880 (Lincoln: University of Nebraska Press, 1980) 54.
- ↩ Eric H. Monkkonen, Police in Urban America, 1860–1920 (Cambridge: Cambridge University Press, 1981) 50.
- ↩ Richard J. Lundman, Police and Policing (New York: Holt, Rinehart, and Winston, 1980) 31.
- ↩ Monkkonen, Police, 50–1.
- ↩ Selden Daskam Bacon, The Early Development of the American Municipal Police, vol. 2, diss. Yale University, 1939 (Ann Arbor: University Microfilms International [facsimile], 1986) 455.
- ↩ Roger Lane, “Crime and Criminal Statistics in Nineteenth-Century Massachusetts,” Journal of Social History (Winter 1968) 157. Lane bases this conclusion on an examination of lower court cases, jail sentences, grand jury proceedings, and prison records.
- ↩ Roger Lane, Policing the City (Cambridge: Harvard University Press, 1967) 19.
- ↩ James F. Richardson, Urban Police in the United States (Port Washington, New York: National University Press, 1974) 19.
- ↩ Lane, “Crime,” 158–9.
- ↩ Lane, “Crime,” 160; and Monkkonen, Police, 103.
- ↩ Sidney Harring, Policing a Class Society (New Brunswick, New Jersey: Rutgers University Press, 1983) 198.
- ↩ Monkkonen, Police, 103.
- ↩ Lane, Policing, 222; and Lane, “Crime,” 161.
- ↩ Richardson, Urban, 79–80.
- ↩ Harring, Policing, 40.
- ↩ Christine Stansell, City of Women (Urbana: University of Illinois Press, 1987) 197.
- ↩ Stansell, City, 172–3.
- ↩ Stansell, City, 173–4 and 276–7.
- ↩ Lane, “Crime,” 160.
- ↩ Stansell, City, 194–5.
- ↩ Allan Silver, “The Demand for Order in Civil Society,” The Police, ed. David J. Bordua (New York: John Wiley and Sons, 1976) 21; and Lane, Policing, 223.
- ↩ Stephanie Coontz, The Social Origins of Private Life (London: Verso, 1991) 222.
- ↩ Coontz, Social, 222.
- ↩ Richardson, Urban, 30.
- ↩ Lane, Policing, 173.
- ↩ Allen Steinberg, The Transformation of Criminal Justice (Chapel Hill: University of North Carolina Press, 1989) 152.
- ↩ Monkkonen, Police, 41.
- ↩ David H. Bayley, “The Development of Modern Policing,” Policing Perspectives, eds. Larry K. Gaines and Gary W. Cordner (Los Angeles: Roxbury Publishing Company, 1999) 60.
- ↩ Bayley, “Development,” 66–7.
- ↩ Lane, Policing, 84.
- ↩ Silver, “Demand,” 8.
- ↩ Silver, “Demand,” 12–3.
by Suzanne Labry

Maya Embroidered Patchwork

In 1566, Fray Diego de Landa, a Spanish bishop in the Yucatán Peninsula of southeastern Mexico, wrote a book about the region in which he stated that Maya women “wove with curiosity works of the pen to adorn their garments.” The “pen” he was referring to was the thorn of the maguey plant, a type of agave native to Mexico. Indians used the plant not only to make needles, but also to make thread and textiles. And archeological digs indicate that Maya women have been employing embroidery as a decorative technique since prehistoric times. Today, they continue to apply these ancient skills not only to adorn garments, but also to produce patchwork quilts for the tourist trade.

The Spanish Conquest eventually saw the introduction of metal needles to the Yucatán, and Conceptionist nuns established schools and taught European embroidery techniques, such as the herringbone stitch and cross-stitch (known as “xocbichuy,” which means “to count” in the Maya language), to native girls. Another popular technique, known as “manicté” or deshilado, is a type of open work similar to Hardanger embroidery in which threads are drawn out and tied into an intricate design of figures or flowers. Among the indigenous embroiderers, interest in European patterns apparently would come and go at different times in different communities, with traditional designs, or a combination of the two, gaining in prominence.

Traditional Maya embroidery designs draw from nature, religious symbols and icons, everyday life, ancestry, Mayan mythology, and even from dreams. The vibrant palette—bright reds, greens, blues, purples, yellows, and oranges—reflects the exotic flowers of the tropics. Maya embroidery is a riot of color. Although the chief purpose of the centuries-old tradition of Maya embroidery was to decorate clothing, it is now common to see patchwork quilts made from embroidered squares.
It is my understanding that these are made primarily as a means of earning income rather than as an item that the maker would use for herself and her family. This, apparently, is not a new practice: during the Colonial period, Europeans are said to have developed a passion for Maya embroidered textiles. The appeal of these delightful items has not lessened these many years later.
The Mokelumne River has been running brown with sediment after each storm Calaveras and Amador Counties have seen this winter. For those who drive between Mokelumne Hill and Jackson, the brief glimpse of the river from the Highway 49 bridge provides a hint of what is trickling down from tributary streams and off the burned banks upstream. Since the Butte Fire was contained on October 1, optimistic weather reports began spreading news of a huge El Niño event for the western United States. This was positive news for those hoping it would help make a dent in California’s ongoing drought. But those who watched or experienced the Butte Fire rip through our area feared what the potential deluges could bring down from the blackened hillsides. While doom and gloom and the potential for mud and debris flows resulting from massive predicted storms filled the media, on came some weather. It rained, it snowed, and early on, the storms yielded after providing some much-needed moisture in a much more gentle manner than feared. These smaller storms helped plants begin sprouting where they are able, and may be helping to minimize the extent to which soil moves from burned hillsides into waterways in this first winter after the fire. Heavy weather events after a burn can cause great damage to watershed health. Where vegetation once held hillsides in place, landslides caused by saturation of the soil can cause significant harm to property, human life, and ecosystems alike. Particularly after heavier rains, soil newly exposed after duff, protective leaf layers, or roots below ground are burned away can move in large flows or slowly and evenly down a sloped bank. Each bank drains to some form of stream course, be it an ephemeral stream that flows briefly in direct response to precipitation or a perennial stream like the Mokelumne or Calaveras Rivers, which flow year round. Regardless of how it moves downhill, with precipitation, sediment eventually reaches the rivers.
Sediment loads in streams can damage aquatic habitat, food webs, and fish spawning grounds, and can directly kill fish. California Department of Fish and Wildlife Fisheries Biologist Ben Ewing said, “Increased sediment loads can eliminate suitable spawning grounds and can destroy redds (egg nests in the gravel) by suffocating them.” Sediment movement in rivers, however, is a natural process, and it is normal for Sierra rivers to turn turbid and brown when high spring flows from annual snowmelt carry sediment downstream. This is how California’s Central Valley was formed and why it has such nutrient-rich soil. However, these days most of that sediment is captured behind large dams on our Sierra rivers. Most aquatic organisms have adapted to these natural sediment-carrying events of peak runoff. Typically, species have reproductive cycles that evolved not to coincide with these events, and due to the greater volumes of water in rivers swollen with spring snowmelt, many species are able to find refuges to wait out the flush of high water and transported sediment. However, when large amounts of ash and soil are transported downstream during non-peak flow times of year, the result can significantly harm aquatic species and ecosystems. Imagine yourself as a fish, filtering water through your gills to pull out oxygen, trying to find a nice spot to lay your eggs on the river bottom. When the concentration of sediment in the water is higher and moving in the stream during an unusual time of year, breathing becomes stressful and ash can clog your gills. In the Mokelumne River, kokanee salmon living in Pardee Reservoir, and the introduced brown trout, both popular sportfishing species, reproduce in the fall to early winter when they attempt to move upstream to find suitable habitat to lay their eggs.
When these fish are producing and carrying eggs upstream, they are already using a great deal of their energy—both from their journey upstream and from the energy costs associated with growing eggs. Added stress from hindered breathing and the increased difficulties of finding suitable habitat or simply moving upstream in sediment-laden water can easily tip the scale away from successful reproduction or survival. It’s not only fish that face harm. Depending on the duration, amount, and characteristics of ash material entering a system, macroinvertebrate populations (the small bugs that fish and other small critters eat) can also suffer dramatic reductions. When this part of the food chain is disturbed, there can be longer-term effects to each link of the food chain above them. In addition, sediment that ends up in streams often carries other pollutants or nutrients that can bind to sediment, contributing to additional stress for aquatic life. Ash and sediment decrease water quality and can even change water chemistry over shorter time periods. Elevated levels of phosphorus or nitrogen can over-stimulate growth of aquatic vegetation, leading to depletion of oxygen levels in the water that can further harm fish. The East Bay Municipal Utility District and other water purveyors are testing water in the Mokelumne and Calaveras Rivers to determine what, if any, changes they need to make in their water-treatment methods and systems as a result of the fire. Most recently, the Calaveras County Water District announced that it will need new water filtering equipment for its Jenny Lind Water Treatment Plant due to the ash and sediment flowing in the Calaveras River. The long-term water-quality effects of the Butte Fire won’t be known for some time. But in the short run, the sediment and ash do pose potential challenges for people and aquatic life.
Blips are beautiful The world is a messy place, so it's not surprising that evidence we collect shows some variability. Patterns can be important even if they have exceptions. For example, imagine that on most days, a city bus stops at the end of your block at 3:15 pm. Even if the bus is occasionally late, understanding that overall pattern is useful and suggests that there is an underlying mechanism behind the pattern (e.g., a bus schedule that drivers are expected to follow). In the same way, if a few more mammal species live at 35º north than 25º north, that doesn't change the fact that, overall, diversity tends to drop off as one moves away from the equator. Though it has exceptions, the overall pattern of the LDG helps us make useful predictions about the location of biodiversity hotspots and makes us wonder what the underlying mechanism might be: why are the tropics so diverse? Are the tropics crowded? If you spend a few minutes looking at a globe, you'll probably come up with one possible explanation for why more species are found in the tropics than in the poles: area. There is more area between 0 and 10º north than there is between 50 and 60º north. This is the simple result of geometry when we're measuring along lines of latitude: the Earth's circumference is largest at the equator. Circling the Earth over the equator is a journey of about 25,000 miles (40,000 km), but circling it along the 50º latitude line is a journey of just 16,000 miles (26,000 km). Even if species were spread out evenly over Earth's surface, there would still be more of them between 0 and 10º north than between 50 and 60º north just because there's more area at lower latitudes. Is the LDG more than simple geometry? Are the tropics actually more crowded with species than are the poles?
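The geometry behind the area argument is easy to check for yourself. Here is a minimal Python sketch (the function name and the spherical-Earth radius value are mine, not from the article) that computes the surface area between two latitudes:

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius, spherical approximation

def band_area_km2(lat1_deg, lat2_deg, radius=EARTH_RADIUS_KM):
    """Surface area (km^2) of the spherical zone between two latitudes,
    using A = 2 * pi * R^2 * |sin(lat2) - sin(lat1)|."""
    lat1, lat2 = math.radians(lat1_deg), math.radians(lat2_deg)
    return 2 * math.pi * radius ** 2 * abs(math.sin(lat2) - math.sin(lat1))

tropical = band_area_km2(0, 10)    # roughly 44 million km^2
temperate = band_area_km2(50, 60)  # roughly 25 million km^2
print(tropical / temperate)        # about 1.7x more area at low latitude
```

So even before any biology enters the picture, a 10-degree band starting at the equator offers roughly 74% more surface area than the same band at 50–60º north.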
Specialization in grazing
On most continents, reciprocal evolutionary changes, or coevolution, between grasses and large grazing mammals have taken place over periods of millions of years. Many grass species have evolved the ability to tolerate high levels of grazing, which is evident to anyone who regularly mows a lawn. Simultaneously, they have evolved other defenses, such as high silica content, which reduces their palatability to some grazers. A number of herbivorous mammals have responded to these defenses by evolving the ability to specialize on grasses with high silica content and low nutritional value. Many large grazing mammals such as elephants have high-crowned (hypsodont) teeth that are constantly replaced by growth from below as the crowns are worn down by the silica in their food. Many of these species also have complicated digestive systems with a gut full of microflora and microfauna capable of extracting many of the nutrients from the plants. Not all grasslands, however, are adapted to grazing by large mammals. In North America, although the grasslands of the Great Plains coevolved with large herds of bison, the grasslands of the upper Intermontane West (which roughly includes eastern Washington and Oregon) have never supported these large grazing herds. The Great Plains had grasses that formed sods and could withstand trampling by large-hooved mammals. These sods were so tightly interwoven that early European settlers cut them to use as roofs for their houses. The grasses of the Intermontane West, however, were tuft grasses that did not form sods and quickly died when trampled.
Consequently, when cows replaced bison as the large, grazing mammal of North America, the grasslands of the Great Plains sustained the grazing pressure, whereas those of the Intermontane West rapidly eroded. Similar problems have arisen in other parts of the world where cattle have been introduced into grasslands that did not have a history of coevolution with large grazing mammals.

Plants have evolved more than 10,000 chemical compounds that are not involved in primary metabolism, and most of these compounds are thought to have evolved as defenses against herbivores and pathogens. Some of these chemical compounds are defenses against grazers, whereas others are defenses against parasites. Most of the chemical compounds that make herbs so flavourful and useful in cooking probably evolved as defenses against enemies. These compounds, called allelochemicals, are found in almost all plant species, and their great diversity suggests that chemical defense against herbivores and pathogens has always been an important part of plant evolution.

Predation differs from both parasitism and grazing in that the victims are killed immediately. Predators therefore differ from parasites and grazers in their effects on the dynamics of populations and the organization of communities. As with parasitism and grazing, predation is an interaction that has arisen many times in many taxonomic groups worldwide. Bats that capture insects in flight, starfish that attack marine invertebrates, flies that attack other insects, and adult beetles that scavenge the ground for seeds are all examples of the predatory lifestyle. Cannibalism, in which individuals of the same species prey on one another, also has arisen many times and is common in some animal species. Some salamanders and toads have tadpoles that occur in two forms, one of which has a specialized head that allows it to cannibalize other tadpoles of the same species.
Because predators kill their prey immediately, natural selection favours the development of a variety of quick defenses against predators. In contrast, the hosts of parasites and the victims of grazers can respond in other ways. A parasitized host can induce defenses over a longer period of time as the parasite develops within it, and a plant population subjected to grazing can evolve traits that minimize the effects of losing leaves, branches, or flowers. Therefore, the evolution of interactions between parasites and hosts, grazers and victims, and predators and prey all differ from one another as a result of the ways in which the interaction affects the victim. Specialization in predation Most predators attack more than one prey species. Nevertheless, there are some ecological conditions that have permitted the evolution of highly specialized predators that attack only a few prey species. The evolution of specialization in predators (and in grazers) requires that the prey species be predictably available year after year as well as easy to find and abundant throughout the year or during the periods of time when other foods are scarce. In addition, the prey must require some form of specialization of the predator to be captured, handled, and digested successfully as the major part of a diet. The most specialized predators attack prey that fulfill these ecological conditions. Examples include anteaters, aardwolves, and numbats that eat only ants or termites, which are among the most abundant insects in many terrestrial communities. Among birds, snail kites (Rostrhamus sociabilis) are perhaps the most specialized predators. They feed almost exclusively on snails of the genus Pomacea, using their highly hooked bills to extract these abundant snails from their shells. Some seed predators are also highly specialized to attack the seeds of only one or a few plant species. (Seed consumption is considered predation because the entire living embryo of a plant is destroyed.) 
Crossbills exhibit one of the most extreme examples of specialization. These birds have beaks that allow them to pry open the closed cones of conifers to extract the seeds. The exact shape of their bills varies among populations and species in both North America and Europe. Experiments on red crossbills (Loxia curvirostra) have shown that different populations of these birds have bill sizes and shapes that have been adapted to harvest efficiently only one conifer species. Hence, red crossbills are a complex of populations, each adapted to different conifer species.
Homes of the World
The children will explore the different homes of people of the world. They will be shown how our homes can be of different designs and made of different materials depending on where in the world we live. Children love to draw their homes. It is one of the first things they paint when they get to paint at the easel in school. Show them pictures of homes throughout the world. Talk about the materials they are made from. Talk about textures on the surfaces of the homes, and why certain materials are used in different parts of the world. Tell them about what an architect does. Discuss how a home is part of the history of their family. Complete Lesson Plan: Homes of the World (PDF)
Early childhood education is all part of the development of your child.
Early Childhood Education: A Lot Happens When They’re Young
During your child’s early childhood development, a lot happens. How much changes? Well, here is an example of just what happens during the first year, from the Children’s Hospitals and Clinics of Minnesota website:
“As babies grow, so do their skills. While not specific to your child, this education sheet can help you know what skills your baby is likely to develop at what age. Babies develop at their own rate, learning some skills much quicker than others. Progress also starts and stops. As some skills are learned, your baby might go back to an earlier stage in some other areas for a time. If you have any questions or concerns about your baby’s development, talk with your doctor or nurse practitioner.
Gross motor skills — skills that use large muscles (legs, arms, trunk, and neck):
• 1 month: head bobs when held upright, kicks feet when on back
• 2 months: lifts head briefly when placed on tummy
• 3 months: lifts head 2 to 3 inches off surface and pushes up with forearms when placed on tummy
• 4 months: straightens legs when feet touch a flat surface; when on tummy, lifts head and chest off surface
• 5 months: rolls from tummy to back (always supervise to avoid falls); pulls self forward to sitting position when you hold baby’s hands
• 6 months: sits briefly without support; rolls from back to tummy
Remember: Infants should only be on their tummies when they are awake. If your baby falls asleep, turn him or her onto the back for safe sleeping.
Fine motor skills — skills that use small muscles (hands and fingers):
• 1 month: hands are in fists most of the time
• 2 months: holds a rattle when placed in the hand
• 3 months: reaches toward dangling objects and people’s faces
• 4 months: holds and shakes rattle, plays with and watches their own fingers
• 5 months: reaches for a toy with two hands; uses whole hand to grasp a toy
• 6 months: passes a toy from one hand to another; bangs toy on table.”
And that’s just the first year of childhood. But there is much more that can help your child grow and develop, namely the right education.
How a Preschool in Gilbert AZ Can Help
A preschool in Gilbert, AZ can help your child to grow in other ways. For example, children start to thrive in an environment where they learn from each other, and it isn’t always just sitting down and learning. Surprisingly, playing can help your child develop social skills, establish connections, and even help with their language skills. That is something to look for in any preschool.
Kid’s Corner Preschool & Childcare
1450 N Gilbert Rd
Gilbert, AZ 85234
The Pagami Creek fire burning in the Boundary Waters is the biggest the area has seen in more than a century. But historically, big forest fires used to be commonplace in that area. In fact, they're part of a natural process that rejuvenates the ecosystem. Lee Frelich, a forest ecologist at the University of Minnesota, spoke with MPR's Morning Edition about the role fire plays in a forest. Cathy Wurzer: What do you find in the aftermath of these fires? Lee Frelich: You find that a lot of trees within the fire perimeter are still alive and they are the seed source for the future forest. You find areas that are as black as a moonscape and burned down to bare rock. It's very interesting the things you see pop up after the fire. We'll be having several Ph.D. students at the University of Minnesota looking into the aftermath of the big fires we've had in recent years. Wurzer: There have been several fires in the last decade. What is the frequency of these types of fires? Frelich: It looks like the frequency of big fires is increasing in the last decade. But if you look back historically for the last 300 years, during the 1600s, 1700s, and 1800s a fire like the Pagami Creek fire occurred about once a decade on average, and then during the 1900s there were hardly any fires of significant size in the Boundary Waters, and now we seem to be returning to a period with big fires again, so I think the anomaly is the 1900s when we had fewer fires. Wurzer: Does the climate factor into why we're seeing more warm, dry weather? Frelich: With the much warmer climate that we're experiencing in northern Minnesota today, and the uneven distribution of rainfall where lots of rain falls in a short period then there's several weeks where no rain falls, I think this new pattern of extreme droughts and floods will be conducive to fires because you get those long gaps without rainfall.
That's more characteristic of the climate you see on the Great Plains than a forest climate, which tends to have more even rainfall. So it may be an indication that the savannahs and prairies of the west are beginning to shift into Minnesota. Wurzer: Should more have been done initially to put out the fire that started by lightning in the Pagami Creek area? Frelich: That's almost impossible to judge. This is one case where there's no 20/20 hindsight. You never know if they would have been able to put it out when it was small or if some tiny little pocket would have survived that was unnoticed and would have blown up later. It's just difficult to tell what the alternate trajectory would have been. Wurzer: Why are fires good for forests? Frelich: These fires are necessary because the species of trees that grow in the Boundary Waters are fire dependent. Jack pine and black spruce, for example, have closed cones that don't open unless there's a fire. Even the birch and aspen, their roots survive and they sprout after a fire. In fact, in the absence of fire, spruce, fir and cedar tend to take over and you have a more homogeneous landscape. When you have these fires occurring in different places across the landscape then you get a mosaic of forest in different stages of succession, and there are species of wildlife that use all the different stages. Moose would probably benefit from having a lot of younger forests. We think these recent fires will produce a lot of young birch saplings, and that's one of the favorite foods of the moose. So having a mixture of different forest ages on the landscape is something that will happen if fires occur on a regular basis over time. Wurzer: It's been four years since the Ham Lake fire. What are we seeing in that area now? Frelich: There are birch saplings and aspen saplings. In a few places there are a few jack pines.
It appears that white pine was hit very hard by the combination of blow-down followed by fire, and white pine is virtually exterminated, unfortunately. There's a little bit of red pine regeneration, a little bit of spruce and fir, but basically there's a lot of birch and aspen, and some places have a lot of shrubs too, like blueberries and raspberries. It's about four years after a fire that's the best berry-picking, actually. Wurzer: Are there any ecological dangers at all from a fire like this? Frelich: The only ecological danger is if some invasive species could take advantage of the fact that the forest has been disturbed and jump in at that point and take over. That's why it's important to keep invasive species away from the Boundary Waters, so things like buckthorn, garlic mustard and Canada thistle have less of an opportunity to jump in. As long as native species are the ones able to take advantage of the disturbance, it's generally a good thing for managing a healthy ecosystem. (Interview transcribed by MPR reporter Elizabeth Dunbar)
This year has an extra day - February 29th. But why? The short answer is that an extra day is added to the calendar in order to keep the calendar year in better synchronisation with the position of the sun in the sky relative to the earth, otherwise known as the seasons. “Since the tropical [solar] year is 365.242190 days long, a leap year must be added roughly once every four years (four times the fractional day gives 4 × 0.242190 = 0.968760, which is approximately 1). In a leap year, the extra day (known as a leap day) is added at the end of February, giving it 29 days instead of the usual 28.” Without leap years, today’s date would be the 17th June, 2005! Now that might be dandy, having June weather and day lengths in February, but by August it would be like December. Moreover, the weather and daylength would shift each year, causing continuing and changing calculational problems for transport companies and anyone else relying on the weather and day lengths remaining fairly constant.
- Leap days were instigated by Julius Caesar in 46 BC, as part of his reform of the calendar. A leap day was to be added every four years, with 90 days added in 46 BC in order to ‘catch up’ the desynchronisation that had accumulated by that time. In about 9 BC, it was discovered that the priests had been adding a day every three, not every four, years; so no more leap days were added until 8 AD. Thus the dates of leap years from the start of the Julian calendar were: 45 BC, 42 BC, 39 BC, 36 BC, 33 BC, 30 BC, 27 BC, 24 BC, 21 BC, 18 BC, 15 BC, 12 BC, 9 BC, 8 AD, 12 AD and every fourth year after that.
- There is a leap year in every year divisible by four, except in years which are divisible by 100 but not by 400. This secondary rule is to provide a further correction for having added a whole day every four years, rather than the precise amount—0.242190 does not equal 0.25 (1/4).
Thus, in our current Gregorian calendar, 97 years out of every 400 are leap years, giving the total number of days in 400 years as (400 × 365) [years given as days] + (100 − 3) [number of leap days in 400 years] = 146,097 days. Note that, if there were no leap years, the number of days in 400 years would be (400 × 365) = 146,000 days. Between 0 AD and 2000 AD, there have been 5 periods of 400 years. Thus, there have been (5 × 97) − 1 = 484 leap days. 484 / 365 = 1 year 119 days. So, without leap years and their leap days, today’s date would be [(2004y60d + 1y119d) = 2005y179d], or 28 June, 2005. Except it would not, because the Gregorian calendar required 11 days to be subtracted from the Julian date, the Julian calendar introducing an error of 1 day every 128 years. So, without leap years, today’s date would be 17 June, 2005 [if my calculations are right].
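The divisibility rules above translate directly into code. A short Python sketch (the function name is mine) that reproduces the 97-leap-years-per-400 figure:

```python
def is_leap(year):
    """Gregorian rule: every year divisible by 4 is a leap year,
    except century years, which must also be divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 2000 is divisible by 400, so it is a leap year; 1900 is not.
print(is_leap(2000), is_leap(1900))  # True False

# Count leap years in one 400-year cycle and total the days.
leap_days = sum(is_leap(y) for y in range(1, 401))
print(leap_days)              # 97
print(400 * 365 + leap_days)  # 146097
```

(The Python standard library offers the same check as `calendar.isleap`.)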
Until the thirteenth century, the Nubian kingdoms proved their resilience in maintaining political independence and their commitment to Christianity. In the early eighth century and again in the tenth century, Nubian kings led armies into Egypt to force the release of the imprisoned Coptic patriarch and to relieve fellow Christians suffering persecution under Muslim rulers. In 1276, however, the Mamluks (Arabic for "owned"), who were an elite but frequently disorderly caste of soldier-administrators composed largely of Turkish, Kurdish, and Circassian slaves, intervened in a dynastic dispute, ousted Dunqulah's reigning monarch and delivered the crown and silver cross that symbolized Nubian kingship to a rival claimant. Thereafter, Dunqulah became a satellite of Egypt. Because of the frequent intermarriage between Nubian nobles and the kinswomen of Arab shaykhs, the lineages of the two elites merged and the Muslim heirs took their places in the royal line of succession. In 1315 a Muslim prince of Nubian royal blood ascended the throne of Dunqulah as king. The expansion of Islam coincided with the decline of the Nubian Christian church. A "dark age" enveloped Nubia in the fifteenth century during which political authority fragmented and slave raiding intensified. Communities in the river valley and savanna, fearful for their safety, formed tribal organizations and adopted Arab protectors. Muslims probably did not constitute a majority in the old Nubian areas until the fifteenth or sixteenth century. Source: U.S. Library of Congress
The human body typically needs a significant amount of energy in the form of calories to perform basic functions. These functions are activities such as breathing, cell production, protein synthesis, blood circulation, and others. Basal Metabolic Rate, or your BMR, is an estimate of the amount of energy (i.e., calories) that your body would burn if it were to rest for 24 hours. BMR does not include the calories you burn from daily activities or exercise.
How Basal Metabolic Rate is Measured
You should be aware of your BMR if you’re trying to lose weight. There are three ways to measure it:
- It can be tested in the lab under a restrictive setting
- Using an online calculator
- Using a formula given by scientists
The most common formula used by experts to calculate BMR is the Harris Benedict Equation. It estimates your BMR, which can then be multiplied by an activity factor to determine your total daily energy expenditure (TDEE). The equations differ for men and women:
Men: 88.4 + (13.4 × weight in kilograms) + (4.8 × height in centimeters) – (5.6 × age)
Women: 447.6 + (9.25 × weight in kilograms) + (3.10 × height in centimeters) – (4.33 × age)
Note that the weight and height are in kilograms and centimeters respectively. You must convert if you want to use pounds and inches.
Factors Influencing the Basal Metabolic Rate
Age: BMR is highest during the first few years of your life. As you grow older, your muscle mass decreases. Therefore, you need to make up for your slower metabolism by eating healthy and exercising.
Gender: Women have a lower BMR than men regardless of age. This is because women have more body fat and less muscle mass. (1)
Muscle mass: As you increase your muscle mass, the body requires more energy to perform its essential functions, which boosts the metabolic rate.
Body temperature: As your body temperature increases, the body takes up more energy to cool down. A high body temperature increases BMR by about 13 percent for each degree Celsius of fever.
Nutrition: The body needs to burn more calories than you consume to lose weight.
If the body lacks sufficient calories, the BMR drops. To ensure that it stays on track, always consume a balanced diet.
Sleep: BMR is 10 percent lower when you’re sleeping than when awake. That is because the muscles are relaxed. (2)
Activity level: You burn more calories when you are active. Regular exercise will increase your BMR.
Pregnancy: During pregnancy, the BMR increases, especially during the last trimester, when it rises by 15 to 25 percent. The increase is due to the weight gained during that period.
To reach or maintain a healthy weight, it is ideal to know how BMR works and the total number of calories you burn every day. With that information, it’s easier to determine the changes you need to make in your lifestyle. It is also best to track your BMR before embarking on a new diet or exercise program; this will help you keep track of the results.
1. Ferraro, R., Lillioja, S., Fontvieille, A. M., Rising, R., Bogardus, C., & Ravussin, E. (n.d.). Lower sedentary metabolic rate in women compared with men. The Journal of Clinical Investigation. https://pubmed.ncbi.nlm.nih.gov/1522233/.
2. Pacheco, D. (2020). How Your Body Uses Calories While You Sleep. Sleep Foundation. https://www.sleepfoundation.org/how-sleep-works/how-your-body-uses-calories-while-you-sleep.
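The Harris Benedict arithmetic above can be sketched in a few lines of Python. This is a minimal illustration (the function name is mine; the coefficients are the rounded ones quoted in the article, so results are estimates only):

```python
def harris_benedict_bmr(sex, weight_kg, height_cm, age_years):
    """Estimated BMR in kcal/day using the rounded Harris Benedict
    coefficients quoted in this article (illustrative, not clinical)."""
    if sex == "male":
        return 88.4 + 13.4 * weight_kg + 4.8 * height_cm - 5.6 * age_years
    return 447.6 + 9.25 * weight_kg + 3.10 * height_cm - 4.33 * age_years

# Example: a 30-year-old woman, 65 kg and 165 cm tall
bmr = harris_benedict_bmr("female", 65, 165, 30)
print(round(bmr))  # about 1430 kcal/day, before any activity multiplier
```

Remember that this is resting expenditure only; TDEE is typically estimated by multiplying the BMR by an activity factor.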
Hypernatremia: Elevated blood sodium. Sodium is the major positive ion (cation) in fluid outside of cells. The chemical notation for sodium is Na+ (from natrium, a synonym for sodium). When combined with chloride (Cl), the resulting substance is table salt (NaCl). Excess sodium (such as from fast food hamburger and fries) is excreted in the urine. Too much (or too little) sodium can cause cells to malfunction, and extremes can be fatal. The normal blood sodium level is 135 - 145 milliEquivalents/liter (mEq/L), or in international units, 135 - 145 millimoles/liter (mmol/L).
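The reference range above can be expressed as a trivial classification helper. This sketch (the function name is mine) simply encodes the 135 - 145 mmol/L range from the definition and is illustrative only, not medical advice:

```python
def classify_sodium(na_mmol_per_l):
    """Classify a serum sodium value against the normal 135-145 mmol/L
    range given in the definition above (illustrative only)."""
    if na_mmol_per_l < 135:
        return "hyponatremia (low sodium)"
    if na_mmol_per_l > 145:
        return "hypernatremia (high sodium)"
    return "within normal range"

print(classify_sodium(150))  # hypernatremia (high sodium)
```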
Ever since humans began to live together in settlements they have felt the need to organize some kind of defense against potentially hostile neighbors. Many of the earliest city states were built as walled towns, and during the medieval era, stone castles were built both as symbols of the defenders' strength and as protection against potential attack. The advent of cannon prompted fortifications to become lower, denser, and more complex, and the forts of the eighteenth and nineteenth centuries could appear like snowflakes in their complexity and beautiful geometry. Without forts, the history of America could have taken a very different course, pirates could have sailed the seas unchecked, and Britain itself could have been successfully invaded. This book explains the history of human fortifications, and is beautifully illustrated using photographs, plans, drawings, and maps to explain why they were built, their various functions, and their immense historical legacy in laying the foundations of empire.
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills. This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
Thalidomide was first developed by Chemie Grünenthal, a German pharmaceutical company, in 1954. It was first marketed as a sedative and was far more effective than the alternative of the time – barbiturates. It was non-addictive, did not give rise to tolerance, and did not depress respiration. Soon after its release onto the market, it was discovered to be an effective anti-emetic (preventing vomiting) and was used to treat morning sickness, which usually occurs in the first trimester of pregnancy (during the period of organogenesis). The drug soon became widespread (sold in 46 countries) and was widely marketed and advertised as safe. It was only in the early 1960s that reports started to come in detailing occurrences of severe birth defects – 'flipper-like limbs' – with a significant increase in phocomelia (shortening of the limbs) and amelia (absence of limbs). The increase in defects could be linked to the drug only because these conditions are exceptionally rare in humans (about 1 in 4 million births). In 1961, Lenz in Germany and McBride in Australia confirmed thalidomide to be the cause of these teratogenic effects. This was the biggest man-made medical catastrophe, with 10,000 severe birth defects in children, of whom about 5,000 survived to adulthood. It is not known how many babies died in utero, but a marked increase in miscarriages and stillbirths was seen in this period, with estimates of over 100,000 additional 'deaths'. Thalidomide was withdrawn from the market in the UK in November 1961. Dr Frances Kelsey of the FDA was responsible for licensing drugs in the USA at this time, but she was concerned about the safety of the drug due to a known side effect – peripheral neuropathy, which causes numbness and tingling in the hands and feet. She also had concerns about its effects in pregnancy, so she did not license the drug for use in the USA. This averted the disaster in the USA, and she was given the President's Award for Distinguished Federal Civilian Service.
Very few organs in the body were unaffected in some way by thalidomide. Damage occurred most commonly in the limbs, but significant damage was also seen in the face, eyes, ears, genitalia, heart, kidneys and gastrointestinal tract. The exceptionally high infant mortality is attributed to internal organ damage. The mechanism of action of thalidomide is unknown, although it is still being studied, and a credible reaction cascade has been proposed from various hypotheses. Thalidomide (also known as alpha-N-phthalimidoglutarimide) exists in two forms. It has a chiral carbon, which means it can exist as two enantiomers: an R-enantiomer and an S-enantiomer. The S-enantiomer is thought to be teratogenic, and the R-enantiomer is thought to provide the sedative effect, but it is very hard to isolate a single form because the two interconvert in any aqueous environment (e.g. inside cells), without requiring enzymes to catalyse the reaction. The drug given to patients was a racemic mix of both enantiomers. It is further broken down into 18 metabolites with various molecular targets (cereblon, tubulin and nitric oxide), although the exact molecular effects and mechanisms are not fully understood. The thalidomide disaster was a tragedy, but some good came out of it in the pharmaceutical industry. Partly as a result of this disaster, drugs are now tested far more extensively than they were in the 1950s and 60s, and there has not been a repeat of the disaster. Drugs are tested using rabbits because they are more sensitive to teratogenic effects than mice, making them useful for analysing whether a drug causes birth defects.
Free Math Printable Worksheets For 6th Grade – There are various ways to teach the fundamentals of mathematics through worksheets. First grade is an important age for learning math. Children in that grade are too young to understand algebra; however, worksheets are a great tool for beginners in math. Students can use them to improve their number sense and reasoning skills, and worksheets can also help students learn about different mathematical concepts like fractions, decimals, trigonometric operations, and other math topics. You can even customize your math worksheets to suit your needs: you can change the number of problems, the font size, and the spacing, and you can use your own graphics to make the worksheets more appealing to your child. This will help focus your child's attention on the subject. There are many math-related websites where you can find free printable worksheets, although some offer only a limited range of subjects. Math.org is a great resource for comprehensive math information. You can find many free math worksheets online; these are great for learning basic concepts and applying them in a variety of contexts. If you are a parent, a math worksheet can help your child learn more about the fundamentals of math. These worksheets are free to download to your computer, and you can use them to teach your child. You can also download free printables from the website; these printable worksheets are a great introduction to division basics. Math worksheets are great for teaching problem-solving skills in preschool and kindergarten, and for helping your child improve their mathematical understanding. Many free math worksheets are widely available on various websites, and they can be used for practice at home or as a supplement to existing math programs. You and your child may find them useful, and there are many types of math worksheets available online.
Math worksheets can also be used by kids to learn about different topics. Each group of math worksheets has a question page and an answer-key page. The questions are simple to understand but can be challenging to answer. These printable worksheets will help your child become more confident in solving math problems and improve their math skills. The best math worksheets are those that focus on core concepts. They can help your child improve their concentration and confidence while studying, as well as their math skills. These worksheets are well suited to elementary school students, as they can help them work up to more difficult equations. They can help children learn the basics and improve their analytical skills. In the end, these worksheets will help your child become a better mathematician. They can also serve as exercises to improve their mental and physical abilities.
Definition of global warming and climate change
Global warming is the observed and projected increase in the average temperature of Earth's atmosphere and oceans. The Earth's average temperature rose about 0.6° Celsius (1.1° Fahrenheit) in the 20th century; see the temperature graphs below.
Fig. 2: Temperature increase over the last 150 years.
Fig. 3: Temperature increase over the last 25 years.
Depending on different assumptions about the future behaviour of mankind, projections of current trends, as represented by a number of different scenarios, give temperature increases of about 3° to 5° C (5° to 9° Fahrenheit) by the year 2100 or soon afterwards. A rise of 3° C (about 5° Fahrenheit) would likely raise sea levels over the long term by about 25 meters (about 82 feet).
Although the terms global warming and climate change are sometimes used interchangeably, both describe a climate that is getting hotter every day, and both trends are very dangerous to planet Earth.
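The temperature increases above are quoted in both Celsius and Fahrenheit. For a temperature *change* (as opposed to an absolute temperature), the conversion is simply ΔF = ΔC × 9/5, with no +32 offset. A quick sketch checking the figures quoted in the text:

```python
def delta_c_to_f(delta_c):
    """Convert a temperature *change* in degrees Celsius to degrees Fahrenheit.

    Note: no +32 offset is applied, because that offset cancels out
    when converting a difference between two temperatures.
    """
    return delta_c * 9 / 5

print(round(delta_c_to_f(0.6), 2))  # 1.08, reported as about 1.1 °F in the text
print(delta_c_to_f(3.0))            # 5.4 °F (the text's "about 5°")
print(delta_c_to_f(5.0))            # 9.0 °F (the text's "9°")
```

This confirms that the 0.6 °C / 1.1 °F and 3–5 °C / 5–9 °F pairs in the text are internally consistent.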
What is Osteoarthritis of the Hip? Osteoarthritis, also called degenerative joint disease, is the most common form of arthritis. It occurs most often in the elderly. This disease affects the cartilage, the tissue covering the ends of the bones in a joint. In osteoarthritis, the cartilage becomes damaged and worn out, causing pain, swelling, stiffness and restricted movement in the affected joint. Although osteoarthritis may affect various joints, including the hips, knees, hands, and spine, the hip joint is among the most commonly affected. Rarely, the disease may affect the shoulders, wrists, and feet. Causes of Osteoarthritis of the Hip Advanced age is one of the most common reasons for osteoarthritis of the hip. You may also develop osteoarthritis in the following cases: - A previous hip injury or fracture - A family history of osteoarthritis - Hip diseases such as avascular necrosis and other congenital or developmental hip diseases Symptoms of Osteoarthritis of the Hip You will experience severe pain that is confined to the hip and thighs, morning stiffness and a limited range of motion. Diagnosis of Osteoarthritis of the Hip Based on your symptoms, your doctor will perform a physical examination, X-rays and other scans, and some blood tests to rule out other conditions that may cause similar symptoms. Management of Osteoarthritis of the Hip There are several treatments and lifestyle modifications that can help you ease your pain and symptoms. - Medications: Pain-relieving medications such as NSAIDs and opioids may be prescribed. Topical medications such as ointments can be applied to the skin to relieve pain. If the pain is very severe, a corticosteroid injection can be administered directly into the affected joint to ease the pain. - Other treatments: Your physiotherapist will teach you exercises to keep your joints flexible and improve muscle strength. Heat/cold therapy, which involves applying heat or cold packs to the joints, provides temporary pain relief.
Lifestyle modifications are encouraged to control your weight and avoid extra stress on the weight-bearing joints. - Surgery: Hip joint replacement surgery is considered an option when the pain is so severe that it affects your ability to carry out normal activities.
Use this activity to create an obstacle course outdoors for your child to ride her bike through. - 20 to 30 minutes - Scooters, bikes, and other vehicles Draw an obstacle course to ride through and around. - With more than one child, have participants begin together at one line and finish together at another, emphasizing control of speed and cooperation. - Use cones or other objects instead of chalk lines. - Start in small groups at opposite ends of the course and ride past each other slowly without touching or bumping. - Assign a safe stopping place and have each child take a turn riding around the area and return to a spot to get off for the next person's turn. - Be a traffic director and have them follow your arm movements.
Vocabulary Learning Strategies and Foreign Language Acquisition Second Language Acquisition (SLA), as a field of scientific research and a foundation of contemporary language instruction, is still a relatively young discipline. Historically, second language instruction was either not grounded in any scientific theory (e.g. the Grammar-Translation Method) or was grounded in conclusions derived partly from valid linguistic theories and partly from general theories of learning (e.g. the influence of structural linguistics and behaviourism on the development of the audiolingual method). The Grammar-Translation Method was based on the fundamental assumption that learners will learn the target language simply by following the teaching method, whereas according to the audiolingual method the learner is conceived of as a passive recipient of the programme, whose intervention would seriously interfere with the desired automatic reaction. These theories received severe criticism from new opposing theories, such as the interlanguage theory, which views the learner as a creator of rules and errors as evidence of positive efforts by the learner to learn (Selinker, 1972). The new theories prompted two general directions in SLA research: Rubin (1975) began her work on raising awareness of the learning strategies responsible for language learning success, and Krashen (cf. 1981) proposed his influential theory stating that, for language acquisition to occur, learners need natural, authentic communication rather than direct instruction. Owing to this idea, Krashen has often been recognised as the originator of the communicative approach to second language teaching. In addition to the above-mentioned approaches and methods, there is a host of other methods, often referred to as alternative, that have, in their own ways, influenced second language instruction.
In general, language instruction today clearly reflects recognition and appreciation of the values and contributions of various methods and approaches.
Become a critical reader by reading opposing viewpoints about contemporary issues, practicing close reading, notetaking, and summarizing. This is an introduction to critical reading and writing. Genres include short stories, journalistic writing, essays, and poetry. Classroom exercises develop important literary analytical tools including compare/contrast, cause/effect, and prediction. Students write a variety of compositions on the results of their analyses and the literary themes expressed in the texts. They also write an original work.
In sum, Machiavelli and Rousseau lived entirely different lives, and even though they did not really agree with each other's ideas, they did have similarities in their thought. Machiavelli and Rousseau both disliked factions – groups with a political purpose, often described as a "party within a party." Both of them distinguish between "conflicts that serve to protect and even invigorate the foundational principles of liberty from those that seek to advance private interests." They believe that conflict between the public and their leaders is necessary at times.
The time of the Reformation was a time of heavy politics, political wars, and religious attacks and conflicts. This was a period of growth for some countries, such as England, and a time of decline for other countries, like Spain. These two particular countries, England and Spain, had two very powerful rulers who helped determine the fate of their nations. Phillip II of Spain was born into a very powerful family of extremely Machiavellian heritage. He had control of the Netherlands, Spain, parts of North and South America, and parts of Asia and Brazil. He was also extremely Catholic and loyal to the Catholic Church. Queen Elizabeth I of England inherited a small country divided between Lutherans and Catholics, but she would turn out to be one of the greatest rulers of England in history. These two rulers would go head to head until their deaths, and while England rose up, Spain began to decline. Although Phillip II of Spain was a very Machiavellian ruler, Elizabeth I of England was much more Machiavellian, for Phillip of Spain was not governed by necessity, as Machiavelli advised; he was feared by his subjects, not loved; and Queen Elizabeth I of England was an effective ruler and a near perfect example of the Machiavellian Prince. "Machiavelli identifies the interests of the prince with the interests of the state."
He felt that it was human nature to be selfish, opportunistic, cynical, dishonest, and gullible, which, in essence, can be true. The state of nature was one of conflict; but conflict, Machiavelli reasoned, could be beneficial under the organization of a ruler. Machiavelli did not see all men as equal. He felt that some men were better suited to rule than others. I believe that this is true in almost any government. However, man in general was corrupt – always in search of more power. He felt that because of this corruptness, an absolute monarch was necessary to ensure stability. Machiavelli outlined the characteristics this absolute ruler should have in The Prince. One example of this can be seen in his writings concerning morality. He saw the Judeo-Christian values as faulty for the state's success. "Such visionary expectations, he held, bring the state to ruin, for we do not live in the world of the 'ought,' the fanciful utopia, but in the world of 'is'." The prince's role was not to promote virtue but to ensure security. He reasoned that the Judeo-Christian values would make a ruler weak if he actually possessed them, but that they could be useful in dealing with the citizens if the prince seemed to have these qualities. Another example can be found in Machiavelli's ideal characteristics of a prince.
While corruption existed in the Church during the Renaissance, the Reformation was as much about politics, theology, and individualism as it was about rooting out corruption. When looking at the religious values that guide human choices, why did Martin Luther break away from the Catholic Church? Machiavelli's interpretation of human nature was greatly shaped by his belief in God. In his writings, Machiavelli conceives that humans were given free will by God, and the choices made with such freedom established the innate flaws in humans. Based on that, he attributes the successes and failures of princes to their intrinsic weaknesses and directs his writing towards those faults.
His works are rooted in how personal attributes tend to affect the decisions one makes, and they focus on the singular commanding force of power. Fixating on how the prince needs to draw the people's support, Machiavelli emphasizes the importance of doing what is best for the greater good. He proposed that working toward a selfish goal, instead of striving towards a better state, should warrant punishment. Machiavelli was a practical person and always thought of pragmatic ways to approach situations, which applied to his notions regarding politics.
Machiavelli concentrated more on the way things should be and how to manipulate them for his own personal gain rather than for the betterment of the state. He was well known for being a political thinker who believed that outcomes justified why things happened. A key aspect of Machiavelli's concept of the Prince was that "men must either be caressed or annihilated" (Prince, 9). What Machiavelli meant by this…
During the 1500s, a movement away from traditional Catholicism started to take hold. The most notable figure during this time was Martin Luther. He had ideals that, at the time, were extremely radical. As Gerald Strauss put it, "His doctrine of the two realms – the kingdom of Christ and the kingdom of the world, derived directly from Augustine – entailed the strictest segregation of things spiritual and things material" (22). He did not believe that the people of the church had any right to control the population at large. He believed that they were meant to be spiritual guides, not rulers, and that they wielded far too much control over the common people. One of the most radical things that he did, which was also the most influential…
Niccolò Machiavelli was devoted to analyzing power. He believed firmly in his theories, and he wanted to persuade everyone else of them as well.
To comment on the commonly assumed relationship between moral goodness and the legitimate authority of those who held power, Machiavelli said that authority and power were essentially coequal.[9] He believed that whoever had power obtained the right to command, but goodness does not ensure power. This implied that the only genuine concern of administrative power was the attainment and preservation of power, which indirectly guided the maintenance of the state. That, to him, should have been the objective of all leaders. Machiavelli believed that one should do whatever it took, in the given circumstances, to keep his people in his favor and to maintain the state. Thus, all leaders should have both a sly fox and a ravenous wolf inside them, prepared to be released when necessary.[10]
Martin Luther was a very important Christian figure of the Reformation. He began questioning the Roman Catholic Church, and soon he gained followers who split from Catholicism and began the Protestant tradition. Luther did not want to form a new church or go against the religious order of medieval Europe. He wanted to end the wrongs that were occurring in the churches and reform morals. (Historical Context)
Machiavelli considers society an immoral place. According to Machiavelli, as stated in The Discourses on Livy, "for as men are, by nature, more prone to evil than to good". The Prince is a manual for being a successful ruler in an immoral society, and often that success is met by committing immoral acts. Machiavelli, an outsider to the inner workings of government, gives what he thinks are the critical tools for being a successful ruler in modern society. "Sometimes you have to play hardball" is a saying from today that I relate to his philosophies. Niccolo Machiavelli is a very pragmatic political theorist. His political theories are directly related to the then-current bad state of affairs in Italy, which was in dire need of a new ruler to bring order to the country.
Some of his philosophies may sound extreme, and many people may call him evil, but the truth is that Niccolo Machiavelli's writings were aimed only at fixing the corruptions and cruelties that filled the Italian community at the time, and he wrote what he believed to be the most practical and efficient way to deal with them. Three points that Machiavelli illustrates in his book The Prince are: first, that "it is better to be feared than loved"; the second…
He sees no purpose in restraining and controlling oneself for the society, because the society will not prosper if the ruler does not. Ruthlessness, maliciousness, and deviousness are all hailed as acceptable – in fact encouraged – as means of securing positions of power. Through his prioritizing, Machiavelli does not seem to be as concerned with society and the individual as the previous philosophers in history have been. Rather, he sees power as the one and only goal in life, regardless of the individual or the state. Again, though, he is a reflection of his times. The men of the Renaissance era wanted many things – money, power, enjoyment in life – regardless of the moral cost. Others would argue that these superfluities either meant nothing or would not occur without restraining the desires of both one's self and one's state. One needs balance in everything in order to reach the ideals of perfection, but Machiavelli would argue that perfection is not real and so is not worth striving for. Instead, one must live for one's self.
He makes a generalization about men…
In essence, Machiavelli's ideal principality sustains a genuine sense of morality behind the violence that "must be subjected in order to maintain stability." Looking at his plans subjectively, Machiavelli's lowering of politics has an impact on the way ordinary subjects and citizens behave. A prince, according to Machiavelli, should be loved, but, most important to him, this sovereign should be feared. Citizens need to obey, follow regulations, and be faithful to the ruler; they are expected to honor and fight for their sovereign. In general, Machiavelli does not go into much detail about the duties of the people, but he explains that by teaching the prince how to manage the system, he is working for the sake of the people. As Machiavelli explains, a prince should follow two policies, one of which is that a sovereign must keep laws balanced and unchanged when conquering new territories – "not to change their laws or impose new taxes" (Machiavelli's The Prince, page 8). What he means by this is that a sovereign should respect customs and traditions – the way people live.
While Martin Luther reinforces Aquinas' concept of how the state with a virtuous ruler is required to preserve peace, punish the unjust, and restrain the wicked in society, he evolves the concept one step further. His central argument with regard to the concept of the state centers on the idea that there is truly a division between the Church (spiritual power) and secular authority, even though both were needed and both complemented each other. More important, he vehemently argues that the Church had no domination over matters that are temporal or earthly. He affirms this idea when he says, "It is necessary to clearly distinguish the two regimes and preserve both: one to produce justice, the other to maintain external peace and prevent evil works. Neither is sufficient in the world without the other."
(Luther, De la Authoridad Secular: Hasta Que Punto Se Le Debe Obediencia, 1523)