"Apprehend" means to take into custody or grasp mentally. While "apprehend" sometimes means "understand," it is best to use "comprehend" because it's easier for most people to understand. For the purposes of this exercise, always use "comprehend" for "understand." The noun form, "apprehension," means a foreboding or dread of something. Example: Please apprehend the criminal. "Comprehend" means to grasp mentally or understand fully. Example: Do you comprehend this material? The adjective "comprehensive" means all-inclusive or having a wide range. Example: Final exams are usually comprehensive because they include questions on all the material covered in a semester. Practice What You've Learned Exercises are reserved for account holders. Please log in.
What is bipolar disorder? Individuals with bipolar disorder classically have cycles of depression alternating with euphoric/irritable mood states (called mania). There are several disorders of mood in addition to the depressive disorders that involve depression as well as manic or hypomanic mood states. The additional mood disorders are as follows: - Bipolar I disorder - Bipolar II disorder - Cyclothymia - Mood disorder not otherwise specified A manic episode is defined as a period of euphoric and/or irritable mood that lasts at least 1 week; it is characterized by a decreased need for sleep, racing thoughts, the need to keep speaking, inflated self-esteem or grandiose thinking, and excess goal-directed activities. The same group of symptoms, lasting at least 4 days, also defines a hypomanic episode, but the severity is judged to be less. Individuals in the midst of a manic episode can become psychotic and require hospitalization. In bipolar I disorder, the person must have a history of at least one manic episode. The number of depressive episodes can range from none at all to many. Classically, an afflicted person alternates between episodes with normal mood in between. However, cycles can consist of any frequency of mood states in any order. Bipolar II disorder consists of depressive episodes alternating with hypomanic episodes only (no mania). In cyclothymia, no major depressive episode has occurred, but mild depressive episodes alternate with hypomanic states. Mood disorder not otherwise specified is a diagnosis of exclusion: a mood disorder is considered present, but the criteria have not been met for the other conditions in the DSM-IV-TR. In someone presenting with depression, these conditions can only be excluded by a thorough history of past symptoms and episodes. Sometimes, however, the patient does not recall such episodes, and a bipolar condition only comes to light after treatment for depression has been initiated.
A fun animated version of the fairytale, introducing an investigation into friction. Would make a good stimulus for a ramp investigation. A TES whiteboard activity that could be used as a follow-up to the Rapunzel clip. A virtual version of the classic car-on-a-ramp (friction vs. gravity) experiment. Choose either ramp angle or surface as the variable. Now run a series of tests, graphing the results. This activity asks the pupils to identify the forces used to move an object. Several examples are given where the force (push, pull or twist) needs to be identified, e.g. to push a door open. The pupils are then asked to sort a collection of toys into those which are used by pulling or pushing. Four primary teachers present their Great Lesson Ideas for teaching forces and motion in science. A fun activity to help children learn about the effects of forces. Includes a worksheet. A fun activity to help children learn about pushing and pulling forces. A BBC Bitesize resource for identifying different forces. It includes a short animation about forces and short knowledge-check questions.
For some cultures, December marks the season for holidays. Christmas, Hanukkah, Kwanzaa, and sometimes even Milad un Nabi are just a few of the celebrations that American citizens will commemorate this month – not to mention the closing of the calendar year. But December is also observed for another, lesser-known reason: it is the Universal Month for Human Rights. So what does this mean exactly? It’s important to first understand how the Universal Month for Human Rights started. It began in 1948, when the United Nations drafted a document called the Universal Declaration of Human Rights. This happened in the aftermath of the Second World War, because the U.N. wanted to prevent a repeat of the atrocities that had occurred. They created the document as a way to properly define which human rights would be protected universally. The very first article of this declaration makes its purpose clear. It states: All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood. The rest of the document lists what these rights are. It emphasizes how important it is to work towards protecting freedom for all in order to keep peace. How can you observe the Universal Month for Human Rights? There is a lot of turmoil in the world. Open up any newspaper or look at any Facebook or Twitter feed and see the many challenges our planet is constantly facing. One of the most important things you can do throughout the course of this month – and even beyond – is to find common ground with the people around you. We must remember that all human beings were born into the same world we were and that, despite our differences, we must learn to function here together. Human Rights Month is about acknowledging that people of different races, religions, cultures, and beliefs are still just that: people. We must be careful not to differentiate ourselves from others so much that we forget this. Take the time to learn about a culture that is different from yours – perhaps a culture that makes you nervous or uneasy. Research its history or perhaps make a new friend who is a member of that culture. You’ll start to see quickly how similar all people really are. You’ll start to see just how important it is that everyone be treated with dignity and respect.
Fluids are defined as substances that flow or deform under an applied shear stress. A fluid has no definite shape of its own; it assumes the shape of its container. Liquids and gases are both fluids. Depending on their behavior, fluids are divided into two types: - Newtonian fluids - Non-Newtonian fluids Both types are discussed briefly below. Fluids that obey Newton's law of viscosity are known as Newtonian fluids. Newton's law of viscosity states that the shear stress on a fluid element layer is directly proportional to the rate of shear strain, as the formula below shows. Examples of Newtonian fluids: water, air, kerosene. Fluids that do not obey Newton's law of viscosity are known as non-Newtonian fluids; for these, the relationship between shear stress and rate of shear strain is nonlinear. Examples of non-Newtonian fluids: colloids, thick slurry, emulsions.
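In standard notation (the symbols are not defined in the original text and are given here for clarity), Newton's law of viscosity reads

\[ \tau = \mu \frac{du}{dy} \]

where \(\tau\) is the shear stress (Pa), \(\mu\) is the dynamic viscosity (Pa·s), and \(du/dy\) is the rate of shear strain, i.e. the velocity gradient across the fluid layers (s⁻¹). For a Newtonian fluid \(\mu\) is constant, so a plot of \(\tau\) against \(du/dy\) is a straight line through the origin; for a non-Newtonian fluid, the apparent viscosity itself changes with the shear rate.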
Pack ice in northern McMurdo Sound showing the range of floe sizes and levels of aggregation. The complex web of leads (large fractures) in between the floes is a critical but highly variable area for heat and light transfer. Photo: Craig Stevens/NIWA Seen from space, the annual cycle of Antarctic sea ice formation and loss is one of the most dramatic natural phenomena on Earth. Such an expanse of sea ice inevitably plays a huge role in defining the Antarctic marine environment, but what’s less well appreciated is the extent to which the formation of Antarctic sea ice also affects ocean circulation across the globe. Recent concerns over apparent changes to Antarctic sea-ice dynamics have led scientists to contemplate the potentially dire global consequences of a slow-down in sea-ice production. The processes that intertwine sea-ice formation and melting, weather and ocean circulation are extremely complex. An improved understanding of the mechanisms linking Antarctic sea ice to global ocean circulation is crucially important for quantifying and responding to global risks. Sea ice factories Seawater freezes at -1.8°C. Sea ice forms when the sea reaches this temperature and the air temperature falls sufficiently below it to allow ice formation at the surface. Most sea ice forms in relatively few locations known as polynyas. Polynya is a Russian word that refers to a naturally-formed area of open water surrounded by sea ice. These ‘sea-ice factories’ can be coastal or in the open ocean. They occur where persistent winds and/or ocean currents move sea ice away from the area as fast as, or faster than, it forms. Conditions in these polynyas favour sea-ice production: the air and water are cold enough, and new ice constantly forms to replace the ice that is moved away. The newly-formed ice gradually thickens as it joins the ocean pack ice and moves away from the polynya. How sea ice formation influences global oceans The ocean is stratified (naturally separated into horizontal layers) due to density differences. Cold or salty water is dense and tends to sink below warmer, less salty water that is less dense. Mixing occurs through currents, winds and tides, and through heat slowly seeping deeper into the ocean. Mixing is generally slow. The formation of sea ice produces particularly dense water at the ocean’s surface. Not only is the water as cold as seawater can get (or else it would not freeze), but when it does freeze, little of the salt in the seawater forms part of the ice. The alignment of water molecules to form ice crystals leaves no room for the dissolved salt, which is squeezed out into the underlying water. The result is a layer of cold, salty and well-oxygenated water which, after mixing with relatively less cold ambient water, drops to the seafloor. This Antarctic Bottom Water makes up about 30-40% of the global ocean volume. So much of this dense water is formed in active polynyas that it creates a substantial downwelling current. Ocean moorings in the Ross Sea deployed by the Antarctic Science Platform, along with national and international collaborators, monitor this flow. It forms a current that moves northwards along the floor of the continental shelf, typically concentrated in submarine canyons, before it tumbles down the shelf break to the bottom of the Southern Ocean as a series of submarine waterfalls. This process is a major driver of global ocean currents.
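To make the density argument above concrete, a simplified, linearized equation of state for seawater can be written as follows (the coefficients are illustrative textbook-scale values, not figures from this article):

\[ \rho \approx \rho_0 \left[ 1 - \alpha \,(T - T_0) + \beta \,(S - S_0) \right] \]

Here \(\rho_0\) is a reference density (about 1027 kg/m³), \(\alpha\) is the thermal expansion coefficient (very small near the freezing point), and \(\beta\) is the haline contraction coefficient (roughly 8×10⁻⁴ per unit of salinity). Near -1.8°C the salinity term dominates, so each unit of salt rejected during freezing raises the density of the underlying water by roughly 0.8 kg/m³, which is enough, over an active polynya, to set that water sinking toward the seafloor.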
The cold, salty, oxygenated Antarctic Bottom Water moves out from the Southern Ocean toward the equator to begin a millennium-long journey around the planet. The oxygen transported this way is fundamental to ecosystem processes in the deep ocean. And of course, water sinking in Antarctica displaces water already at depth towards the surface. Rich in nutrients after centuries of recycling, this water completes the ocean conveyor belt circulation and drives ocean productivity. Production of Antarctic Bottom Water can be slowed by changes in sea ice. This can occur in various ways – for example, if less sea ice is formed, or if there is a reduction in the density of Antarctic Bottom Water due to ice being formed from sea water made fresher by the melting of coastal ice sheets. Recent modelling research has highlighted the effects that such changes could have on global climate, ecosystems and the ability of the oceans to absorb carbon dioxide. Designing a Ross Sea Ocean Observatory Understanding these complex processes and their interconnections requires different types of observations in different places brought together with modelling tools. This will provide critical information about what is happening right now and enable improved model results. Having a comprehensive network of ocean observations across the Ross Sea region is essential. We aim to continue to develop our time series observations at critical locations in the Ross Sea and then, through partnerships with other national programmes and SOOS (Southern Ocean Observing System), create an integrated and responsive ocean observing system. One example of a network of international observations is the RSfEAR (Ross Sea and far East Antarctic) network being discussed with Australia. Such a network would connect research on climate, ecosystems and sea-ice processes, advancing the Platform’s purpose of delivering excellent science to understand the impact of future changes in Antarctica.
Construction and observation For the most part, we have the semiconductor industry to thank for the construction of these wonder materials. In general, creating and manipulating wonder materials involves equipment that operates on an atomic level — and the machines that make computer chips are, by some margin, the most intricate tools created by man. With modern chips like Intel’s 22nm Ivy Bridge and Haswell, there are layers of silicon and metal oxides that can be measured in numbers of atoms, rather than nanometers. Likewise, these wonder materials often owe their wacky properties to features and patterns that are just a few atoms or nanometers across. Many wonder materials, such as graphene, are mass-produced using chemical vapor deposition (CVD), which is also a key process in the CMOS (silicon chip) industry. Likewise, the nanostructures that give metamaterials their ability to create invisibility cloaks can be created using lithography — the same process that is used by the CMOS industry to trace out the locations of billions of transistors on a chip. Creating something is one thing, but actually observing how and why it works is something else entirely. With a bridge, you can build it, and then use cameras, gyroscopes, and other sensors to measure how it reacts as cars drive across it. Observing individual carbon-carbon bonds in graphene, though, and measuring the voltage potential of a single electron passing over that bond, is considerably more tricky. It is only because of the scanning tunneling microscope (STM), which was developed by IBM in the ’80s, and other similar devices such as the atomic force microscope (AFM), that researchers can now build up an atom-by-atom image of a material. The STM, which has only reached maturity in the last few years, also has the ability to pick up and move single atoms, which makes it the instrument of choice for materials scientists looking to create new wonder materials, too. What a wonderful world it would be From space elevators to computers the size of a grain of sand, there are scant few areas of life that would be untouched by the maturity, mass production, and proliferation of wonder materials. The good news is that, at this point, it’s now more a matter of when, rather than if, graphene and co. will come to market. The bad news is that, as of today, there probably aren’t more than a few grams of graphene in existence, and the road to mass production will be long and expensive. Likewise, commercial metamaterials that can create superlenses and invisibility cloaks are still years away. Still, we are producing these wonder materials today, using tools that we’ve only really had at our disposal for a few years. Over the next decade, these materials will slowly work their way into military and space applications (as always), and then eventually the consumer market. More excitingly, though, just take a moment and think about all of the possible ways of arranging the hundred-odd elements that populate our universe, and the billions of structure and pattern permutations that are possible with our modern machines. In all likelihood, there are quite literally millions of other wonder materials still waiting to be discovered or synthesized, each with magical and never-before-seen properties that have the potential to change life as we know it.
Histamine in Foods other than Fish and Fishery products What is histamine? Histamine is a biogenic amine which occurs naturally in the human body. Histamine is derived from the breakdown (decarboxylation) of the amino acid histidine. It has important physiological functions related to local immune responses, gastric acid secretion and neuromodulation. See our other FAQ on Histamine in Fish and Fishery Products. What is histamine food poisoning? Histamine is involved in the body’s inflammation response, and if food or drink with raised levels of histamine is ingested, it can produce symptoms similar to those of an allergic reaction. What foods can pose a risk of histamine food poisoning? After fish, cheeses are reported to be the next most commonly implicated food in histamine poisoning. However, histamine production can occur in a wide range of other foods, particularly fermented foods such as fermented meats (e.g. sausage), wine, sauerkraut, miso, vegetables and soy sauce. What are the symptoms? Onset of symptoms of histamine food poisoning can range from several minutes to several hours following ingestion. Typically, the average incubation period before onset of illness is approximately one hour. Severity of illness varies depending on factors such as the amount of histamine present and the susceptibility of the affected person. Generally, observed symptoms are: - Abdominal cramps - Skin rash - Burning sensation of the mouth and lips - A peppery taste sensation How is histamine produced? Histamine is produced when the enzyme histidine decarboxylase, produced by a wide range of spoilage microorganisms, breaks down the amino acid histidine present in the food or drink. The production rate of histamine is temperature- and time-dependent and, in general, increases with increasing temperature. In addition, histamine accumulation is minimised at lower temperatures, since low temperatures slow down microbial growth and reduce enzyme activity. The optimum temperature for the formation of histamine by mesophilic bacteria has been reported to be between 20-37°C, while production of histamine decreases below 5°C and above 40°C. That said, the bacterium Morganella morganii is known to be a powerful histamine producer in seafood even at storage temperatures of 7-10°C. Which bacteria are involved? A wide range of bacteria are capable of producing histamine. Examples include Morganella morganii, Klebsiella spp., Pseudomonas, Clostridium, Citrobacter freundii, Lactobacillus buchneri and more. However, the primary bacteria responsible for producing histamine vary between different kinds of foods - for example, M. morganii in fish, compared to Lactobacillus buchneri in Swiss cheeses. How is histamine controlled? Strict control of the cold chain (≤ 5°C) during processing and storage will minimise histamine production by inhibiting the growth of histamine-producing spoilage bacteria and by reducing the activity of histidine decarboxylase, the enzyme responsible for histamine production. Can processing technologies prevent histamine production? Spoilage bacteria responsible for histamine production can survive a wide range of processing conditions, such as smoking, brining, salting, fermenting and drying. Histamine already produced can also survive these processes. Vacuum packaging is likewise not an effective method of preventing the production of histamine.
Histidine decarboxylase, the enzyme responsible for breaking down histidine into histamine, can remain active even after the bacteria that produced it have been inactivated or killed. The enzyme continues to produce histamine slowly at refrigeration temperatures, and it remains stable when frozen, allowing it to rapidly recommence activity after thawing. While the enzyme that produces histamine can be inactivated by cooking, once histamine has been produced it cannot be eliminated by cooking or freezing, and its toxicity remains intact. Last reviewed: 30/5/2018
Starting in the middle of October, students will be expected to independently: - write at least 2 sentences in the homework journal (a main idea and a detail sentence) - create a detailed illustration - reread their work - pack the homework folder in the backpack and return it to school. Writing homework in first grade serves primarily as a tool that: - enables students to become confident writers - gives students an opportunity to express their thoughts and experiences to their teachers - provides teachers with a daily assessment of student writing. Parents should not be involved in homework, except to make sure that it is done daily. The teachers want to see the mistakes and spelling habits of students. If parents are providing erasers and giving spellings, the teachers do not get a clear picture of what each student is capable of writing independently. Writing homework should not take more than 10-15 minutes! If your child is not finished after that time, please give them the choice to put it away. Also, please note that sometimes it is easier for children to draw a picture first, then write about it. The role of the parent at this time is to read aloud to their child for at least 10 minutes each day. Then, the parent should record the book’s title, the date, and their initials on the reading log found in the homework folder.
Drugs affect everyone differently. So when you’re prescribed a medication, what factors dictate how it will impact your health? It’s important to be mindful of these points to ensure that your medication is functioning effectively in your body and not causing preventable side effects. Factor 1: Age Infants and elderly people are the most susceptible to issues with medications. For both groups, the liver and kidneys function less efficiently than in the average adult. As a result, drugs that are broken down by the liver or excreted by the kidneys are especially problematic, since they can build up in the body. Also, elderly people tend to take more medications — and the more drugs you take, the more likely you are to have a problem caused by a drug interaction. Older people may also have more difficulty following instructions for medications, making it less likely that they take them correctly. Factor 2: Genetics Genetic differences impact how bodies process medications. There’s actually an entire field devoted to the study of genetic differences in response to drugs: pharmacogenomics. Genetics can make some people metabolize drugs more slowly than others, so medications can accumulate in the blood and cause toxicity. Half of all people in the United States possess a liver enzyme that works slowly to metabolize certain drugs. This can be problematic when taking those drugs, since these individuals’ bodies may destroy red blood cells and cause hemolytic anemia. Other people metabolize drugs too quickly, and drug levels in the blood are never high enough for their medications to be effective. In another example, one in 20,000 people has a genetic defect that makes muscles overly sensitive to certain inhaled anesthetics. When these drugs are given with a muscle relaxant, a life-threatening disorder called malignant hyperthermia may develop, causing a very high fever, muscle stiffness and decreased blood pressure. The field of pharmacogenomics is new and evolving, but tests are being released that can help you understand how your body will respond to certain drugs so that you can avoid adverse reactions. Factor 3: Drug Interactions The effect a drug has on a person may be different than anticipated because that drug is interacting with: - Another drug the person is taking - A food, beverage or supplement - Another disease Drug interactions can increase or decrease the activity of one or more drugs, resulting in unwanted side effects or failed treatment. These interactions can occur with both prescription and nonprescription medications. There are over a dozen additional factors that affect how a drug behaves in the body, including psychological status, infection, disease, sunlight, exercise and lactation.
Researchers have developed a new protein-based sensor that can detect lanthanides, the rare earth metals used in smartphones and other technologies, in a more efficient and cost-effective way. The sensor changes its fluorescence when it binds to these metals. The protein undergoes a shape change when it binds to lanthanides, which is key for the sensor's fluorescence to "turn on", said the study. To develop the sensor, the researchers used a protein they recently described and subsequently used to explore the biology of bacteria that use lanthanides. Rare earth metals are found in varying quantities in smartphones. Neodymium, dysprosium, praseodymium, terbium, gadolinium and lanthanum are found inside displays and other electronic components of smartphones. These are further re-purposed in making components for cars, airplanes, lasers and even next-generation armour plating. "Lanthanides are used in a variety of current technologies, including the screens and electronics of smartphones, batteries of electric cars, satellites, and lasers," said Joseph Cotruvo, Assistant Professor. "These elements are called rare earths, and they include chemical elements of atomic number 57 to 71 on the periodic table," Cotruvo added. Extracting rare earths from the environment or from industrial samples, like waste water from mines or coal waste products, is generally very challenging and expensive. The protein-based sensor the researchers developed can detect tiny amounts of lanthanides in a sample, indicating whether it is worth investing resources to extract these important metals. China has a monopoly over rare earths, with an annual production of 105,000 metric tonnes. For comparison, its closest competitor, Australia, generates only 20,000 metric tonnes of rare earth metals. Further, China will impose restrictions on rare earth production beginning in 2020. The US-China trade war has made matters worse for electronics manufacturers, forcing them to resort to recycling these metals.
Getting Started with Java This video introduces the concept of classes and objects in Java. Classes and objects are the basis of object-oriented programming in Java. They allow you to represent any real-world or virtual-world object as your own data type, built from the predefined data types in Java. Create a class using the class keyword, then specify the properties (instance variables) of that class along with its actions or behavior (methods), which access or modify the properties, as sketched below.
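As an illustration of that pattern, here is a minimal sketch (not taken from the course video itself; the class name and members are hypothetical):

```java
// A class models a real-world object as a new data type,
// built up from Java's predefined data types.
public class BankAccount {
    // Properties (instance variables) hold the object's state.
    private String owner;
    private double balance;

    // The constructor initializes the properties.
    public BankAccount(String owner, double openingBalance) {
        this.owner = owner;
        this.balance = openingBalance;
    }

    // Methods (actions or behavior) access or modify the properties.
    public void deposit(double amount) {
        balance += amount;
    }

    public double getBalance() {
        return balance;
    }

    public static void main(String[] args) {
        // An object is an instance of the class, created with `new`.
        BankAccount account = new BankAccount("Asha", 100.0);
        account.deposit(50.0);
        System.out.println(account.owner + ": " + account.getBalance()); // Asha: 150.0
    }
}
```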
STEM (Science, Technology, Engineering, Math) education is essential for children of all ages and here is a fun, colorful activity book that offers kids a way to make math fun! Young mathematicians will find simple explanations and beautifully illustrated activities on each page of this totally fun book. What is STEM? What is math? Learn how to count coins. Learn to understand weights and measures. Learn about volume and multiplication. And all of it is done with fun, colorful quizzes, mazes, tables, and charts. Start a lifelong passion for STEM subjects and inspire problem-solving skills.
5 Ways to Foster Scientific Learning in Your Early Learning Classrooms Science in early childhood is all about predicting, exploring, and discovering answers through questions. Children learn scientific investigation skills through hands-on exploration and play. Here are five simple ways to help facilitate scientific learning in the early years: 1. Take them outside Measure, pour, transfer water, and get dirty. Science is all about hands-on learning, and the best place to get your hands messy is outside! 2. Encourage children to ask questions Begin the processes of prediction, planning, collecting, and data recording. Keep this simple and your children will ask more and more. If certain questions can be extended, incorporate them into other aspects of play that match the children's interests. 3. Ask them lots of questions too! Asking children "why", "what if", and "how" questions as they play will get them thinking critically about their actions and about the objects around them. 4. Grow something Plant seeds and allow children to experience hands-on scientific learning. Discuss water, light, sunshine, and all the things that plants need to grow, or experiment with these elements to see how different amounts of each affect plant growth. 5. Watch butterflies grow up and set them free! Every spring at Scholar's Choice we bring in Painted Lady caterpillars that anyone can take home or to school to watch grow. It is so exciting watching the tiny caterpillars grow and make their chrysalises, then releasing them to the wild.
MAKE YOUR OWN MUSICAL INSTRUMENT WITH A STRAW, LOLLIPOP STICKS AND SOME ELASTIC BANDS! - A straw - 2 lollipop sticks - Elastic bands - Place the 2 lollipop sticks together and fix them at each end with 2 elastic bands - Cut the straw and insert 2 short pieces into the gap between the lollipop sticks - Blow through the gap - Can you hear a sound? What happens if you move the 2 straw pieces closer together or further apart? Does the noise change? HEALTH AND SAFETY Ensure that the students are using children’s safety scissors.
Several fungi can attack the oak tree, and some fungi attack only certain species of oak. However, many diseases or insect infestations manifest similar symptoms. For example, browning of the leaves of the canopy of an oak tree could be a symptom of oak root fungus, oak anthracnose or an insect infestation, so further investigation is required to determine which disease is causing the problem. A plant’s structural cellulose, hemicellulose and even lignin can be destroyed by decay fungi. This decay is not seen on the outside of the oak tree. You will only be aware of it if the bark has been removed through injury or if there is an open wound somewhere on the tree. The destruction of the plant’s structure makes the diseased tree dangerous due to its instability and structural decline. A decaying tree cannot support the weight of its limbs and branches, and can easily be toppled over during a storm. Identification of Wood Rot Fungi Wood rot fungi are identifiable by their shape, color, and the formation of their fruiting bodies that appear on the tree. The fruiting bodies are called conks or brackets, and they can be found at wounds in the bark, by branch scars and often by the root crown. These fungi are divided into three groups: white rots, brown rots and soft rots. Oak anthracnose is a fungal disease that attacks the leaves and new growth of the oak tree. Fungal spores (Discula quercina) infect the tree, resulting in irregularly shaped brown spots on the leaves and a browning of the canopy. This disease is likely to show up when weather conditions are warm and wet, and the lower branches are most severely affected due to higher humidity levels. Oak anthracnose is cosmetic in nature; it does not kill the tree. One precautionary measure (to prevent the spread of the spores) is to prune oak trees in dry weather. They should not be pruned in April, May or June. Chemical sprays are not effective in combating oak anthracnose. Oak Root Fungus Oak root fungus, also called Armillaria mellea, attacks a wide range of trees, both broadleaf and coniferous. This fungus creates a white rot, which can be seen when the bark is removed. It appears below the soil line, between the bark and trunk, and also between the wood of the roots and the interior of the roots. Mushrooms, a symptom of oak root fungus, appear at the base of the tree after it rains. During wet conditions the fungus grows rapidly. To slow down the decay, the affected area needs to be exposed to the air and sunlight so that it can dry out. This can be accomplished by removing enough soil at the base of the tree to expose the decaying trunk and roots. Once they are exposed, the sunlight and air will remove the moisture, thus slowing down the decay. Conk is yet another fungus that can attack the oak tree as well as a large number of ornamental trees. It is also known as Ganoderma applanatum. Wounds provide the entryway for this disease, and the fungus can kill the sapwood or cause white rot of the sapwood. The conks are generally at ground level and appear in a semicircle that can be from 2 to 30 inches wide and up to 8 inches thick. (It is called artist's conk because if you cut the conk/mushroom off the tree and turn it over, you can draw or write on it.)
For decades, astronomers have been trying to see as far as they can into the deep Universe. By observing the cosmos as it was shortly after the Big Bang, astrophysicists and cosmologists hope to learn all they can about the early formation of the Universe and its subsequent evolution. Thanks to instruments like the Hubble Space Telescope, astronomers have been able to see parts of the Universe that were previously inaccessible. But even the venerable Hubble is incapable of seeing all that was taking place during the early Universe. However, using the combined power of some of the newest astronomical observatories from around the world, a team of international astronomers led by Tokyo University’s Institute of Astronomy observed 39 previously-undiscovered ancient galaxies, a find that could have major implications for astronomy and cosmology. WFIRST ain’t your grandma’s space telescope. Despite having the same size mirror as the surprisingly reliable Hubble Space Telescope, clocking in at 2.4 meters across, this puppy will pack a punch with a gigantic 300 megapixel camera, enabling it to snap a single image with an area a hundred times greater than the Hubble. With that fantastic camera and the addition of one of the most sensitive coronagraphs ever made – letting it block out distant starlight on a star-by-star basis – this next-generation telescope will uncover some of the deepest mysteries of the cosmos. The expansion of our universe is accelerating. Every single day, the distances between galaxies grow ever greater. And what’s more, that expansion rate is getting faster and faster – that’s what it means to live in a universe with accelerated expansion. This strange phenomenon is called dark energy, and was first spotted in surveys of distant supernova explosions about twenty years ago. Since then, multiple independent lines of evidence have all come to the same morose conclusion: the universe is getting fatter and fatter faster and faster. Neutron stars scream in waves of spacetime when they die, and astronomers have outlined a plan to use their gravitational agony to trace the history of the universe. Join us as we explore how to turn their pain into our cosmological profit. Exotic dark matter theories. Gravitational waves. Observatories in space. Giant black holes. Colliding galaxies. Lasers. If you’re a fan of all the awesomest stuff in the universe, then this article is for you. Since the 1960s, astrophysicists have postulated that in addition to all the matter that we can see, the Universe is also filled with a mysterious, invisible mass. Known as “Dark Matter”, its existence was proposed to explain the “missing mass” of the Universe, and it is now considered a fundamental part of it. Not only is it theorized to make up about 80% of the Universe’s mass, it is also believed to have played a vital role in the formation and evolution of galaxies. However, a recent finding may throw this entire cosmological perspective sideways. Based on observations made using the NASA/ESA Hubble Space Telescope and other observatories around the world, astronomers have found a nearby galaxy (NGC 1052-DF2) that does not appear to have any dark matter. This object is unique among galaxies studied so far, and could force a reevaluation of our predominant cosmological models. For the sake of their study, the team consulted data from the Dragonfly Telephoto Array (DFA), which was used to identify NGC 1052-DF2.
Based on data from Hubble, the team was able to determine its distance – 65 million light-years from the Solar System – as well as its size and brightness. In addition, the team discovered that NGC 1052-DF2 is larger than the Milky Way but contains about 250 times fewer stars, which makes it an ultra-diffuse galaxy. As van Dokkum explained, NGC 1052-DF2 is so diffuse that it’s essentially transparent. “I spent an hour just staring at this image,” he said. “This thing is astonishing: a gigantic blob so sparse that you see the galaxies behind it. It is literally a see-through galaxy.” Using data from the Sloan Digital Sky Survey (SDSS), the Gemini Observatory, and the Keck Observatory, the team studied the galaxy in more detail. By measuring the dynamical properties of ten globular clusters orbiting the galaxy, the team was able to infer an independent value of the galaxy’s mass – which is comparable to the mass of the stars in the galaxy. This led the team to conclude that either NGC 1052-DF2 contains at least 400 times less dark matter than is predicted for a galaxy of its mass, or none at all. Such a finding is unprecedented in the history of modern astronomy and defied all predictions. As Allison Merritt – an astronomer from Yale University, the Max Planck Institute for Astronomy and a co-author on the paper – explained: “Dark matter is conventionally believed to be an integral part of all galaxies — the glue that holds them together and the underlying scaffolding upon which they are built…There is no theory that predicts these types of galaxies — how you actually go about forming one of these things is completely unknown.” “This invisible, mysterious substance is by far the most dominant aspect of any galaxy. Finding a galaxy without any is completely unexpected; it challenges standard ideas of how galaxies work,” added van Dokkum. However, it is important to note that the discovery of a galaxy without dark matter does not disprove the theory that dark matter exists. In truth, it merely demonstrates that dark matter and galaxies are capable of being separate, which could mean that dark matter is bound to ordinary matter through no force other than gravity. As such, it could actually help scientists refine their theories of dark matter and its role in galaxy formation and evolution. In the meantime, the researchers already have some ideas as to why dark matter is missing from NGC 1052-DF2. On the one hand, it could have been the result of a cataclysmic event, where the birth of a multitude of massive stars swept out all the gas and dark matter. On the other hand, the growth of the nearby massive elliptical galaxy (NGC 1052) billions of years ago could have played a role in this deficiency. However, these theories do not explain how the galaxy formed. To address this, the team is analyzing images that Hubble took of 23 other ultra-diffuse galaxies, searching for more dark-matter-deficient galaxies. Already, they have found three that appear to be similar to NGC 1052-DF2, which could indicate that dark-matter-deficient galaxies are a relatively common occurrence. If these latest findings demonstrate anything, it is that the Universe is like an onion. Just when you think you have it figured out, you peel back an additional layer and find a whole new set of mysteries. They also demonstrate that after 28 years of faithful service, the Hubble Space Telescope is still capable of teaching us new things. Good thing too, seeing as the launch of its successor has been delayed until 2020!
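For readers curious how the team could “infer an independent value of the galaxy’s mass” from ten globular clusters, the generic dynamical (virial-style) estimator has the form shown below; this is the standard textbook relation, not necessarily the exact estimator used in the study:

\[ M_{\mathrm{dyn}} \sim \frac{\sigma^2 R}{G} \]

where \(\sigma\) is the velocity dispersion of the clusters, \(R\) is a characteristic orbital radius, and \(G\) is the gravitational constant. If this dynamical mass comes out comparable to the stellar mass alone, as it did for NGC 1052-DF2, there is essentially no room left for a dark matter halo.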
The first results of the IllustrisTNG Project have been published in three separate studies, and they’re shedding new light on how black holes shape the cosmos, and how galaxies form and grow. The IllustrisTNG Project bills itself as “The next generation of cosmological hydrodynamical simulations.” The Project is an ongoing series of massive hydrodynamic simulations of our Universe. Its goal is to understand the physical processes that drive the formation of galaxies. At the heart of IllustrisTNG is a state-of-the-art numerical model of the Universe, running on one of the most powerful supercomputers in the world: the Hazel Hen machine at the High-Performance Computing Center in Stuttgart, Germany. Hazel Hen is Germany’s fastest computer, and the 19th fastest in the world. Our current cosmological model suggests that the mass-energy density of the Universe is dominated by dark matter and dark energy. Since we can’t observe either of those things, the only way to test this model is to make precise predictions about the structure of the things we can see, such as stars, diffuse gas, and accreting black holes. These visible things are organized into a cosmic web of sheets, filaments, and voids. Inside these are galaxies, which are the basic units of cosmic structure. To test our ideas about galactic structure, we have to make detailed and realistic simulated galaxies, then compare them to what’s real. Astrophysicists in the USA and Germany used IllustrisTNG to create their own universe, which could then be studied in detail. IllustrisTNG correlates very strongly with observations of the real Universe, but allows scientists to look at things that are obscured in our own Universe. This has led to some very interesting results so far, and is helping to answer some big questions in cosmology and astrophysics. How Do Black Holes Affect Galaxies? Ever since we learned that galaxies host supermassive black holes (SMBHs) at their centers, it’s been widely believed that they have a profound influence on the evolution of galaxies, and possibly on their formation. That’s led to the obvious question: How do these SMBHs influence the galaxies that host them? IllustrisTNG set out to answer this, and the paper by Dr. Dylan Nelson at the Max Planck Institute for Astrophysics shows that “the primary driver of galaxy color transition is supermassive black hole feedback in its low-accretion state.” “The only physical entity capable of extinguishing the star formation in our large elliptical galaxies are the supermassive black holes at their centers.” – Dr. Dylan Nelson, Max Planck Institute for Astrophysics Galaxies that are still in their star-forming phase shine brightly in the blue light of their young stars. Then something changes and the star formation ends. After that, the galaxy is dominated by older, red stars, and the galaxy joins a graveyard full of “red and dead” galaxies. As Nelson explains, “The only physical entity capable of extinguishing the star formation in our large elliptical galaxies are the supermassive black holes at their centers.” But how do they do that? Nelson and his colleagues attribute it to supermassive black hole feedback in its low-accretion state. What that means is that as a black hole feeds, it creates a wind, or shock wave, that blows star-forming gas and dust out of the galaxy. This limits the future formation of stars. The existing stars age and turn red, and few new blue stars form. How Do Galaxies Form and How Does Their Structure Develop?
It’s long been thought that large galaxies form when smaller galaxies join up. As the galaxy grows larger, its gravity draws more smaller galaxies into it. During these collisions, galaxies are torn apart. Some stars will be scattered, and will take up residence in a halo around the new, larger galaxy. This should give the newly-created galaxy a faint background glow of stellar light. But this is a prediction, and these pale glows are very hard to observe. “Our predictions can now be systematically checked by observers.” – Dr. Annalisa Pillepich (Max Planck Institute for Astronomy) IllustrisTNG was able to predict more accurately what this glow should look like. This gives astronomers a better idea of what to look for when they try to observe this pale stellar glow in the real Universe. “Our predictions can now be systematically checked by observers,” points out Dr. Annalisa Pillepich (MPIA), who led a further IllustrisTNG study. “This yields a critical test for the theoretical model of hierarchical galaxy formation.” IllustrisTNG is an ongoing series of simulations. So far, there have been three IllustrisTNG runs at different scales: TNG50, TNG100, and TNG300. TNG300 is much larger than TNG50 and allows a larger area to be studied, which reveals clues about large-scale structure. Though TNG50 is much smaller, it has much more precise detail. It gives us a more detailed look at the structural properties of galaxies and the detailed structure of gas around galaxies. TNG100 is somewhere in the middle. IllustrisTNG is not the first cosmological hydrodynamical simulation. Others include Eagle, Horizon-AGN, and IllustrisTNG’s predecessor, Illustris. They have shown how powerful these predictive theoretical models can be. As our computers grow more powerful and our understanding of physics and cosmology grows along with them, these types of simulations will yield greater and more detailed results. What will Curious George grow up to be? Being curious, George will ask a lot of questions. And if he’s lucky, physics will be his destiny, for physics seems to have so many answers. From the biggest to the smallest, that’s its purview. And Delia Perlov and Alex Vilenkin, in their book “Cosmology for the Curious”, aim to answer a great many of those questions. Or at least those questions pertaining to mankind’s place in space. Cosmology is all about space and time. Which means that this book begins by traveling back in time. Traveling to the time of the Greeks. Hundreds of years b.c.e. Apparently the Greek philosophers did a lot of pondering about the smallest of things, which they called atoms, and the largest, which they called planetary epicycles. From this baseline the book very quickly progresses through the traditional growth of knowledge with some choice descriptions. As an example, it proposes energy as nature’s ultimate currency. And it allows the reader to wonder. Wonder why the sky is black at night. And ask questions. As in “why is the speed of light measured to be the same as the Earth travels about the Sun?” Most of the descriptions rely on Newtonian mechanics for explanation, but only in passing, for the book quickly raises Einstein’s field equations, particularly emphasizing inertial frames of reference.
With this, the reader is accorded a pleasant view of Lorentz transforms, a somewhat abstract view of the Sun being flung out of the solar system by a very large golf club, and a realization of how the GPS navigation system incorporates gravitational time dilation. Still, all this is simply the cosmological baseline for the reader. Now the neat thing about cosmology is that there is simply no first-hand observation. Most everything of interest happened a long time ago and in a somewhat different relative location. And this is the book’s next and most rewarding destination. Through many arguments or thought experiments, it associates the cosmic microwave background with redshifts and the changing spatial dimensions. Later, postulated dark matter and dark energy refocus the reader’s attention on the very beginning of the universe in a big bang. Or perhaps a multiverse of many shapes and various physical laws. Which of course leads to considerations about what’s next. How will our universe continue? Will it go to a quiet heat death or will we be gobbled up by another bubble universe? We can’t determine this from our vantage point on Earth. But this book does provide its own vantage point. Helping this book along are a number of pleasant additions. For one, often when an accomplished researcher is mentioned, there’s an accompanying, quite complimentary photograph. And equations are liberally spread throughout as if teasing the reader to explore more. But the book has very little math. And best of all are the questions at the end of each chapter. Now these questions aren’t your typical textbook questions. For example, consider “Inflation is almost certainly eternal to the future. Is it eternal to the past too? Why/why not?” Isn’t this a great question? And one that you really can’t get wrong. Which of course begs the question “Why aren’t you as curious as George?” There’s a whole universe out there waiting for us to explore and understand. Let’s not take it for granted. Let’s satisfy our curiosity perhaps by reading the marvellous book “Cosmology for the Curious” by Delia Perlov and Alex Vilenkin. After all, you don’t want to be upstaged by George, do you? Ever since Galileo pointed his telescope at Jupiter and saw moons in orbit around that planet, we began to realize we don’t occupy a central, important place in the Universe. In 2013, a study showed that we may be further out in the boondocks than we imagined. Now, a new study confirms it: we live in a void in the filamental structure of the Universe, a void that is bigger than we thought. In 2013, a study by University of Wisconsin–Madison astronomer Amy Barger and her student Ryan Keenan showed that our Milky Way galaxy is situated in a large void in the cosmic structure. The void contains far fewer galaxies, stars, and planets than we thought. Now, a new study from University of Wisconsin student Ben Hoscheit confirms it, and at the same time eases some of the tension between different measurements of the Hubble Constant. The void has a name; it’s called the KBC void for Keenan, Barger and the University of Hawaii’s Lennox Cowie. With a radius of about 1 billion light years, the KBC void is seven times larger than the average void, and it is the largest void we know of. The large-scale structure of the Universe consists of filaments and clusters of normal matter separated by voids, where there is very little matter.
It’s been described as “Swiss cheese-like.” The filaments themselves are made up of galaxy clusters and super-clusters, which are themselves made up of stars, gas, dust and planets. Finding out that we live in a void is interesting on its own, but it’s the implications it has for Hubble’s Constant that are even more interesting. Hubble’s Constant is the rate at which objects move away from each other due to the expansion of the Universe. The problem with Hubble’s Constant is that you get a different result depending on how you measure it. Obviously, this is a problem. “No matter what technique you use, you should get the same value for the expansion rate of the universe today,” explains Ben Hoscheit, the Wisconsin student who presented his analysis of the KBC void on June 6th at a meeting of the American Astronomical Society. “Fortunately, living in a void helps resolve this tension.” There are a couple of ways of measuring the expansion rate of the Universe, known as Hubble’s Constant. One way is to use what are known as “standard candles.” Supernovae are used as standard candles because their luminosity is so well understood. By measuring their luminosity, we can determine how far away the galaxy they reside in is. Another way is by measuring the CMB, the Cosmic Microwave Background. The CMB is the leftover energy imprint from the Big Bang, and studying it tells us the state of expansion in the Universe. The two methods can be compared. The standard candle approach measures more local distances, while the CMB approach measures large-scale distances. So how does living in a void help resolve the two? Measurements from inside a void will be affected by the much larger amount of matter outside the void. The gravitational pull of all that matter will affect the measurements taken with the standard candle method. But that same matter, and its gravitational pull, will have no effect on the CMB method of measurement. “One always wants to find consistency, or else there is a problem somewhere that needs to be resolved.” – Amy Barger, University of Hawaii, Dept. of Physics and Astronomy Hoscheit’s new analysis, according to Barger, the author of the 2013 study, shows that Keenan’s first estimations of the KBC void, which is shaped like a sphere with a shell of increasing thickness made up of galaxies, stars and other matter, are not ruled out by other observational constraints. “It is often really hard to find consistent solutions between many different observations,” says Barger, an observational cosmologist who also holds an affiliate graduate appointment at the University of Hawaii’s Department of Physics and Astronomy. “What Ben has shown is that the density profile that Keenan measured is consistent with cosmological observables. One always wants to find consistency, or else there is a problem somewhere that needs to be resolved.” Whenever we talk about the expanding Universe, everyone wants to know how this is going to end. Sure, they say, the fact that most of the galaxies we can see are speeding away from us in all directions is really interesting. Sure, they say, the Big Bang makes sense, in that everything was closer together billions of years ago. But how does it end? Does this go on forever? Do galaxies eventually slow down, come to a stop, and then hurtle back together in a Big Crunch? Will we get a non-stop cycle of Big Bangs, forever and ever?
We’ve done a bunch of articles on many different aspects of this question, and the current conclusion astronomers have reached is that because the Universe is flat, it’s never going to collapse in on itself and start another Big Bang. But wait, what does it mean to say that the Universe is “flat”? Why is that important, and how do we even know? Before we can get started talking about the flatness of the Universe, we need to talk about flatness in general. What does it mean to say that something is flat? If you’re in a square room and walk around the corners, you’ll return to your starting point having made 4 90-degree turns. You can say that your room is flat. This is Euclidean geometry. But now make the same journey on the surface of the Earth: start at the equator, make a 90-degree turn, walk up to the North Pole, make another 90-degree turn, return to the equator, make another 90-degree turn and return to your starting point. In one situation, you made 4 turns to return to your starting point; in another situation it only took 3. That’s because the topology of the surface you were walking on decided what happens when you take a 90-degree turn. You can imagine an even more extreme example, where you’re walking around inside a crater, and it takes more than 4 turns to return to your starting point. Another analogy, of course, is the idea of parallel lines. If you fire off two parallel lines at the North Pole, they move away from each other, following the topology of the Earth, and then come back together. Got that? Great. Now, what about the Universe itself? You can imagine that same analogy. Imagine flying out into space on a rocket for billions of light-years, performing 90-degree maneuvers and returning to your starting point. You can’t do it in 3, or 5; you need 4, which means that the topology of the Universe is flat. Which is totally intuitive, right? I mean, that would be your assumption. But astronomers were skeptical and needed to know for certain, and so they set out to test this assumption. In order to prove the flatness of the Universe, you would need to travel a long way. And so astronomers use the largest possible observation they can make: the Cosmic Microwave Background Radiation, the afterglow of the Big Bang, visible in all directions as a red-shifted, fading moment when the Universe became transparent about 380,000 years after the Big Bang. When this radiation was released, the entire Universe was approximately 2,700 °C. This was the moment when it was cool enough that photons were finally free to roam across the Universe. The expansion of the Universe stretched these photons out over their 13.8 billion year journey, shifting them down into the microwave spectrum, just 2.7 degrees above absolute zero. With the most sensitive space-based telescopes they have available, astronomers are able to detect tiny variations in the temperature of this background radiation. And here’s the part that blows my mind every time I think about it. These tiny temperature variations correspond to the largest-scale structures of the observable Universe. A region that was a fraction of a degree warmer became a vast galaxy cluster, hundreds of millions of light-years across. The Cosmic Microwave Background Radiation just gives and gives, and when it comes to figuring out the topology of the Universe, it has the answer we need. If the Universe was curved in any way, these temperature variations would appear distorted compared to the actual sizes of the structures we see today. But they’re not.
To the best of its ability, ESA’s Planck space telescope can’t detect any distortion at all. The Universe is flat.

Well, that’s not exactly true. According to the best measurements astronomers have ever been able to make, the curvature of the Universe falls within a range of error bars that indicates it’s flat. Future observations by some super-Planck telescope could show a slight curvature, but for now, the best measurements out there say… flat.

We say that the Universe is flat, and this means that parallel lines will always remain parallel. 90-degree turns behave as true 90-degree turns, and everything makes sense. But what are the implications for the entire Universe? What does this tell us? Unfortunately, the biggest thing is what it doesn’t tell us. We still don’t know if the Universe is finite or infinite. If we could measure its curvature, we could know that we’re in a finite Universe, and get a sense of what its actual true size is, out beyond the observable Universe we can measure. We know that the volume of the Universe is at least 100 times more than we can observe. At least. If the flatness error bars get brought down, the minimum size of the Universe goes up. And remember, an infinite Universe is still on the table.

Flatness also causes a problem for the original Big Bang theory, requiring the development of a theory like inflation. Since the Universe is flat now, it must have been flat in the past, when the Universe was an incredibly dense singularity. And for it to maintain this level of flatness over 13.8 billion years of expansion is kind of amazing. In fact, astronomers estimate that the Universe must have been flat to within 1 part in 10^57. Which seems like an insane coincidence. The development of inflation, however, solves this, by expanding the Universe an incomprehensible amount moments after the Big Bang. Pre- and post-inflation Universes can have vastly different levels of curvature.

In the olden days, cosmologists used to say that the flatness of the Universe had implications for its future. If the Universe was curved such that you could complete a full journey with fewer than 4 turns, it was closed and destined to collapse in on itself. If it took more than 4 turns, it was open and destined to expand forever. Well, that doesn’t really matter any more. In 1998, astronomers discovered dark energy, a mysterious force accelerating the expansion of the Universe. Whether the Universe is open, closed or flat, it’s going to keep on expanding. In fact, that expansion is going to accelerate, forever.

I hope this gives you a little more understanding of what cosmologists mean when they say that the Universe is flat. And how do we know it’s flat? Very precise measurements of the Cosmic Microwave Background Radiation. Is there anything that all-pervasive relic of the early Universe can’t do?
Climate change could outpace mammals: study

Falling behind

As the climate changes this century, the ranges of most mammal species will shrink - in many cases because animals won't be able to get to areas suitable for them, says new research. And while some animals will do just fine or even better than before, certain animals could face catastrophic losses of survivable habitat. Most at risk are primates, which will likely lose 75 per cent of their range because of both inhospitable climate and the inability to get to liveable places.

"We could be underestimating the vulnerability of some species to climate change," says Carrie Schloss, an ecologist at the University of Washington, Seattle. "There have been a lot of projections done on species' ranges and where they are projected to be in the future based on where the climate will be suitable," she says. "But most don't tell you whether species can get from where they are today to where the climate will be suitable."

To make more accurate predictions of how mammals might be expected to fare in the coming decades, Schloss and colleagues collected information on 493 species of mammals whose future ranges had already been predicted through about the year 2100. Then, the researchers used known relationships between how big an animal is and what it eats to estimate how far a given species could be expected to move from generation to generation.

Previous studies have shown that climate change will expand the ranges where some species will be able to live. But when Schloss' team factored in whether animals could actually get to these newly suitable habitats, they found that true ranges will actually shrink in nearly 60 per cent of those cases. Range size will shrink by an average of nearly 40 per cent.

Winners and losers

Animals in tropical regions face the biggest risks, the researchers report today in the Proceedings of the National Academy of Sciences, possibly because species there are extra-sensitive to even small changes in climate. Across the moist subtropical regions of the western hemisphere, for example, nearly 15 per cent of mammals will likely be left behind by climate change. That number jumps to nearly 40 per cent in some areas of the Amazon. In those places, species that can only migrate about one kilometre each year would need to move eight times faster to keep up with climate-induced shifts in their ideal rangelands.

Other areas that are likely to experience climate changes more extreme than many species will be able to handle include the Yucatan Peninsula, the Appalachian Mountains and the southeastern United States. Primates are in particular trouble, as are moles and shrews. Animals expected to be able to keep up with climate change include carnivores, armadillos, sloths, coyotes, elk and moose. Many of these animals can move large enough distances to get them to where they'll need to go.

The new study should help researchers focus conservation efforts by, for example, figuring out where to create corridors for animals that will need to migrate in the face of climate change, says David Ackerly, an ecologist at the University of California, Berkeley. "Unfortunately, there is not a lot of good news in analyses of climate impacts," he says. "Rapid change will be disruptive. The question is: Where will impacts be worse and what can we do?"
Recall that the general form for an equation in the first degree in one variable is ax + b = 0. The general form for first-degree equations in two variables is ax + by + c = 0.

It is interesting and often useful to note what happens graphically when equations differ, in certain ways, from the general form. With this information, we know in advance certain facts concerning the equation in question.

LINES PARALLEL TO THE AXES

If in a linear equation the y term is missing, as in 2x - 15 = 0, the equation represents a line parallel to the Y axis and 7 1/2 units from it. Similarly, an equation such as 4y - 9 = 0, which has no x term, represents a line parallel to the X axis and 2 1/4 units from it. (See fig. 12-a.)

The fact that one of the two variables does not appear in an equation means that there are no limitations on the values the missing variable can assume. When a variable does not appear, it can assume any value from zero to plus or minus infinity. This can happen only if the line represented by the equation lies parallel to the axis of the missing variable.

Lines Passing Through the Origin

A linear equation that has no constant term, such as 3x - 2y = 0, represents a line passing through the origin. This fact is obvious since x = 0, y = 0 satisfies any equation not having a constant term. (See fig. 12-a.)

Lines Parallel to Each Other

An equation in which all possible terms are present, such as 3x - 2y = 6, represents a line that is not parallel to an axis and does not pass through the origin. Equations that are exactly alike, except for the constant terms, represent parallel lines. As shown in figure 12-8, the lines represented by the equations 3x - 2y = -18 and 3x - 2y = 6 are parallel. Parallel lines have the same slope. Changing the constant term moves a line away from or toward the origin while its various positions remain parallel to one another. Notice in figure 12-8 that the line 3x - 2y = 6 lies closer to the origin than 3x - 2y = -18. This is revealed at sight for any pair of lines by comparing their constant terms: the one which has the constant term of greater absolute value will lie farther from the origin. In this case, 3x - 2y = -18 will be farther from the origin, since |-18| > |6|.

The fact that lines are parallel is indicated by the result when we try to solve two equations such as 3x - 2y = -18 and 3x - 2y = 6 simultaneously. Subtraction eliminates both x and y immediately. If both variables disappear, we cannot find values for them such that both equations are satisfied at the same time. This means that there is no solution. No solution implies that there is no point of intersection for the straight lines represented by the equations. Lines that do not intersect in the finite plane are parallel.
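The parallel-line test described in this section (identical x and y coefficients but different constant terms) is easy to mechanize. The following sketch is written for this discussion rather than taken from it; it classifies a pair of lines of the form ax + by = c by comparing coefficients, using elimination (Cramer's rule) when an intersection exists.

```python
def classify_lines(a1, b1, c1, a2, b2, c2):
    """Classify the lines a1*x + b1*y = c1 and a2*x + b2*y = c2."""
    det = a1 * b2 - a2 * b1          # zero exactly when the slopes are equal
    if det != 0:
        # Unique intersection point, by elimination (Cramer's rule).
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        return f"intersect at ({x}, {y})"
    # Same slope: either the same line or distinct parallel lines.
    if a1 * c2 == a2 * c1 and b1 * c2 == b2 * c1:
        return "same line"
    return "parallel (no solution: elimination removes both variables)"

# The pair discussed above: 3x - 2y = -18 and 3x - 2y = 6.
print(classify_lines(3, -2, -18, 3, -2, 6))   # parallel
print(classify_lines(3, -2, -18, 1, 1, 3))    # intersect
```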
The chemical formula for tin(IV) nitrate, also known as stannic nitrate, is Sn(NO3)4. It is synthesized by the reaction of tin with 70 percent nitric acid.

In chemistry, the two types of compounds are organic and inorganic. Organic compounds involve the formation of substances containing the element carbon, while inorganic compounds are made up of two or more elements, other than carbon, that are chemically bonded together. Inorganic compounds are categorized into two groups: ionic compounds and molecular compounds. Ionic compounds consist of a metal and a non-metal, while molecular compounds are composed of two non-metals.

Ionic compounds consist of a positively charged metallic ion, called a "cation," and a negatively charged non-metallic ion, called an "anion." Some transition metals may exist in various ionic forms. Different ions of the same element are distinguished using a Roman numeral, which also indicates the oxidation state and charge. The element tin forms the Sn2+ and Sn4+ ions, named tin(II) and tin(IV), respectively. The Latin equivalents of these two ions are stannous for tin(II) and stannic for tin(IV).

When two or more covalently bonded atoms carry a net charge, a polyatomic ion is formed. One of the most commonly occurring polyatomic anions is nitrate, with the chemical formula NO3-. The ionic compound tin(IV) nitrate consists of the stannic cation and the polyatomic nitrate anion.
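The subscripts in a formula like Sn(NO3)4 follow from balancing the cation and anion charges. As an illustrative sketch (the helper below is invented for this explanation, not a standard chemistry routine), the smallest neutral combination can be found with a least-common-multiple calculation:

```python
from math import gcd

def neutral_ratio(cation_charge, anion_charge):
    """Smallest counts of cation and anion that give a neutral compound."""
    lcm = abs(cation_charge * anion_charge) // gcd(abs(cation_charge), abs(anion_charge))
    return lcm // abs(cation_charge), lcm // abs(anion_charge)

# Tin(IV) is Sn4+ (charge +4); nitrate is NO3- (charge -1).
n_sn, n_no3 = neutral_ratio(+4, -1)
print(f"Sn{n_sn if n_sn > 1 else ''}(NO3){n_no3}")   # prints: Sn(NO3)4
```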
- the standard unit of electrical resistance in the International System of Units (SI), formally defined to be the electrical resistance between two points of a conductor when a constant potential difference applied between these points produces in this conductor a current of one ampere. The resistance in ohms is numerically equal to the magnitude of the potential difference. Symbol: Ω

Origin of ohm
- Ge·org Si·mon [gey-awrk zee-mawn] /geɪˈɔrk ˈzi mɔn/, 1789–1854, German physicist.

Examples from the Web for ohm
- Wheatstone by his knowledge of Ohm's law and the electro-magnet was probably able to enlighten him. (Heroes of the Telegraph)
- We must therefore have a standard for the ohm, which is the measure of resistance. (Electricity for Boys, J. S. Zerbe)
- In applying this illustration to the voltaic cell, we make use of Ohm's law.
- One of these methods depends upon an application of Ohm's law.
- This furlough was perhaps the most important event in Ohm's life. (Makers of Electricity)

- the derived SI unit of electrical resistance; the resistance between two points on a conductor when a constant potential difference of 1 volt between them produces a current of 1 ampere. Symbol: Ω
- Georg Simon (ˈɡeːɔrk ˈziːmɔn), 1789–1854, German physicist, who formulated the law named after him

Word Origin and History for ohm
unit of electrical resistance, 1867, in recognition of German physicist Georg S. Ohm (1789–1854), who determined the law of the flow of electricity. Originally proposed as ohma (1861) as a unit of voltage. Related: ohmage; ohmic; ohmeter.

- A unit of electrical resistance equal to that of a conductor in which a current of one ampere is produced by a potential of one volt across its terminals.
- The SI derived unit used to measure the electrical resistance of a material or an electrical device. One ohm is equal to the resistance of a conductor through which a current of one ampere flows when a potential difference of one volt is applied to it.
- German physicist who discovered the relationship between voltage, current, and resistance in an electrical circuit, now known as Ohm's law. The ohm unit of electrical resistance is named for him.

The unit of electrical resistance, named after the nineteenth-century German physicist Georg Ohm.
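The relationship these definitions describe, Ohm's law, reduces to one line of arithmetic. A minimal sketch with illustrative values:

```python
# Ohm's law: V = I * R, so R = V / I.
# One ohm is the resistance when 1 volt drives a current of 1 ampere.
volts = 1.0
amperes = 1.0
resistance_ohms = volts / amperes
print(f"{resistance_ohms} ohm")          # 1.0 ohm

# The same rule at other values: 12 V across 4 ohms draws 3 A.
print(f"current = {12.0 / 4.0} A")
```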
If you’ve ever watched a drop of water form into a bead or a water strider scoot across a pond, you are familiar with a property of liquids called surface tension. The interior molecules of a liquid are pushed and pulled by neighboring molecules in every direction, but those at the surface have only half as many neighbors to interact with. The resulting density change of molecules arranged near the surface causes a drop of water to form a sphere and prevents the insect from sinking even though its density is greater than the water’s.

The surface tension of liquids is well-established, says Anand Jagota, but the same property in solid materials has seemed a moot point. Technically, it exists, but its force is usually too weak to deform a solid by more than an angstrom. Jagota, professor of chemical engineering and director of Lehigh’s bioengineering program, has pondered for more than a decade the possibility that some solids, especially soft biomaterials and geometrically altered materials, might also exhibit surface tension. Over the last two years, at the Leibniz Institute for New Materials (INM) in Saarbruecken, Germany, and then at Cornell University, Jagota and his collaborators have experimented with two classes of solids: rubber-like elastomers and a more compliant gelatin similar in stiffness to human tissue.

A faithful, but attenuated replica

The researchers patterned the elastomer with ripples measuring microns in depth and then covered it with a gel and exposed it to air. Dark-field optical microscopy of a cross-section of the elastomer and gel revealed that the gel faithfully replicated the surface topography of the elastomer. When the researchers removed the gel from the elastomer, however, the gel flattened almost instantaneously. It continued to match the peaks and valleys of the elastomer’s ripples, but with significantly diminished features.

“We wondered if the gel would be an exact replica of the elastomer when we removed it,” he says. “Instead, we saw an attenuated replica. The gel had filled all the undulations in the elastomer, but as soon as we removed it, a pent-up force acted immediately to flatten it.”

The group reported their results in Physical Review E in an article titled “Surface-tension-induced flattening of a nearly plane elastic solid.” Jagota’s coauthors were Animangsu Ghatak ’03 Ph.D., associate professor of chemical engineering at the Indian Institute of Technology at Kanpur, and Dadhichi Paretkar, formerly of INM and now a postdoctoral research associate at Lehigh. “Our results show that surface tension of soft solids drives significant deformation, and that the latter can be used to determine the former,” the researchers wrote.

“A basic mechanical force”

The discovery, says Jagota, should motivate scientists and engineers to rethink many of their assumptions. “It has generally been agreed that surface tension in solids was felt only at the atomic scale. We have shown that surface tension in these compliant solids is a real thing and that it manifests itself at relatively large scales.

“As a basic mechanical force, surface tension in compliant solids will play a role in all mechanical phenomena involving compliant materials, especially biomaterials. How do things fracture, stick, slide, have friction, deform? What are the elastic forces that resist a cell when it spreads on a gel? How strongly do dust particles stick to the inside of a lung?
“We’re going to have to rethink many of the questions involving compliant materials.”

After experimenting on the elastomer and gel, Jagota asked a second question: Could surface tension be observed in a stiff material made compliant by geometry, such as a thin sheet of paper?

From an elastomeric material, Jagota and his colleagues made an annulus, or ring-shaped object. Across its hole, they suspended a thin film of the same material and deposited a 1-mm drop of water on top of the film. As expected, the drop beaded on the elastomer.

“When we placed the drop of water on the bottom of the thin film,” says Jagota, “the drop bulged up. This was counterintuitive, as it bulged in an opposite direction to gravitational pull.

“We expected to see a sagging as the water drop beaded. Instead, it appeared to defy gravity. The explanation for this is surface tension.”

The researchers, Jagota says, had replicated the phenomenon of Neumann’s triangle, which describes the surface tensions of three immiscible liquids—such as an oil, water and a hydrophobic liquid that doesn’t mix with oil—that are in equilibrium. “In our case, we replaced one of the three liquids with a solid—the thin film elastomer—which behaved like the third liquid in Neumann’s triangle.”

The group published their results in the Proceedings of the National Academy of Sciences. Their article, titled “Solid surface tension measured by a liquid drop under a solid film,” was coauthored by Jagota; Nichole Nadermann, former research scientist at Lehigh and now a postdoctoral fellow at the National Institute for Standards and Technology; and Chung-Yuen Hui, professor of mechanical and aerospace engineering at Cornell.

The research was supported by the Division of Materials Science and Engineering in the Office of Basic Energy Sciences of the U.S. Department of Energy.
It’s been almost two centuries since a possible link between large earthquakes and nearby volcanic eruptions was first proposed by none other than Charles Darwin, who compiled accounts of increased activity at a number of Andean volcanoes in the wake of the 1835 Concepcion earthquake. In the decades since, however, firm evidence of earthquakes triggering volcanic eruptions has remained somewhat elusive; but a new paper by Sebastian Watt and colleagues at the University of Oxford takes a fresh look at Darwin’s old seismic stomping grounds, and claims that spikes in volcanic activity in the Andes can indeed be observed in the months following large earthquakes.

The study focusses on the southern Andean subduction zone off the coast of Chile. The earthquakes result from the eastward subduction of the Nazca plate beneath South America; here, water driven from the downgoing slab triggers melting of the surrounding mantle and generates a volcanic arc (the locations of volcanoes known to be active in the last 10,000 years come courtesy of the Smithsonian Global Volcanism Program). Watt et al. have compiled records of seismic and volcanic activity in this arc stretching as far back as the 1500s, including 16 earthquakes with a magnitude of more than 7.5, and more than 250 eruptions from 25 volcanoes. However, the record prior to 1850 is more poorly documented and likely to be incomplete.

Charting the number of eruptions that initiated in each year since 1850 against the five large earthquakes that occurred in this period shows that 2 of these earthquakes – in 1906 and 1960 – appear to have been associated with spikes in volcanic activity, with six or seven volcanoes starting to erupt in the following 6-12 months. Even though the eruption record as a whole is fairly variable and ‘spiky’, these peaks still seem a little unusual, standing out from the normal variation between 0 and 4 (and, very occasionally, 5) eruptive events a year.

1906 and 1960 stand out statistically too: in general, the record follows an exponential distribution, with many more years with 0 or 1 eruptions than years with 3 or 4 eruptions (the average rate is 1.32 eruptions per year). The best-fit exponential distribution for the whole record predicts that there should be only one year with more than 6 eruptive events every 500 years or so, whereas this record not only has two such years in the last 150, but these years also both immediately follow large subduction zone earthquakes – an association that should occur once every 2,500 years or so if there was no connection between the earthquake and the period of high volcanic activity. So the association does seem to be more than a coincidence, especially since the less reliable record between 1550 and 1850 indicates at least one more year – 1751 – where 6 eruptions occurred in the year following a subduction zone earthquake.*

It’s important to note that this study potentially establishes a statistical, rather than a phenomenological, link: that is, you can’t point at any of the 7 volcanoes that erupted in the year after the 1960 earthquake, and say “that one was triggered by the earthquake, so was that one, but that one would have erupted anyway”. This is because all of the volcanoes involved were probably already on the verge of erupting, and the earthquake somehow just provided a little extra push over the threshold.
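To give a feel for the kind of calculation behind the "once every 500 years" figure, here is a rough sketch. Two caveats: Watt et al. fitted an exponential distribution to the eruption record, whereas this sketch assumes a Poisson model for yearly eruption counts (a common default for count data), so its numbers will not match the paper's exactly; the only input taken from the record described above is the 1.32 eruptions-per-year rate.

```python
from scipy.stats import poisson

rate = 1.32   # average eruption onsets per year, from the record described above

# Probability of a year with more than 6 eruption onsets, under a Poisson model.
p_more_than_6 = 1.0 - poisson.cdf(6, rate)
print(f"P(>6 eruptions in a year) = {p_more_than_6:.5f}")
print(f"expected once every ~{1.0 / p_more_than_6:.0f} years")
```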
This need for volcanoes to already be ‘primed’ to erupt before they can be triggered by seismic activity probably explains why not every great earthquake on the Chilean subduction zone is followed by excess volcanic activity (as is the case in 1928, 1939 and 1985). For there to be a measurable effect you need a reasonable number of volcanoes with full magma chambers, which depends on factors like the rate of magma generation and ascent in the subsurface, and (possibly) the time since the last big earthquake.

However, this study does provide at least one clue about the actual processes behind this linkage, because the effect seems to stretch at least 500 km from the epicentre of the triggering earthquake, far beyond the distance where permanent changes in stress associated with deformation at the plate boundary are going to have a measurable effect on a volcanic magma chamber. This suggests that it is the passage of seismic waves generated by the earthquake that is somehow pushing the volcanoes over the brink, even if it sometimes takes a few months for that effect to be felt at the surface. Right now no-one really knows exactly what mechanism is involved: Watt et al. discuss a number of possibilities, including seismic energy forcing gas bubbles out of the melt, or dislodging partially solidified, mushy chunks of melt from the walls of the magma chamber. As it stands though, whilst large regional studies like these are needed to firmly establish a correlation between earthquakes and volcanoes, only detailed investigations of individual volcanoes are going to provide insight into the physical processes that may cause earthquake triggering.

*Rather ironically, the 1835 earthquake which prompted Darwin’s initial musings does not show up as having a particularly significant effect in this study, but this may be because the observations that he reported – which were mainly second-hand accounts, and therefore difficult to verify – were deemed not to be reliable enough to be included in the eruption catalogue.

Watt, S., D. Pyle & T. Mather (2008). The influence of great earthquakes on volcanic eruption rate along the Chilean subduction zone. Earth and Planetary Science Letters. DOI: 10.1016/j.epsl.2008.11.005
Jules Henri Giffard's Steam Airship

The Wright brothers may be the most famous people in the history of aviation for the first aeroplane flight in 1903, but the first ever powered and controlled flights were carried out in lighter-than-air craft before either of the Wright brothers was even born. Jules Henri Giffard was a Frenchman who made his fortune by inventing the steam injector (a device to prevent steam engine boilers running out of water whilst they were stationary, patented in 1858), but before that, in 1852, he built the world's first passenger airship.

Other people had previously built and flown balloons filled with hydrogen, but in order to make the jump from balloon to true airship there needed to be both a source of propulsion and a means of changing direction, so that there was the control to choose to fly where one wished. The first airships were known as "dirigible balloons", from the French "dirigeable", meaning "steerable". Later they were simply referred to as "dirigibles".

In 1850 Giffard helped fellow French engineer Jullien to build an airship with a propeller driven by clockwork, but it was to be Giffard's knowledge of steam power that would place his own airship in the history books, and in 1851 he patented the "application of steam in the airship travel". He managed to build a small and light steam engine weighing just 250 pounds, and despite the added weight of the boiler and coke bringing it to over 400 pounds, it was still light enough for his hydrogen-filled balloon to lift. The engine drove a large (3.3 metre) rear-facing three-bladed propeller, and although it produced only 2,200 watts (1) (three horsepower), it would prove to be enough to demonstrate that controlled flight was possible. The funnel pointed downwards and the exhaust stream was mixed with the combustion gases to try to prevent sparks which might ignite the highly flammable hydrogen gas in the balloon. The balloon itself was 43 metres (144 foot) long and pointed at both ends. Below it at the rear was mounted a sail-like triangular vertical rudder.

The airship successfully flew on the 24th September 1852, launching from the Paris Hippodrome and flying 27 km (17 miles) to Elancourt, near Trappes. Because the small engine was not very powerful, it could not overcome the prevailing winds to allow Giffard to make the return flight (the top speed of Giffard's airship was just six miles per hour). However, he did manage to turn the airship in slow circles, proving that in calm conditions controlled flight was possible.

(1) To put that in context, that is about the same as a modern steam iron, and less than a fast-boil kettle (3,000 watts).
Correlational Evidence
- When variable X increases, variable Y also increases. So, does X increase Y? Or does Y increase X? Alternatively, does Z increase both X and Y?

Experimental Research
- Experimental research involves a direct assessment of how one variable influences another. This allows the establishment of causality.
- All extraneous variables must be held constant while a single variable is manipulated and the effect measured.

Definition of variables:
- IV = independent variable (the variable manipulated); DV = dependent variable (the variable measured).

Experimental Designs
- Pre-Experimental
- Quasi-Experimental
- True-Experimental

Key:
- R = random assignment to groups
- O1, O2, … = observations of group x (recording of DV)
- Oa, Ob, … = observations of group y (recording of DV)
- T = treatment (IV)
- P = placebo (IV)

Pre-Experimental Designs
- One-Shot Study
- One-Group Pre-test Post-test
- Static Group Comparison
- Time Series

True-Experimental Designs
- Randomised Group Comparison
- Pre-test Post-test Randomised Group Comparison

Scientific Reasoning (Logic)
- General Theory / Specific Observation
- Formation of a theory grounded in your own observations (induction)
- Confirmation of a theory from your own observations (deduction)

Quantitative versus Qualitative
- Quantitative Research Strategy
- Qualitative Research Strategy

Choice of Research Strategy… Based on:
- Epistemology (How should we be attempting to assess knowledge?)
- Ontology (Does the data exist in a tangible or an intangible form?)
- Study in the natural sciences often calls for a quantitative strategy; study in the social sciences often calls for a qualitative strategy.

Selected Reading
- Thomas J. R. & Nelson J. K. (2005) Research Methods in Physical Activity, 5th edition. Champaign, Illinois: Human Kinetics.
- Berg K. E. & Latin R. W. (2008) Essentials of Research Methods in Health, Physical Education, Exercise Science, and Recreation, 3rd edition. Maryland: Lippincott Williams & Wilkins.
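As a small illustration of the "R" (random assignment) in the design notation above, here is a sketch invented for this summary; it randomly splits a participant list into treatment and placebo groups:

```python
import random

def randomly_assign(participants, seed=None):
    """Randomly split participants into treatment (T) and placebo (P) groups."""
    rng = random.Random(seed)
    shuffled = participants[:]        # copy, so the input list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]   # treatment, placebo

treatment, placebo = randomly_assign(list(range(1, 21)), seed=42)
print("T:", treatment)
print("P:", placebo)
```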
First some definitions:

Aerobic running: Aerobic running occurs when you have enough oxygen in your body to supply energy to your muscles to complete your exercise. When your body uses oxygen as a source of energy, it produces the waste products carbon dioxide and water, which you expel simply by breathing.

Anaerobic running: Anaerobic running occurs when there is not enough oxygen in your body to supply enough energy to complete your exercise. This is typically seen in short, powerful races which last less than 90 seconds, or an all-out sprint to the finish. The waste product is lactic acid, which is very difficult for your body to break down and causes extreme fatigue.

Why should endurance athletes, who rely on aerobic energy (oxygen) resources for their sport, still condition themselves with anaerobic exercise? So many reasons, so I'll highlight just a few.

- Increase speed and power characteristics in muscle fibers
By developing the fast-twitch muscle fibers in our bodies, we improve reaction time and therefore achieve a faster turnover in our strides. The stride is more powerful and allows the body to be propelled forward by energy in the body AND the force of impact. By including the power from the force of impact, more free energy is stored in the body, allowing for greater endurance.

- Develop a higher lactate threshold
Runners who include anaerobic training in their workouts will develop stronger muscles with a higher lactate threshold, and therefore be more resistant to fatigue.

- Avoid injury
Runners with stronger muscles, in which multiple types of fibers have been developed and trained, are less likely to get injured. Also, when the muscles are properly trained to resist fatigue, it is more difficult to overtrain them by asking them to do work they are not capable of doing.

I want my athletes and clients to be the best versions of themselves and be capable of any work presented to them. By exposing their muscles to exercises which target multiple energy types, I can ensure that they are properly (and SAFELY) developing.
Whether you're a diehard recycler who shops with canvas bags and keeps a compost bin in the corner of your backyard, or a busy parent looking for some quick tips on sorting glass from plastic, it's easy to get your family on the path to greener living. But the best earth-friendly practices require the cooperation of everyone in the household. So, how do parents get kids to reduce, reuse, and recycle and embrace the other basics of environmental responsibility? As with most good habits, the best way to teach them is to be a good role model yourself. By showing that you care about and respect the environment, your kids will do the same.

It's a Family Affair

Here are some suggestions you can try as a family:

- Teach respect for the outdoors. This can start in your own backyard. Help kids plant a garden or tree. Set up bird feeders, a birdbath, and birdhouses. Kids can clean out and refill the bath daily, and clean up seed debris around feeders and restock them. On a larger scale, you can plan family vacations that focus on the great outdoors. Maybe a summer trip to the Grand Canyon or Yellowstone Park appeals to your adventurous clan. Shorter trips might include a day at a state or national park. Even a couple of days at the beach can offer plenty of opportunities for you to point out and discuss the plants and animals you see and why it's important to protect their habitats.
- Recycle. Recycling is easy, and in some communities, mandatory. Check with your local recycling office and be sure you know all the rules. Some communities allow co-mingling — all recyclables can be placed in one container — while others require sorting into separate containers. You may need bins for each type of recyclable: one for plastic, one for glass, one for paper, and one for cans. Kids can sort (and rinse, if necessary) items, place them in the correct bins, and take the containers out to the curb for collection. After the bins have been emptied, ask your kids to rinse them out (if they're dirty) and bring them back into the house or garage.
- Drink your own water. Bottled water is expensive and, experts say, not any cleaner or safer than tap water. In fact, much bottled water is actually tap water that has been filtered. The water that comes out of home spigots in the United States is extremely safe. Municipal water supplies are monitored constantly and the test results made public. And unless they're recycled, the plastic bottles — most commonly made from polyethylene terephthalate (PET), which is derived from crude oil — can end up in landfills. So have your kids tote water from the tap (you can add a filter to improve its taste) in reusable bottles.
- Clean green. Many natural products can replace commercial — and possibly hazardous — cleaning preparations. Just a few examples: to deodorize carpets, sprinkle them with baking soda, wait 15 minutes and then vacuum; use vinegar and baking soda for everything from oven cleaning and drain clearing to stain removal and metal polishing. Lots of websites offer green cleaning tips, and many stores carry pre-made nontoxic cleaners for those who don't want to make their own.
- Lend a hand. Many communities sponsor green activities, like pitching in to help clean up a local park or playground. Maybe the area around your child's school could use sprucing up.

Getting Kids to "Go Green"

In their own day-to-day activities, encourage kids to find ways to limit waste, cut down on electricity, avoid unnecessary purchases, and reuse items that they already have.
Here's how:

- Conserve energy. Remind kids to turn off lights when they're not in use, power down computers, turn off the TV when nobody's watching, and resist lingering in front of the refrigerator with the door open.
- Hoof it. If kids can safely ride a bike or walk to school or to visit friends rather than catch a ride from parents, encourage it! Or if safety is a concern, consider organizing a "walking school bus" — this activity allows kids to walk or bike to and from school under the supervision of an adult.
- Let there be (more) light. Older kids can help replace regular light bulbs with energy-efficient ones. Compact fluorescent light bulbs provide about the same light output as incandescent bulbs, but last much longer and use a fraction of the energy.
- Reuse and recharge. Buy rechargeable batteries for your kids' electronics and toys and teach them how to care for and recharge them. This reduces garbage and keeps toxic metals, like mercury, out of landfills.
- Pass it on. Ask kids to gather toys, books, clothes, and other goods that they no longer use or want for donation to local charities. Have them ride along for the drop-off so they can see how groups such as Goodwill and the Salvation Army use donations to help others.

These tips are just some ways to get your family to become more earth-friendly. Once you get everyone on board with conservation, challenge your kids to come up with new and interesting ways of going green. Can your grade-schoolers cut back on the amount of paper they print from the Internet? How about your teens: Can they agree to take shorter showers? Engaging your kids in this way will get them to start thinking about how their individual efforts affect the world they live in, and how little changes can — and will — make a difference.

Reviewed by: Mary L. Gavin, MD
Date reviewed: January 2012
Stonehenge (stōnˈhĕnjˌ), group of standing stones on Salisbury Plain, Wiltshire, S England. Preeminent among megalithic monuments in the British Isles, it is similar to an older and larger monument at Avebury. The great prehistoric structure is enclosed within a circular ditch 300 ft (91 m) in diameter, with a bank on the inner side, and is approached by a broad roadway called the Avenue. Within the circular trench the stones are arranged in four series: The outermost is a circle of sandstones about 13.5 ft (4.1 m) high connected by lintels; the second is a circle of bluestone menhirs; the third is horseshoe shaped; the innermost, ovoid. Within the ovoid lies the Altar Stone. The Heelstone is a great upright stone in the Avenue, northeast of the circle.

It was at one time widely believed that Stonehenge was a druid temple, but this is contradicted by the fact that the druids probably did not arrive in Britain until c.250 B.C. In 1963 the American astronomer Gerald Hawkins theorized that Stonehenge was used as a huge astronomical instrument that could accurately measure solar and lunar movements as well as eclipses. Hawkins used a computer to test his calculations and found definite correlations between his figures and the solar and lunar positions in 1500 B.C. However, as a result of the development of calibration curves for radiocarbon dates, Stonehenge is now believed to have been built in several stages between c.3000 and c.1500 B.C., with the main construction completed before 2000 B.C. Excavation and testing in 2008 established a date of between 2400 and 2200 B.C. for the erection of the bluestones.

Some archaeologists objected to Hawkins's theory on the basis that the eclipse prediction system he proposed was much too complex for the Early Bronze Age society of England. Most archaeologists agree, however, that Stonehenge was used to observe the motions of the moon as well as the sun. Research by the archaeologist Alexander Thom, based on the careful mapping of hundreds of megalithic sites, indicates that the megalithic ritual circles were built with a high degree of accuracy, requiring considerable mathematical and geometric sophistication.

More recent speculation on the Neolithic ceremonial and cultural functions of Stonehenge has included its possible use as a center for healing and as a burial ground for a local ruling family. Among the burials near the site have been found remains of a man who was raised near the Alps and a teenage boy raised near the Mediterranean. Evidence of a former stone circle with 25 bluestones has been found nearby beside the River Avon; the stones once used there may have been incorporated into Stonehenge.

See G. S. Hawkins, Stonehenge Decoded (1965); H. Harrison and L. E. Stover, Stonehenge (1972); A. Thom, Megalithic Sites in Britain (1967) and Megalithic Lunar Observations (1973).
Genetic analysis finds vegetation change around same time as megafauna extinction

For want of mums, woolly mammoths were lost. A genetic analysis of ancient permafrost suggests that after the last Ice Age the Arctic shifted from a landscape dominated by nutritious flowering plants known as forbs to one dominated by hard-to-digest grasses and woody plants. Evolutionary geneticist Eske Willerslev of the University of Copenhagen and his colleagues report the finding in the Feb. 6 Nature. That shift may have helped drive the extinction of large herbivores such as woolly mammoths and woolly rhinoceroses, Willerslev speculates.

The researchers examined 242 permafrost samples from 21 sites in Siberia, Alaska and Canada. Each sample was carbon dated to determine its age. To identify plants in the samples, the researchers sequenced DNA from plant organelles that carry out photosynthesis. From about 50,000 years ago until around 12,000 years ago, the most abundant plants in the samples were forbs.
The continuing increase in atmospheric carbon dioxide levels could trigger a large, abrupt shift in the climate, termed a tipping point. Once a tipping point has been reached, climate change becomes uncontrollable and the planet enters a chaotic era lasting thousands of years before natural processes can bring the climate back under control.

Climate history demonstrates that the transition from ice ages to warm, interglacial periods (such as we are in now) is not smooth. The transitions are dramatic, with sharp changes setting off accelerated warming that triggers the disappearance of summer ice in the Arctic, melting of the Greenland ice sheet, melting of the West Antarctic ice sheet and large-scale destruction of rainforests.

The Paleocene epoch, which occurred 55 million years ago, provides an example of the Earth undergoing a large and abrupt climate shift. At this time the planet was undergoing a gradual global warming similar to the warming trend the planet is experiencing today. Following this gradual warming period, a sudden release of an enormous amount of carbon flooded the atmosphere. It is hypothesized that the source of this carbon was methane escaping from the ocean floor and the permafrost. Global surface temperature rose 9–16 °F and the ocean became acidic. Fossil records demonstrate that mid-latitude flora migrated toward the North Pole. On the ocean floor, 30 to 35% of the fauna became extinct. This warm episode lasted about 120,000 years. It took another 40,000 years for the planet to return to cooler conditions through the removal of carbon from the atmosphere by rock weathering. In other words, it took the planet 40,000 years to transfer the excess carbon from the short-term carbon reservoir to the long-term reservoir (see my posting entitled “Carbon Cycle”).

Geological evidence clearly demonstrates that the climate has changed dramatically in the past in response to natural forces. Why should we be concerned about the current global warming trend? The answer to this question is simple. The release of carbon dioxide into the atmosphere from the burning of fossil fuel is not part of the natural cycle. This unnatural release of carbon dioxide triggers a carbon overload in the atmosphere. Global warming is a result of this carbon overload. The current global warming trend is accelerating faster than past warming trends.

Will the release of carbon dioxide into the atmosphere by the continued burning of fossil fuel induce a climate tipping point? The amount of carbon that entered the climate system during the Paleocene warming is about the same amount of carbon that is projected to enter the system this century if fossil fuel use continues to grow at the current pace. The Earth’s atmosphere can safely handle around 700 billion tons of carbon, but the atmosphere now contains about 800 billion tons. As the world population continues to increase, so will atmospheric levels of carbon.

Methane plumes have recently been detected rising from the floor of the Arctic Ocean. Between 2003 and 2008 the Greenland ice sheet lost an area 10 times the size of Manhattan, and in the summer of 2010 a 100 square mile island of ice (four times the size of Manhattan) broke off the Greenland ice sheet. In 2008 the Northwest Passage was ice free for the first time in history (see my posting entitled The Arctic Feedback Factor and Climate Change).

The planet did not have to deal with over 6 billion human inhabitants during previous climate shifts. Humans can’t regulate the climate.
Humans, through the burning of fossil fuel, make it more difficult for Mother Nature to keep the climate under control. The good news is that the technology that exists today will allow us to solve the carbon problem! Solving the carbon problem will be discussed in future blog posts.
Students learn how holding a political office effects change

9, 10, 11, 12

Title – Do Something about… Voting/Civic Engagement Lesson 4 – How have people used elected offices to make changes?
By – Do Something, Inc.
Primary Subject – Social Studies
Secondary Subjects –
Grade Level – 9-12

Do Something about… Teen Voting/Civic Engagement

The following lesson is the fourth lesson of a 10-lesson Teen Voting/Civic Engagement Unit from Do Something, Inc. Other lessons in this unit are as follows:

| Lesson 1: What is Civic Action? Students learn about why people get involved in their communities. |
| Lesson 2: Why Is Democracy So Demanding? Students will discuss the role of citizens in a democracy. |
| Lesson 3: Representin’ Students learn about the system of representation in a democracy. |
| Lesson 4: How have people used elected offices to make changes? (See lesson below) Students learn how holding a political office effects change. |
| Lesson 5: Social Capital Students learn about social capital and how to use networking for civic action. |
| Lesson 6: Politics, A Laughing Matter Students learn how cartoons and satire raise concerns about an issue. |
| Lesson 7: How do organizers bring about change? Students learn about the strategies of unionizing and boycotting. |
| Lesson 8: Why do I have to do jury duty? Students learn how jury duty is a type of civic engagement. |
| Lesson 9: How can I use writing to lead others to action? Students learn how the written word is a method of civic action. |
| Lesson 10: How can speaking engage others in my cause? Students learn how speeches can gather support for community change. |

More student teen voting resources can be found at:
For more Service-Learning Curricula check out:

Lesson 4: How have people used elected offices to make changes?

Students will learn about how holding a political office is a way to effect change.

Civics Standard 20:
- Understands the roles of political parties, campaigns, elections, and associations and groups in American politics

Reading Standard 7:
- Uses reading skills and strategies to understand and interpret a variety of informational texts

Writing Standard 4:
- Uses a variety of print and electronic sources to gather information for research topics

- Warm-up: Have students discuss the qualities of an effective leader.
- Tell students that one way to effect change in their community is by running for public office. Explain to students that there are many different types of elected political positions at the city, state and federal levels.
- Discover: Have students read and discuss a few of the articles about young people who have been appointed to a government office. Why did this young person run for office? What challenges did they overcome? How did his/her youth affect their perspective?
- Have students attend a school board or town hall meeting and take notes on what issues are being discussed. How do these issues affect the students? Could they see themselves becoming a leader? What talents could they use to lead others? Students can investigate how to become a member of their school board. Are there student representatives? If not, is this something that the class can work on to change?
- Take Action: In their Civic Action Group, students should think about which elected position would have the most power to create the changes they want in regard to their issue. Have students pretend that they are going to run for an elected position in their state.
As a group, they should identify:
- What elected position can bring about the most change regarding your issue?
- Why do you think this is the most effective position to address your group’s concerns?
- How does a person get elected to that position?
- Once in that position, how does that person go about effecting change?
- Identify individuals who have used this position to create changes in policy.

E-mail www.dosomething.org!
Selden, P. A., W. A. Shear & M. D. Sutton. 2008. Fossil evidence for the origin of spider spinnerets, and a proposed arachnid order. Proceedings of the National Academy of Sciences of the USA 105 (52): 20781-20785.

A new paper published today presents us with a revised description of Attercopus fimbriunguis, the stem-spider (thanks to William Shear, one of the paper's authors, for sending it out). With this redescription, the position of Attercopus is secured as one of palaeontology's great "transitional fossils".

Attercopus is a fossil arachnid from the Middle Devonian (bonus question: what is the connection between Attercopus and Barad-dur?), so dates back to when the terrestrial environment was first finding its feet (and in those invertebrate-dominated days, there were often a lot of them to find). Most modern terrestrial animals were yet to make an appearance - the vertebrates were still keeping to the water, the insects were there but not yet a significant part of the ecosystem. It was the age of the arachnids and myriapods. Even within the arachnids, most of the taxa then present would have been unfamiliar to modern humans, and the currently most familiar group of arachnids, the spiders, had not yet made an appearance. That is where Attercopus becomes so significant.

Spiders are actually not typical arachnids at all. Like all other arthropods, the ancestral arachnid form has the body divided up into segments. These segments are externally visible as the cuticle is divided into plates, with separate dorsal (tergites) and ventral (sternites) plates. In most living arachnid orders (such as scorpions and harvestmen), these external plates are still present. In most spiders, the cuticular plates have become fused, and the segmentation is not externally visible. One small group of spiders that is today restricted to eastern Asia, the Mesothelae or liphistiomorphs, differ from all other living spiders (the Opisthothelae, to which they form the sister group) in retaining visible tergites on the opisthosoma (abdomen), though they do not have visible sternites. Mesothelae also differ from Opisthothelae in lacking poison glands in the fangs.

As well as the concealed segmentation (independently acquired by acaromorphs such as mites), spiders are also distinct in their production of silk. Only one other group of arachnids, the pseudoscorpions (as well as numerous groups of insects), produces silk. In pseudoscorpions, the silk-producing glands are in the pedipalps. In spiders, they are at the back end of the underside of the opisthosoma, and open through appendages called spinnerets.

The presence of silk-producing spigots in Attercopus was first established in 1991, when it was connected to an isolated Devonian 'spinneret' described two years previously (Selden et al., 1991). As redescribed by Selden et al. (2008), however, Attercopus shows a number of significant differences from modern spiders. It retains distinct external segmentation, both tergites and sternites. Also, rather than having the silk glands on spinnerets, the spigots are positioned directly on the underside of the opisthosoma (and their status as silk glands is confirmed in one specimen by the presence of a strand of silk preserved in the process of being exuded from one of the spigots!). The 'spinneret' previously described for Attercopus, as it turns out, was an artifact resulting from post mortem folding of the cuticle.
Without the guiding control of spinnerets, Attercopus would not have produced silk in well-defined strands like a modern spider, but in more of a shapeless mat. This is not surprising - the distribution of silk use in modern spiders suggests that its use in reproductive functions (constructing egg cases, spermatophores, etc.) or in constructing burrows probably pre-dated its use in prey capture. Attercopus also appears to have lacked poison glands (again, their previously-suggested presence appears to have been an artifact), which tallies well with their absence in living Mesothelae. Perhaps most intriguing of all (at least to me) is that Attercopus possessed a segmented flagellum. The flagellum is a character of the Uropygi (whip scorpions) which, together with the Amblypygi, form the probable living sister group to spiders in the clade Tetrapulmonata (Shultz, 2007). At present, we cannot say whether the flagellum is an ancestral feature of Tetrapulmonata that was lost in spiders and amblypygids, or was independently derived in uropygids and Attercopus. Selden et al. (2008) also identify sternites and a flagellum in a Permian spider-like fossil, Permarachne novokshonovi, and establish a new order, Uraraneida, for the two fossils. This is not a major change in classification, as Uraraneida is still regarded as the stem group to modern spiders. Also, as the characters uniting Attercopus and Permarachne (free sternites and a flagellum) are both probably plesiomorphies, the Uraraneida is not necessarily monophyletic. With the definite exclusion of Attercopus from the crown group, the earliest known true spider is now Palaeothele montceauensis, a liphistiomorph from the late Carboniferous. The big change between Attercopus and crown Araneae seems to have been the development of spinnerets instead of bare spigots. Developmental genetic studies show that the spinnerets are homologous to opisthosomal legs, which is remarkable because arachnids don't have legs on the opisthosoma. To find opisthosomal appendages on the arachnid lineage, one has to go to their living sister group, the horseshoe crabs. Because of the derived position of spiders within arachnids, and the fact that all other fossil arachnids lack opisthosomal appendages, it is unlikely that opisthosomal appendages in spiders represent a retained plesiomorphy that was lost in all other arachnids. Selden et al. (2008) suggest that this may represent reactivation of suppressed developmental genes, as supposedly seen in stick insects. But despite my wince at their ill-chosen supporting example, legs-to-spinnerets is perhaps a good candidate for such a process. While obvious opisthosomal appendages are not present in arachnids, developmental studies indicate that the covering plates of the arachnid book lungs are homologous to appendages, and it has been suggested for scorpions that the sternites themselves represent fused appendage remnants. The sad fact, I feel, is that our understanding of how developmental processes evolve is still all too rudimentary. For all the vast amount of genetic studies that have been conducted in recent decades, most have been focused on a relatively small number of model species - Drosophila melanogaster, Danio rerio, Arabidopsis thaliana,... Consideration of a single species, or even a few closely-related species as has been done for Drosophila, becomes woefully inadequate when considering questions raised when debating the possibility of genetic recurrence. 
What happens to a developmental gene when it is inactivated for a certain function? Can it be readily reactivated, or does genetic drift seal its fate as a pseudogene? Is genetic reactivation even the only possible explanation - what about those genes that are still developmentally functional elsewhere in the body? Can they become activated elsewhere in the embryo to give rise to novel structures? Could the spinnerets of spiders be not recurrences of the lost opisthosomal appendages, but rather re-deployments of the appendages still present on the prosoma? Or could they somehow represent a combination of the two? Whatever the answers that are yet to be found, fossils such as Attercopus will always be critical in directing our searches for them. Selden, P. A., W. A. Shear & P. M. Bonamo. 1991. A spider and other arachnids from the Devonian of New York, and reinterpretations of Devonian Araneae. Palaeontology 34: 241–281. Shultz, J. W. 2007. A phylogenetic analysis of the arachnid orders based on morphological characters. Zoological Journal of the Linnean Society 150 (2): 221-265.
I created this geometry word search as a fun activity for my students to do one day for morning work. They love word searches, and we are about to begin our Geometry unit. This activity is great for introducing vocabulary terms, such as the names of the 3-D shapes, symmetry, vertices, faces, edges, etc. This could also be used for a filler activity or for something to do when they finish a quiz or test. Enjoy! Check out my other geometry study guides and practice packs!
Most types of terminal have commands that take longer to execute than they do to send over a high-speed line. For example, clearing the screen may take 20msec once the entire command is received. During that time, on a 9600 bps line, the terminal could receive about 20 additional output characters while still busy clearing the screen. Every terminal has a certain amount of buffering capacity to remember output characters that cannot be processed yet, but too many slow commands in a row can cause the buffer to fill up. Then any additional output that cannot be processed immediately will be lost. To avoid this problem, we normally follow each display command with enough useless characters (usually null characters) to fill up the time that the display command needs to execute. This does the job if the terminal throws away null characters without using up space in the buffer (which most terminals do). If enough padding is used, no output can ever be lost. The right amount of padding avoids loss of output without slowing down operation, since the time used to transmit padding is time in which nothing else could be done. The number of padding characters needed for an operation depends on the line speed. In fact, it is proportional to the line speed. A 9600 baud line transmits about one character per msec, so the clear screen command in the example above would need about 20 characters of padding. At 1200 baud, however, only about 3 characters of padding are needed to fill up 20msec.
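The padding arithmetic above is easy to mechanize. Below is a minimal Python sketch of the proportional rule, not part of the original text: the 10-bits-per-character figure (start bit, eight data bits, stop bit) is an assumption about the serial framing, and the function name is purely illustrative.

```python
import math

def padding_chars(exec_time_ms, baud):
    """Number of pad (null) characters needed to cover a command
    that takes exec_time_ms to execute on a line running at baud.

    Assumes roughly 10 bits per transmitted character (start bit,
    8 data bits, stop bit), i.e. baud / 10 characters per second.
    """
    chars_per_ms = (baud / 10) / 1000.0
    return math.ceil(exec_time_ms * chars_per_ms)

print(padding_chars(20, 9600))  # ~20 characters, as in the example above
print(padding_chars(20, 1200))  # ~3 characters
```

The two printed values match the worked examples in the text: about 20 pad characters at 9600 baud and about 3 at 1200 baud for a 20msec command.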
Living Among the Mohawks, 1644 In 1609 Henry Hudson, an English sea captain under contract to the Dutch, sailed his ship the Half Moon up what would become the Hudson River to a point near present-day Albany, NY. Hudson searched for the elusive Northwest Passage that his Dutch masters hoped would provide them a short, northern sea route to the lucrative spice islands of the Pacific. His quest was unsuccessful, but Hudson's voyage was not a failure. The expedition did not lead to spices, but it did find another profitable commodity – furs. Distracted by the search for riches in the South Pacific, the West Indies and South America, the Dutch did not exploit their discovery in North America until 1621. That year witnessed the formation of the Dutch West India Company with a monopoly for trade along the shores of the Americas. The colony of New Netherland was established and its first settlement - Fort Orange - constructed at the head of the navigable waters of the Hudson River in 1624. Downstream, the Dutch established New Amsterdam on Manhattan Island, where the Hudson meets the Atlantic Ocean. The Dutch traders were primarily interested in furs – beaver furs for fashionable hats. Their principal trading partner was the Mohawks, who lived in the area surrounding Fort Orange. The Mohawks were a part of the Iroquois family of Native Americans that also included the Oneida, Onondaga, Cayuga and Seneca tribes. Three centuries earlier, hardships that included famine and warfare forced these groups to abandon their homeland in the Mississippi Valley and make an exodus to the New York area. The Iroquois invasion was a gradual process in which the tribes carved out separate homelands by ousting the resident Algonquians. The Mohawks had established themselves in the area of the headwaters of the Hudson River approximately one hundred years before the arrival of the Dutch. The fur trade was beneficial for both the Dutch and the Mohawks. Fort Orange and New Amsterdam thrived. Soon the Dutch West India Company was enticing settlers who would convert the forests to farmland to make the voyage across the Atlantic. Johannes Megapolensis was a thirty-nine-year-old Dutch minister who made his way to Fort Orange with his wife and four children in 1642. He had a six-year contract with Kiliaen van Rensselaer, a director of the West India Company, to fill the spiritual needs of the inhabitants of the area. Megapolensis described his experience in a series of letters that were later published as a book. In the following excerpt, he provides a portrait of the Mohawks: "The principal nation of all the savages and Indians, hereabouts with which we have the most intercourse, is the Mohawks who have laid all the other Indians near us under contribution. The people and Indians here in this country are like us Dutchmen in body and stature; some of them have well formed features, bodies and limbs; they all have black hair and eyes, but their skin is yellow. In summer they go naked, having only their private parts covered with a patch. The children and young folks to ten, twelve and fourteen years of age go stark naked. In winter, they hang about them simply an undressed deer or bear or panther skin; or they take some beaver and otter skins, wild cat, raccoon, martin, otter, mink, squirrel or such like skins, which are plenty in this country, and sew some of them to others, until it is a square piece, and that is then a garment for them. . . 
They make themselves stockings and also shoes of deer skin, or they take leaves of their corn, and plait [braid] them together and use them for shoes. They generally live without marriage; and if any of them have wives, the marriage continues no longer than seems good to one of the parties, and then they separate, and each takes another partner. I have seen those who had parted, and afterwards lived a long time with others, leave these again, seek their former partners, and again be one pair. The women, when they have been delivered, go about immediately afterwards, and be it ever so cold, they wash themselves and the young child in the river or the snow. They will not lie down (for they say that if they did they would soon die), but keep going about. They are obliged to cut wood, to travel three or four leagues with the child; in short, they walk, they stand, they work, as if they had not lain in, and we cannot see that they suffer any injury by it. . . The men have great authority over their concubines, so that if they do anything which does not please and raises their passion, they take an axe and knock them in the head, and there is an end of it. The women are obliged to prepare the land, to mow, to plant, and do everything; the men do nothing, but hunt; fish, and make war upon their enemies. They are very cruel towards their enemies in time of war; for they first bite off the nails of the fingers of their captives, and cut off some joints, and sometimes even whole fingers; after that, the captives are forced to sing and dance before them stark naked; and finally, they roast their prisoners dead before a slow fire for some days, and then eat them up. The common people eat the arms, buttocks and trunk, but the chiefs eat the head and the heart.

[Image: Mohawk Chief Joseph Brant, from a portrait]

Our Mohawks carry on great wars against the Indians of Canada, on the River Saint Lawrence, and take many captives, and sometimes there are French Christians among them. They spare all the children from ten to twelve years old, and all the women whom they take in war, unless the women are very old, and then they kill them too. Though they are so very cruel to their enemies; they are very friendly to us, and we have no dread of them. We go with them into the woods, we meet with each other, sometimes at an hour or two's walk from any houses, and think no more about it than as if we met with a Christian. They sleep by us, too, in our chambers before our beds. I have had eight at once lying and sleeping upon the floor near my bed, for it is their custom to sleep simply on the bare ground, and to have only a stone or a bit of wood under their heads. In the evening, they go to bed very soon after they have supped; but early in the morning, before day begins to break, they are up again. They make their houses of the bark of trees, very close and warm, and kindle their fire in the middle of them. They also make of the peeling and bark of trees, canoes or small boats, which will carry four, five and six persons. In like manner, they hollow out trees, and use them for boats, some of which are very large. Their weapons in war were formerly a bow and arrow, with a stone axe and mallet; but now they get from our people guns, swords, iron axes and mallets. Their money consists of certain little bones, made of
shells or cockles, which are found on the sea-beach; a hole is drilled through the middle of the little bones, and these they string upon thread, or they make of them belts as broad as a hand, or broader, and hang them on their necks, or around their bodies. They have also several holes in their ears, and there they likewise hang some. They value these little bones as highly as many Christians do gold, silver and pearls; but they do not like our money, and esteem it no better than iron." This eyewitness account appears in: Jameson, J. Franklin (ed.), Narratives of New Netherland 1609-1664 (1909); Ellis, David M. (et al), A Short History of New York State (1957). How To Cite This Article: "Living Among the Mohawks, 1644," EyeWitness to History, www.eyewitnesstohistory.com (2007).
How do bats move without hitting other bats, which are in thousands, in caves? Bats use echolocation to hunt for food and to avoid collisions and obstacles. They have the ability to create and hear noises that humans cannot hear. The sound waves bounce off of objects and back to the bat, which can then judge the size and distance of the object. These ultrasonic noises vary in length and pulse frequency, and are unique to the individual. Each bat recognizes its own pulse reflections, or "voice," and uses it to avoid objects and to identify food. Flying around with thousands of other bats inside a cave creates a chaotic amount of noise, so inside the caves the bats simply ignore their personal navigation systems. Echolocation is a method of sensory perception by which certain animals orient themselves to their surroundings, detect obstacles, communicate with others, and find food. During echolocation, a series of short, high-pitched sounds are emitted by an animal. These sounds travel out away from the animal and then bounce off objects and surfaces in the animal's path, creating an echo. The echo returns to the animal, giving it a sense of what is in its path. A bat can determine an object's size, shape, direction, distance, and motion. This echolocation system is so accurate that bats can detect insects the size of gnats and objects as fine as a human hair.
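The distance-judging step can be illustrated with a little arithmetic: the echo's round-trip delay, multiplied by the speed of sound and halved, gives the range to the target. Here is a minimal Python sketch; the 343 m/s speed of sound assumes dry air at about 20°C, and the function name is illustrative rather than anything from the article.

```python
SPEED_OF_SOUND = 343.0  # m/s in dry air at ~20 degrees C (an assumption)

def target_distance(echo_delay_s):
    """Distance to an object from the round-trip echo delay.

    The pulse travels out and back, so the one-way distance is
    half of (speed * delay).
    """
    return SPEED_OF_SOUND * echo_delay_s / 2

# A 10 ms round trip puts the object about 1.7 m away.
print(target_distance(0.010))
```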
Railroad directions are used to describe train directions on rail systems. The terms used may be derived from such sources as compass directions, altitude directions, or other directions. However, the railroad directions frequently vary from the actual directions, so that, for example, a "northbound" train may really be headed west over some segments of its trip, or a train going "down" may actually be increasing its elevation. Railroad directions are often specific to system, country, or region. Many rail systems use the concept of a center (usually a major city) to define rail directions. Up and down In British practice, railway directions are usually described as up and down, with up being towards a major location. This convention is applied not only to the trains and the tracks, but also to items of lineside equipment and to areas near a track. Since British trains run on the left, the "up" side of a line is on the left when proceeding in the "up" direction. The names originate from the early railways, where trains would run up the hills to the mines, and down to the ports. On most of the network, "up" is the direction towards London. In most of Scotland, with the exception of the West and East Coast Main Lines, "up" is towards Edinburgh. The Valley Lines network around Cardiff has its own peculiar usage, relating to the original meaning of traveling "up" and "down" the valley. On the former Midland Railway "up" was towards Derby. On the Northern Ireland Railways network, "up" generally means toward Belfast (the specific "zero" milepost varying from line to line); except for cross-border services to Dublin, where Belfast is "down". Mileposts normally increase in the "down" direction, but there are exceptions, such as the Trowbridge line between Bathampton Junction and Hawkeridge Junction, where mileage increases in the "up" direction. Individual tracks will have their own names, such as Up Main or Down Loop. Trains running towards London are normally referred to as Up trains, and those away from London as Down. Hence the Down Night Riviera runs to Penzance and the Up Flying Scotsman to Kings Cross. In China, railway directions on lines with a terminus in Beijing are described as "up" (上行) and "down" (下行), with up towards Beijing; trains leaving Beijing are "down", while those going toward Beijing are "up". Trains running through Beijing may have two or more numbers; for example, the train from Harbin to Shanghai, K58/55, uses two different numbers: in the Harbin-Tianjin section the train runs toward Beijing and is known as K58, but in the Tianjin-Shanghai section it is known as K55. The opposite train from Shanghai to Harbin is known as K56/57: K56 is used from Shanghai to Tianjin and K57 from Tianjin to Harbin. In Japan, railway directions are referred to as "up" (上り, nobori) and "down" (下り, kudari), and these terms are widely employed in timetables and station announcements for the travelling public. For JR Group trains, trains going towards the capital Tokyo are "up" trains, while those going away from the capital are "down" trains. For private railway operators, the designation of "up" or "down" (if at all) usually relies on where the company is headquartered as "up". The railway systems in the Australian states have usually followed the practices of railways in the United Kingdom. 
Railway directions are usually described as up and down, with up being towards the major location in most states. The major location is usually the capital city of the particular state, so in the state of New South Wales trains running away from its capital Sydney are down trains, while in Victoria trains running away from its capital Melbourne are down trains. An interstate train traveling from Sydney to Melbourne would be a down train until it crosses the state border at Albury, where it changes its classification to an up train. In states that follow this practice, exceptions do exist for individual lines. In the state of Queensland, up and down directions are individually defined for each line. So a train heading towards the main railway station in the capital Brisbane (Roma Street station) would be classified as an up train on some lines but as a down train on other lines. Inbound and outbound In many commuter rail and rapid transit services in the United States, the rail directions are related to the location of the city center. The term inbound is used for the direction leading in toward the city center and outbound is used for the opposite direction leading out of the city center. City name directions Some British rail directions commonly used are London and Country. The London end of a station is the end where trains to London depart. The country end is the opposite end, where trains to the country depart. This usage is problematic where more than one route to London exists (e.g. at Exeter St Davids). Even and odd In France, railway directions are usually described as Pair and Impair (meaning Even and Odd), corresponding to Up and Down in the British system. Pair means heading toward Paris, and Impair means heading away from Paris. This convention is applied not only to the trains and the tracks, but also to items of lineside equipment. Pair is also quasi-homophonic with Paris, so direction P is equivalent either with direction Pair or with direction Paris. A similar system is in use in Italy, where directions can be Pari or Dispari (Even and Odd respectively). Pari (Even) trains conventionally travel north- and west-bound. The city of Paris is referenced in colloquial use (Parigi in Italian), with Pari trains virtually leading towards it (Paris being in a north-western direction from any point in Italy). In double track loop lines – such as those encircling a city – the tracks, trains and trackside equipment can be identified by their relative distance from the center of the loop. Inner refers to the track and its trains that are closer to the geographic center. Outer refers to the track and its trains that are furthermost from the geographic center. One example is the City Circle line in the Sydney Trains system. For circle routes, the directions may indicate clockwise or counterclockwise (anti-clockwise) bound trains. For example, on the Circle line of London Underground or the loop of the Central line, the directions are often referred to as "inner rail" (anti-clockwise) or "outer rail" (clockwise). The same practice is used for circle routes in Japan, such as the Yamanote Line in Tokyo and the Osaka Loop Line, where directions are usually referred to as "outer" (外回り, soto-mawari) and "inner" (内回り, uchi-mawari), in a system where trains go clockwise on the outer track and counter-clockwise on the inner track. Most railroads in the United States use nominal cardinal directions for the directions of their lines, which often differ from actual compass directions. 
These directions are often referred to as "railroad" north, south, east, or west, to avoid confusion with the compass directions. Typically an entire railroad system (the lines of a railroad or a related group of railroads) will describe all of its lines by only two directions, either east and west, or north and south. This greatly reduces the possibility of misunderstanding the direction in which a train is travelling as it traverses lines which may twist and turn or even reverse direction for a distance. These directions also have significance in resolving conflicts between trains running in opposite directions. For example, many railroads specify that trains of equal class running to the east are superior to those running west. This means that, if two trains are approaching a passing siding on a single-track line, the inferior westbound train must "take the siding" and wait there for the superior eastbound train to pass. In the United States, most railroads use "east and west", and it is unusual for a railroad to designate "north and south" (the New York City Subway and the Washington Metro are rare examples). Even-numbered trains (superior) travel east (or north). Odd-numbered trains (inferior) travel west (or south). An easy way to remember this: "ODD trains go to San Francisco (west). VERY ODD trains go to Los Angeles (south)". Other names for north and south In New York City, the terms uptown and downtown are used in the subway to refer to northbound and southbound respectively. The nominal railroad direction is determined by how the line will travel when it enters Manhattan. In Hong Kong practice, the up track refers to northbound, and the down track refers to southbound. This old practice on the British Section of the Kowloon-Canton Railway, now called the East Rail, is followed on the West Rail. In other words, trains towards the city center of Kowloon run in the "down" direction instead of the "up" direction. "Northbound" and "southbound" are, nonetheless, more commonly used. On the original metro network of the MTR, platforms for the general direction towards the depot are numbered with odd numbers, whereas platforms for the opposite direction are numbered with even numbers. Depots are usually located on or near the end of MTR lines. Exceptions are stations that are located further than the depot, such as the stations of Ngau Tau Kok and Kwun Tong on the Kwun Tong Line. For railways in China that are not connected with Beijing, north and west are used as "up", and east and south as "down". Odd-numbered train codes are used for "down" trains, while even numbers are used for "up"; for example, train T27 from Beijing West to Lhasa is "down" (going away from Beijing) since 27 is odd.
A cancer drug rewires neurons affected by Alzheimer's or dementia to improve memory, according to new findings. The research comes from Rutgers University, where researchers gave rats a cancer drug – RGFP966 – and saw that the rats were more attentive, retained more information and developed new connections so memories could be transmitted. Lead author Kasia M. Bieszczad said, "Memory-making in neurological conditions like Alzheimer's disease is often poor or absent altogether once a person is in the advanced stages of the disease. This drug could rescue the ability to make new memories that are rich in detail and content, even in the worst case scenarios." With dementia and Alzheimer's, brain cells shrink and die because the synapses that transfer information become weak. There is currently no treatment to repair or treat such damage after it occurs. The cancer drug being studied is typically used to stop healthy cells from becoming cancerous. The drug affects the brain by making neurons more plastic, which creates better connections and boosts memory. Rats given the drug had improved memory for what they had learned from the researchers, compared to rats that did not receive the drug. Furthermore, the rats were more attentive to what they were being taught, which is essential for humans because sound and signaling are critical forms of communication and learning. Bieszczad added, "People learning to speak again after a disease or injury as well as those undergoing cochlear implantation to reverse previous deafness, may be helped by this type of therapeutic treatment in the future. The application could even extend to people with delayed language learning abilities or people trying to learn a second language." Hypersensitivity to auditory information allows for better processing and the creation of new pathways – this means more information can be turned into long-term memory. Bieszczad concluded, "People normally remember an experience with limited detail – not everything we see, hear and feel is remembered. What has happened here is that memory becomes closer to a snapshot of the actual experience instead of being sparse, limited or inaccurate." The findings were published in the Journal of Neuroscience.
How the Kumon Program Helps Students Achieve Success Developing analytical skills is an important aspect of every child's learning process. Once developed, children are able to strengthen their critical thinking skills and use that knowledge to draw conclusions. All of this, however, is not possible without establishing a clear understanding of the basics. Through the step-by-step learning process of the Kumon Program, students gradually develop the skills needed to solve increasingly difficult concepts. When Kumon students are able to solve problems with ease, they are ready to advance to the next level of the Kumon Program. Over the course of Kumon study, students gradually build their understanding of the critical math and reading concepts they'll need to excel in high school. As students advance through the Kumon Program, so does their ability to solve more difficult problems. For example, a student studying subtraction problems in Kumon, like 4 – 1 or 5 – 1, only does so after showing they are able to complete addition problems like 2 + 1 and 3 + 1 with ease. Once the student demonstrates success in this concept, he or she progresses to the next Kumon level, applying the skills learned from the previous level. Visit Kumon.com to learn how the Kumon Program can help your child achieve success.
The Lotus Diagram

The Continuous Improvement Classroom
• Ground rules
• Classroom mission statements created by students
• Quality tools and PDSA used regularly
• Classroom meetings facilitated by students
• Student-led conferences
• Classroom and student measurable goals
• Classroom data centers
• Student data folders

Lotus Diagram Overview
• A lotus diagram is an analytical, organizational tool for breaking broad topics into components, which can then be further organized, analyzed or prioritized.
• It is a picture of the separate components of a topic. It provides a common understanding of the components of the topic.

What is a Lotus Diagram?
• A visual of the components of a whole
• An organizational tool for fostering thinking, analyzing, prioritizing and categorizing

When?
• Teams or individuals need a process for organizing and prioritizing components of a larger whole
• Examine complex systems
• Organize thoughts
• Organize brainstorming around a theme or topic

Why?
• Defines the topic being studied
• Fosters thinking skills
• Organizes ideas
• Identifies relationships

Quality Tools (themselves a lotus of eight petals): Lotus Diagram, Brainstorming, Affinity Diagram, Pareto Diagram, Nominal Group Technique, Cause & Effect Fishbone, Flow Chart, Run Chart

Student Project Example – Topic: Civil War
• Encourages creative thought and critical analysis as students explore new ideas.
• The lotus allows students to examine a variety of related areas – some of the outcomes of the war, for example – enriching a class's understanding of the war in the context of its time.

Lotus as a Planning Tool
• The lotus was used to gather information for a research project. It was color coded to differentiate between topics.

Other uses: the lotus in a kindergarten classroom; other uses of the lotus diagram?
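For readers who think in code, the structure of a lotus diagram can be modelled as a centre theme surrounded by up to eight petals, each of which can itself become the centre of a new lotus. The Python sketch below is purely illustrative; the class and the example themes are assumptions, not part of any standard lotus-diagram tool.

```python
# A lotus diagram: a centre theme surrounded by up to 8 sub-themes,
# each of which can be expanded into its own 8-petal lotus.
class Lotus:
    MAX_PETALS = 8

    def __init__(self, theme):
        self.theme = theme
        self.petals = []  # sub-Lotus cells around the centre

    def add_petal(self, theme):
        if len(self.petals) >= self.MAX_PETALS:
            raise ValueError("a lotus cell holds at most 8 petals")
        petal = Lotus(theme)
        self.petals.append(petal)
        return petal

# Illustrative use, mirroring the classroom example above:
room = Lotus("The Continuous Improvement Classroom")
for idea in ["Ground rules", "Classroom mission statements",
             "Quality tools and PDSA", "Classroom meetings",
             "Student-led conferences", "Measurable goals",
             "Classroom data centers", "Student data folders"]:
    room.add_petal(idea)
print([p.theme for p in room.petals])
```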
United States school systems commonly use the letter grade scale from “A” to “F,” with “A” being the highest grade. The cumulative numerical average refers to an average grade obtained by a student for classes taken. To determine this average, all grades earned are converted to numbers using the following scale: “A”=4, “B”=3, “C”=2, “D”=1 and “F”=0. Grade Point Average (GPA) is another commonly used numeric average that takes into account not only the grade but also the number of course credit hours. Calculating Cumulative Grade Numerical Average Convert the course grades into the numeric scale. For example, a student took three classes and earned the following grades "A," "A" and "B." Those grades correspond to 4, 4 and 3 on the numeric scale. Add up all numeric grades; in this example, the sum is 4 + 4 + 3 = 11. Divide the sum by the number of classes taken to calculate the cumulative numerical average. In this example, the cumulative numerical average is 11 / 3 = 3.66667. Round the cumulative numerical average to the third decimal place; in this example, the result is 3.667. Calculating Grade Point Average (GPA) Convert the course grades into the numeric scale. For example, a student took three classes with 3, 1 and 3 credit hours, respectively, and earned the following grades "A," "B" and "C." Those grades correspond to 4, 3 and 2 on the numeric scale. Multiply the grade by the credit hours for the respective course to calculate grade points. In this example, the grade points for each course are 12 (4 x 3), 3 (3 x 1) and 6 (2 x 3). Add up all grade points. In this example, the sum is 12 + 3 + 6 = 21. Add up all credit hours. In this example, the total credit hours are 3 + 1 + 3 = 7. Divide the total number of grade points by the total number of credit hours to calculate GPA. In this example, GPA is 21 / 7 = 3.
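Both procedures are easy to verify with a short program. The Python sketch below mirrors the two worked examples above; the function names are illustrative, and the grade-to-number mapping is the scale given at the start of the article.

```python
GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

def cumulative_average(grades):
    """Simple average of letter grades, rounded to 3 decimal places."""
    numeric = [GRADE_POINTS[g] for g in grades]
    return round(sum(numeric) / len(numeric), 3)

def gpa(grades_and_hours):
    """Credit-hour-weighted GPA from (grade, credit_hours) pairs."""
    points = sum(GRADE_POINTS[g] * h for g, h in grades_and_hours)
    hours = sum(h for _, h in grades_and_hours)
    return points / hours

print(cumulative_average(["A", "A", "B"]))   # 3.667, as in the first example
print(gpa([("A", 3), ("B", 1), ("C", 3)]))   # 3.0, as in the second example
```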
Learning technique for your instrument is often one of the most boring things you can practice. People tend to avoid practicing their technical exercises. For the most part, that may not be a terrible thing. There are usually better ways to spend your time practicing than working on unmusical technical exercises all day. There are two important exceptions to that though. Those exceptions are scales and arpeggios. What Are Scales? A scale is simply an ordered set of notes. Most often the first and last note of a scale are the same and they span an octave. There are many different kinds of scales, but in this article we'll discuss the most used scales. When the scale is ordered by notes of increasing pitch it is called an ascending scale. When it is ordered by decreasing pitch, it is called a descending scale. Why Scales Are So Important For almost all music, scales make up the foundation of the piece. Each piece is written in what's called a "key". The scale represents the foundation on which a piece of music is written. Most of the notes in a piece will be made up of the scale that represents the key the music is in. If you know, and are very familiar with, all of the notes in the scale, you'll have a good foundation for playing that piece of music. It doesn't stop there though. The foundation of learning about how music is created, what we call music theory, all begins with the scale. Learning all the scales will help you to be technically a better musician, but it will also allow you to understand the music you are playing much better. If I were learning a new instrument, one of the first things I would learn is my scales in every key. To understand how scales are constructed, you first need to understand simple intervals. An interval is the distance between two notes. This can be most clearly understood on a piano. Going from one note to the next adjacent note is called a half step. On the piano a half step is the smallest playable interval. A half step can be a white note to a black note or a white note to a white note. If you were to play every note on the piano, including black keys, you would be playing intervals of half steps. A whole step is two half steps. This is far from a comprehensive overview of intervals, but it will suffice for understanding the most basic of scales. If you want to learn scales, it may feel a little overwhelming. There are a lot of them. To get started, let's look at where to focus your attention. Starting with the chromatic scale is wise because it's the easiest to understand. For just about all of the commonly played instruments, a chromatic scale can be played by playing every note the instrument can play. Technically, a chromatic scale is 12 notes that are each a half step apart. You can play the chromatic scale starting on any note. The chromatic scale starting on C is shown below: The major scale contains 7 notes. Any note can start a major scale. To create an ascending major scale, you would need to follow this interval pattern: Whole Step, Whole Step, Half Step, Whole Step, Whole Step, Whole Step, Half Step. That's all the information you need for finding every major scale there is. Start on any note, then go up with the pattern W-W-H-W-W-W-H. If you want to play a descending major scale, you would need to reverse the pattern. An example of a C Major scale can be found below: You can find a list of all major scales here. Minor scales sound more sad and melancholy than major scales. Learning all major scales before your minor scales is the best idea. 
Once your major scales are learned you can modify them to make your minor scales. Minor scales are used often, but major scales may be used slightly more. The minor scale is special in that there are a few different iterations of it that composers use. Let's look over the three main ones. The natural minor scale follows the pattern of Whole Step, Half Step, Whole Step, Whole Step, Half Step, Whole Step, Whole Step. Start on any note and go up in the pattern of W-H-W-W-H-W-W and you'll be playing a natural minor scale. An example of a C natural minor scale can be found below: You can find a list of all natural minor scales here. The harmonic minor scale is the same as the natural minor scale with one exception. The 7th note is raised by one half step. The harmonic scale evolved out of the common harmonies composers would use in pieces written in minor keys. You'll often see the 7th note raised in minor pieces of music to reflect this harmony and scale. An example of the C harmonic minor scale is found below. The melodic minor scale is like a natural minor scale going up, but it has its 6th and 7th notes raised by a half step. It's almost a major scale, but the third is lowered a half step. When the scale is descending it is just like a natural minor scale. It's called a melodic minor scale because composers tend to use it more while writing melodies, at least that's the idea. Where To Start The first scale you should become familiar with is the chromatic scale. It's easy to understand and memorize, and there's really only one version of it that you have to learn. After you feel comfortable with the chromatic scale you should learn all of your major scales. There are 12 of them, so it may take you a little while to learn. It's important that you learn them well. It's not enough to just be able to play them once, ascending and descending. You need to become intimately familiar with them. In order to say that you "learned" a scale, it should be memorized, and you should confidently be able to play it at a fairly fast tempo. Once you can do this, you should feel free to move on to the next scale. After learning all of your major scales, you should work on the natural, harmonic, and finally the melodic minor scales, in that order. Like the major scales there are 12 of each form of minor scale, but they are all pretty similar. Once you have the foundation of the corresponding major key, learning the minor keys will not be as difficult. Learn a Scale a Week Take a week and become intimately familiar with the chromatic scale. Play it ascending and descending. Work on it in different rhythms and with different articulations. Play it slowly, and then play it fast as well. You should play it from the lowest octave on your instrument to the highest. If you're a pianist, scales are typically practiced in four octaves. I would suggest spending no more than 15 minutes a day in this initial scale learning phase. After you've spent a week on the chromatic scale, move on to the major scales. Add one major scale every week. After your major scales you can start learning the minor equivalents. Although there are three different minor scales, since you already have the foundational major scale learned, learning all three during a one-week period shouldn't be too difficult. If you practice in this way, you'll know the chromatic scale and all of the major scales on your instrument in 13 weeks, or just over 3 months. Add another 3 months for the minor scales. 
If you're consistent, you will have learned all of your important scales in just 6 months. To understand arpeggios, you must first understand what a chord is. A chord is a set of notes, most often 3 or more notes, that are played simultaneously. The most common chords are the major and minor chords. These chords are built from the first, third, and fifth notes of their corresponding scale. You play an arpeggio by playing the notes of a chord one after the other, instead of at the same time. You can use any chord to play an arpeggio. A C major arpeggio can be seen below: Like scales, major arpeggios should be learned first. This should be followed by minor arpeggios. There are not different versions of minor arpeggios like there are minor scales. You can add more notes to these chords, but for triads (three-note chords), a minor arpeggio is a minor arpeggio. Like scales, you should learn arpeggios through each octave on your instrument; piano arpeggios are usually played over four octaves. You can learn the arpeggios at the same time that you are learning your scales. Learning your major and minor arpeggios will give you a very strong harmonic foundation for understanding chords and chord progressions. Arpeggios are found often throughout music as well, so you'll be able to apply what you learn often to the music you are working on. Scales and arpeggios should be learned on just about every instrument. If you don't already have all of your scales and arpeggios learned, spend 15 minutes a day and do it! There are other scales you can look into, and larger arpeggios you can practice as well. For now, don't worry about anything past learning your major and minor scales and arpeggios. Learn them well. You'll find a new freedom on your instrument that you didn't have before.
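The interval patterns described above translate directly into code. The following Python sketch builds ascending scales from semitone patterns (whole step = 2 semitones, half step = 1) and derives a triad arpeggio from the first, third, and fifth scale degrees; the sharp-only note spelling and the function names are simplifying assumptions for illustration.

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Whole step = 2 semitones, half step = 1.
PATTERNS = {
    "major":         [2, 2, 1, 2, 2, 2, 1],  # W-W-H-W-W-W-H
    "natural minor": [2, 1, 2, 2, 1, 2, 2],  # W-H-W-W-H-W-W
}

def scale(root, kind):
    """Build an ascending scale from a root note and a pattern name."""
    idx = NOTES.index(root)
    result = [root]
    for step in PATTERNS[kind]:
        idx = (idx + step) % 12
        result.append(NOTES[idx])
    return result

def triad_arpeggio(root, kind):
    """First, third, and fifth notes of the corresponding scale."""
    s = scale(root, kind)
    return [s[0], s[2], s[4]]

print(scale("C", "major"))                   # C D E F G A B C
print(triad_arpeggio("C", "major"))          # C E G
print(triad_arpeggio("C", "natural minor"))  # C D# G (i.e. C Eb G)
```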
DOE Science Showcase - Rare Earths Research The rare earths are a set of 17 elements in the periodic table comprising scandium, yttrium, and the lanthanides. Refined rare earths are critical materials in a broad variety of applications, including permanent magnets, rechargeable batteries, smart phones, computers, televisions, advanced lighting, clean energy, advanced transportation, health care, advanced optics, environmental mitigation, national defense, precision weapons systems, lasers, and more. Rare earths occur naturally as mixtures in ore but must be purified prior to use. Because mining and separation of the rare earth ore is challenging, DOE researchers are working to develop new recycling and recovery methods and to find substitutes for rare earths. More information, including DOE research reports, publications, and data collections about rare earth research projects, is available in the DOE databases and related resources provided below. Related Research Information in DOE Databases - DOE PAGES – journal articles and accepted manuscripts related to rare earths research. - SciTech Connect – rare earths research from DOE science, technology, and engineering programs. - ScienceCinema – scientific videos featuring rare earth research from DOE. In the OSTI Collections – Rare Earth Elements, Dr. William Watson, OSTI For additional information, see the OSTI Catalogue of Collections. - Rare earth element, Wikipedia - DOE Office of Science, Energy.gov - DOE Office of Basic Energy Sciences, Energy.gov - U.S. Department of Energy Critical Materials Strategy, SciTech Connect - Rare Earth Recycling, DOE Office of Basic Energy Sciences - What would we do without Rare Earths, SciTech Connect - Lawrence Livermore National Laboratory (LLNL) - Rare earths advance search for unified theory, LLNL - National Ignition Facility (NIF) - Ames Laboratory - What are the Rare Earths?, Ames Laboratory - Critical Materials Institute (CMI), Ames Laboratory - What would we do without rare earths?, CMI - Mastery of rare-earth elements vital to America’s security, Ames Laboratory - National Energy Technology Laboratory (NETL) - Rare Earth Elements From Coal and Coal By-Products, NETL - A Physicochemical Method for Separating Rare Earths: Addressing an Impending Shortfall, University of Pennsylvania, SciTech Connect - Rare Earth Statistics and Information, U.S. Geological Survey (USGS) - Estimated Rare Earth Reserves and Deposits, Energy.gov - Rare Earth Technology Alliance
Much like on the other glaciers around Mt. McKinley, climbers are often found on the Kahiltna Glacier. During May and June the Kahiltna becomes a miniature international airport. People from around the world travel by ski plane to a base camp at 7,000 feet and from there they set out on adventures not only to the summit of Mt. McKinley but also to nearby Mt. Foraker, Mt. Hunter and other peaks of the Alaska Range. As many as 600 climbers each summer ascend Mt. McKinley using the famous West Buttress route. As climbers toil with the difficulties of traveling on the glacier and slowly making their way toward the summit, the glacier beneath their feet steadily transports tons of ice in the opposite direction. Like a huge conveyor belt, the Kahiltna Glacier has moved ice for centuries from 13,000 feet to about 1,000 feet above sea level. Over the last decade, twice-yearly measurements of snowfall and snowmelt have indicated that, like many other glaciers in Alaska and other parts of the world, this great glacier is shrinking. Closer to the terminus, cuts in the sides of the valley many feet above the present surface show where the glacier stood only a century ago. Changes to glaciers like the Kahiltna are caused by fluctuations in climate; currently, melting is increasing. This effect, combined with melting on the much larger coastal glaciers of Alaska, contributes more to rising sea level than the immense ice sheet that covers Greenland, which is also melting rapidly.
How cholesterol and hormones are related Cholesterol is often linked to various diseases which can be life-threatening in humans. In fact, this waxy substance is, in moderate amounts, required for proper digestion and other body functions. While a high level of bad cholesterol (LDL) may put us at risk of heart and cardiovascular diseases, maintaining a moderate amount of it is beneficial for all processes in our body. The main functions of cholesterol in our body include: - Cholesterol helps the liver create bile acids, which are used by the body to digest consumed food. Without bile acids, foods, especially fats, cannot be digested properly. - Lack of cholesterol will inhibit proper liver function, which may end in undigested fats. This may cause blockage of the arteries and heighten the risk of heart attack. - Cholesterol is a main component of cells and exists in every cell all over the body. - Cholesterol provides a protective barrier for the cells. - Vital hormones are produced with the help of cholesterol, which is stored in the adrenal glands. This is one of the most important functions of cholesterol. What hormones are related to cholesterol? Cholesterol occurs in every cell of the body, and since it is stored in the adrenal glands, it has a vital role in synthesizing the steroid hormones produced by the adrenals. Steroid hormones produced in the human body include corticoids and androgens. Cholesterol is synthesized in the liver; hence, drugs which inhibit the liver's production also have a very damaging effect upon the proper production of hormones. Effects of cholesterol deficiency on hormone production Hormones whose production depends on cholesterol have vital roles in our body. Hence, inhibited production, as a result of a limited cholesterol amount inside the body, will trigger some problems. Statins (cholesterol-lowering medications) have a big impact on cholesterol regulation in our body. Some of the effects are described below: - Glucocorticoids are important hormones produced in the human adrenal gland with the help of cholesterol. Hence, cholesterol deficiency will cause lower levels of these hormones, which may trigger symptoms of disturbed health, such as: - Limited movements - Lack of growth - Muscle cramps - Kidney problems and failures - Androgenic hormones, such as DHEA and testosterone, are also influenced by cholesterol synthesized in the body. This kind of hormone is essential for libido and also necessary for maintaining bone density and health. Hence, people who are exposed to cholesterol-lowering medications for a long period might experience androgenic hormone deficiency, which may lead to sexual problems, faster aging, and osteoporosis. - Progestin hormones regulate women's menstrual cycles and support gestation. A deficiency of cholesterol causes a limited amount of progestin, which increases the risk of irregular menstrual cycles and miscarriage. - Estrogens are critical for sexual development and, similar to progesterone, have a critical role in brain function. - Sterol (vitamin D) is converted in the liver and has hundreds of important immune-supporting functions. This hormone is also essential for calcium regulation in our bloodstream. A decreasing level of cholesterol will result in a deficit of vitamin D in our body, causing bone problems and other growth issues. 
How to maintain a proper level of cholesterol in our body - Drink a sufficient amount of water every day. - Limit consumption of caffeine, especially in the morning, since it inhibits the liver's production of cholesterol. - Increase your consumption of green leafy vegetables every day. Two servings per day should be the minimum amount. - Consume more healthy fats, such as those from raw nuts, and stay away from trans fats. - Decrease the amount of sugar in your daily diet. - Exercise sufficiently every day. Walking, jogging, and cycling are the best exercises for maintaining a balanced cholesterol level and thus preventing the risk of cardiovascular disease. - Have your cholesterol levels (LDL, HDL, and triglycerides) checked before the age of 35 if you are male and 45 if you are female. Repeat the test every five years for an accurate picture of your cholesterol level.
Limit these nutrients: fat, cholesterol, and sodium It is important to limit these nutrients. Eating too much fat, saturated fat, trans fat, cholesterol, or sodium may raise your risk for certain diseases, like heart disease, some cancers, or high blood pressure. Health experts recommend that you keep your intake of saturated fat, trans fat and cholesterol as low as possible as part of a nutritionally balanced diet. - Foods that are high in saturated fat include cheese, whole milk, butter, regular ice cream, and some meats. If your foods are prepared or processed with lard, palm oil, or coconut oil, they will also have saturated fat. Saturated fats tend to raise the level of cholesterol in your blood, which can put you at risk for heart disease. - Unsaturated fats do not raise blood cholesterol. Foods with unsaturated fats include olives, avocados, fatty fish, like salmon, and most nuts. Olive, canola, sunflower, soybean, corn, and safflower oils are high in unsaturated fats. Even though unsaturated fats don't raise blood cholesterol, all types of fat are high in calories and should be eaten in limited amounts. - Trans fats are in foods that have "partially hydrogenated" vegetable oils, which are found in some margarines, vegetable shortenings, crackers, candies, baked goods, cookies, snack foods, fried foods, and other
The operators of NASA's Mars Exploration Rover Opportunity intend to drive the rover into a valley where it will examine clay materials through its seventh Martian winter on the Red Planet. On 27 June, Opportunity resumed its study of Mars after around three weeks of reduced activity around solar conjunction. This is when the Sun's position between Earth and Mars disrupts communication between us and the rover; as a result, Opportunity is unable to transmit any data that it collects on the same day. At present, the long-lived rover is working hard examining rocks, using the alpha particle X-ray spectrometer on the end of its robotic arm, not too far from the western end of Marathon Valley – a notch in the raised rim of Endeavour Crater, which is around 22 kilometres (14 miles) in diameter. Having landed on Mars in 2004, Opportunity has explored the rim of Endeavour Crater since 2011. Now engineers and scientists will be sending the solar-powered Opportunity to Marathon Valley for several months to take advantage of a sun-facing slope thought to be brimming with potential science targets. Marathon Valley, which stretches three football fields in length, has been previously observed by the Compact Reconnaissance Imaging Spectrometer for Mars on NASA's Mars Reconnaissance Orbiter. The orbiting spacecraft detected clay minerals believed to hold evidence of wet environmental conditions in the Red Planet's past. Opportunity will investigate the clay-bearing deposits further. Currently, the rover is operating in a mode that avoids any use of its flash memory, which is capable of retaining data when Opportunity is powered down for the night. "Opportunity can continue to accomplish science goals in this mode," explains the rover's Project Manager John Callas. "Each day we transmit data that we [can] collect that day." "Flash memory is a convenience but not a necessity for Opportunity," Callas says. "It's like a refrigerator that way. Without it, you couldn't save any leftovers. Any food you prepare that day you would have to either eat or throw out. Without using flash memory, Opportunity needs to send home the high-priority data the same day it collects it, and lose any lower-priority data that can't fit into the transmission."
Poverty, ignorance, and desperation are not necessarily the fault of the poor, as any Irishman should know. After the conquest by Cromwell, the Irish Catholics and Presbyterians suffered under the Penal Laws designed to force them to join the Established Church. Some of these laws endured until 1920, and their effects persist in Ulster. The Penal Laws deprived Catholics of citizenship, land, education, political power, and the free practice of their religion. Consequently, Irish Catholics became aliens in their own land. They were abused, starved, and ridiculed. Their grievance against England festered for hundreds of years. Some Irish escaped to America, but others were sold as slaves or indentured servants. America treated them with disdain: No Irish need apply. Political cartoons represented the Irish as ape-like creatures. Literature and the press branded the Irish as traitors more loyal to the Pope than the Constitution. In the Southern States, when it was necessary to work in alligator-infested bayous, slaves were considered too valuable (according to Noel Ignatiev, author of "How the Irish Became White"), so Irishmen were hired. If something happened to them, it was no great loss. The Irish were kept poor, uneducated, and marginalized for much of American history. They worked and fought their way into the melting pot and now, for better or worse, seem no different from other whites. The same English landlords that suppressed the Irish ran colonies around the world on the backs of slaves. Slavery was the effective equivalent of the Penal Laws. It kept Africans poor, uneducated, and desperate. When England and later America banned slavery, Jim Crow Laws became the effective equivalent of the Penal Laws. When the Civil Rights movement ended Jim Crow Laws, James Crow, Esquire emerged, using dog-whistle terms like "the undeserving poor," "Law and Order," and "Entitlement Programs" to target programs that help the poor, including poor Blacks, break the cycle of poverty. Reverend William Barber documents in The Third Reconstruction that the same rich and powerful Americans who back conservative candidates continue to perpetuate the cycle of poverty that has plagued the poor for centuries. Arthur Powers describes how the population of rural Brazil was driven from their farms to the metropolitan slums until the land beneath their hovels appreciated to the point that wealthy developers pushed the slum dwellers to even more remote and unlivable quarters. The wealthy care more for profit than the commandment to "love your neighbors as yourself." Pope Francis sets the tone for pro-life Catholics by calling for an equitable distribution of the world's riches rather than their concentration in the accounts of the few. The cycle of poverty will continue until the rigged capitalistic system gives the poor and middle class access to their share of the world's wealth. Those who pass lavish tax breaks for the top one percent of the wealthy by definition impoverish the majority of humanity. Remember the questions, "When did I see you hungry and not feed you, or thirsty and not give you something to drink?" Siding with the top one percent puts you in this camp.
Though well-informed, this history of astronomy caters to the insider rather than the intrigued novice. Science journalist Croswell presents a history of the Milky Way focusing on the changing theories about its origin, age, size, and shape. He explains why some stars are more luminous than others and describes the discovery that key elements like helium, lithium, and hydrogen were formed "in the fiery aftermath of the big bang." In early chapters he offers simple, elucidating metaphors to make his sophisticated material more familiar. But this kind of translation is quickly abandoned, and the book contains too much math and physics and too little explanation of how the theories connect and what's at stake to appeal to readers with little background in astronomy. It becomes clear that, as he writes, the story of the Milky Way is a "deeply human story, full of colorful and controversial characters," but Croswell takes the stance of an insider rather than a journalist, providing only snippets and sketchy portraits. Some stories are fleshed out, like the collaboration of astronomers E. Margaret and Geoffrey Burbidge, William Fowler, and Fred Hoyle (commonly known as B²FH) on the theory that the elements originated in the stars; the Nobel Prize that went to Fowler alone for this work; and the obstacles women faced breaking down the sexist barriers in astronomy. Croswell's narrative of these events provides a rare and welcome balance to his zeal for technical detail. This work will leave readers feeling as though they are looking at the heavens through the wrong end of a telescope.
Lions: An introduction The lion is one of the largest members of the cat family. The lion has captivated human imagination since ancient times and has earned the name 'king of the beasts'. Today, the vulnerable population of fewer than 50,000 African lions ranges from the southern Sahara to Southern Africa, excluding the Congo rainforest belt. While the African lion may be vulnerable, its only living relative from another continent, the Asiatic lion (Panthera leo persica), numbers fewer than 400. Out of the eight sub-species recognised, only six sub-species of lion are alive in the wild today. The other two are claimed to be alive in captivity, but scientific evidence does not support the validity of these claims. The different sub-species of lion alive today are: 1. Asiatic lion, 2. West African lion, 3. North East Congo lion, 4. East African lion (Masai lion), 5. Southwest African lion (Katanga lion) and 6. Southeast African lion (Transvaal lion). Of the six sub-species of lion, the Asiatic lion is the one most vulnerable to extinction. Differences between Asiatic and African lions 100,000 years of separation has produced some variations between the Asiatic and the African lions. One may notice, when we compare them: 1. Asiatic lions are smaller, with sparser manes, than their African cousins. Male Asiatic lions weigh between 350-420 pounds, while females weigh between 240-365 pounds. On the other hand, male African lions weigh between 330-500 pounds (an 800-pound male holds the record) but the females weigh about the same as their Asiatic cousins. 2. In Asiatic lions, the longitudinal skin fold running along the belly is prominent and present in both sexes, while the skin fold is rarely seen in their African counterparts. 3. Asiatic lions have thicker tufts of hair on their elbows and tail to distinguish them from African lions. 4. The skull difference is perhaps the most interesting point. Fifty percent of Asiatic lions have two small apertures or holes (bifurcated infra-orbital foramina) that allow nerves and blood vessels to reach the eye, while there is only one infraorbital foramen in African lions. 5. Average pride strength in Asiatic lions is usually two or three, while African lion prides range from 5 to many. Male Asiatic lions do not form social groups with females and associate with female lions only when mating or sharing food. In fact, male Asiatic lions group together to defend their territory against rival males. Asiatic lions: A history of struggle The Asiatic lion is the only lion from a continent other than Africa. It is called the Indian lion in India and the Persian lion in the Middle East. Asiatic lions once roamed from northern Greece, across Southwest Asia, to central India. The last sighting of a lion outside India was in 1944, a dead lioness in Khuzestan province, Iran. Now, it is found only in the Gir forest of Gujarat, India. By the turn of the 20th century, only 13 lions (1907) were reported to have survived in Gir. With all effort, the Nawab of Junagadh banned lion hunting within his province. The effort proved worthy, and by 1936 the number had come up to 234. As per the 2006 census, there are 359 lions living in the Gir forest within an area of 545 square miles. Can you imagine the problems of more than 300 lions forced to live within an area of less than 550 square miles, and that too from a very small gene pool (all closely related)? The result: man-animal conflicts and inbreeding. 
Some lions venture so far in search of easy prey and better territory that they come into contact with humans. Some are poisoned for attacking livestock; some are electrocuted by farmers' fences; some fall into open wells. Inbreeding is also a serious concern: an inbred population can suffer from disease due to a weakened immune system, and inbreeding can even lead to infertility. Poaching is another concern. Many lions are killed each year for their skins, claws and bones, as eastern market demand is very high; lion bones and body parts are used as substitutes for tiger body parts. The biggest concern is the presence of the Maldharis, a vegetarian pastoral community living in the forest. The Maldharis set their livestock loose in the forest and disturb the natural forest food chain. They collect firewood from the forest and sometimes aid poachers for small tips. Relocating them outside the forest helps very little, as scarce fodder outside the reserve forces them to graze their livestock back in the same forest. Illegal mining has recently added another challenge.
Second home for the Asiatic lion
With so little space in the Gir Protected Area, there is a chance that the lions may die out completely if an epidemic sweeps the small population; consider the 1994 outbreak of canine distemper that killed nearly 1,000 Serengeti lions in northern Tanzania. Besides, reintroduction to establish an independent population outside Gir would help diversify the gene pool. Kuno-Palpur Wildlife Sanctuary, a 133-square-mile area in Madhya Pradesh, was selected as the second home. Asiatic lions had completely died out from this place in 1873. Now, in order to reintroduce them, more than twenty villages have been relocated. Abundant prey and an excellent geographical setting of windswept grasslands punctuated with trees and low shrub make the place ideal for lions, except for competition with a bigger cat, the tiger. Despite all the arrangements made, the Gujarat government is unwilling to give up even a small number (five to ten individuals) of its lions for their own better future. Zoo-bred lions from Hyderabad and Delhi have reportedly been proposed as substitutes for the Gir lions. It is doubtful whether lions will really survive in their second home.
The embryology, or developmental history, of animals offers unique proof of evolution. All multicellular, sexually reproducing organisms start their lives from fertilized eggs, which are single cells. As development progresses, two and then three germinal layers are produced in the embryo, and from these germ layers all the body organs and organ systems are derived. Observing that all multicellular organisms pass through a double-layered (gastrula) condition and then become three-layered, Haeckel (1866) postulated the 'Recapitulation theory' or 'Biogenetic law' of evolution. He meant that the embryos of highly evolved forms pass through the adult stages of their ancestral forms. For example, the double-layered stage is the adult condition of coelenterates. Moreover, the frog embryo becomes a tadpole larva, which is much like a fish. The biogenetic law is stated as 'ontogeny repeats phylogeny'. Ontogeny means the developmental history of a particular organism, while phylogeny means the developmental history of the broad group to which the organism belongs. The recapitulation theory is totally discarded nowadays, and its modified version, that 'embryos of higher organisms recapitulate the embryonic history of their ancestors', seems more viable. A comparative study of embryos within the vertebrate group makes the above statement clear: before the embryo of man manifests mammalian characters, it passes through the embryonic conditions of a fish, then of a frog, then of a lizard in succession. Similar is the case for a bird's embryo. Thus, the adults of two species of animals may resemble each other little, but their embryos do so strongly if they evolved from a common ancestor. Von Baer's Principle: Von Baer postulated the following principles in support of the embryological evidence: (i) General characteristics appear first during development; the specialized characters appear later. (ii) From the more general, the less general and finally the specialized characters appear. (iii) An animal during development departs progressively from the structure of ancestral forms. (iv) Young stages of an animal resemble the embryos of other groups of animals. Sometimes embryology helps in establishing the true taxonomic position of animals whose adult forms have undergone retrogressive development. For example, tunicates like Herdmania have a degenerate adult form, while the larva shows the much more advanced characters of urochordates. Similarly, Sacculina in its adult form is a degenerate, sac-like parasite on the body of crabs; it shows none of the crustacean characters that its nauplius and cypris larvae do. Hence, the study of embryology to ascertain the phylogeny of related groups of organisms helps greatly in understanding the path of evolution.
General Information and Causes
According to the DSM-IV-TR, a personality disorder is an "enduring pattern of inner experience and behavior that deviates markedly from the expectation of the individual's culture, is pervasive and inflexible, has an onset in adolescence or early adulthood, is stable over time, and leads to distress or impairment." The same definition can also apply to the schizophrenias. Personality disorders affect personality, namely relationships, social function, and mental abilities. The disorders are often on a spectrum, ranging from mild to severe cases. Those who suffer from very mild personality disorders lead normal lives. However, interference of the symptoms in daily life normally arises after periods of high stress or external pressure and can lead to impairments in emotional, psychological, and social functioning. The disorders are characterized by disturbances in:
- The ability to have successful personal relationships
- Appropriateness of range of emotion
- Self-perception, world-perception, and the perception of others, and
- Impulse control
The combination of these creates the personality disorder, which leads to the exhibition of external behaviors that differ from societal norms. This is why people who suffer from personality disorders often have difficulty maintaining relationships with others in society. Personality disorders can be caused by a variety of different factors, including – but not limited to – upbringing, personality and social development, and genetic and biological factors. However, because the symptoms arise during times of high stress, treatment centers on coping mechanisms. Furthermore, Dr. Sam Vaknin has stated that there are commonalities among those who suffer from personality disorders. These are self-centeredness, the presence of a victim mentality, a lack of empathy, the presence of manipulative or exploitative behavior, depression, a vulnerability or susceptibility to other mental disorders, a distorted or superficial understanding of self and of others' perceptions, a need to force the world to conform to the needs of the sufferer, and no delusions, hallucinations or thought disorders (except for periods in Borderline Personality Disorder). There are ten personality disorders listed in the DSM-IV:
- Antisocial Personality Disorder - Called sociopaths or psychopaths, these people show a lack of regard for the standards of local culture and do not get along with other people or abide by commonly accepted societal rules and regulations.
- Avoidant Personality Disorder - Socially inhibited, with feelings of inadequacy and sensitivity to criticism.
- Borderline Personality Disorder - Marked by rapid changes in mood and unstable relationships. Sufferers tend to lack a stable identity.
- Dependent Personality Disorder - The sufferer is unable to act on his or her own and has to rely upon other people. These people have little or no self-confidence, fear separation, and are often submissive.
- Histrionic Personality Disorder - The person is overtly emotional in inappropriate ways and circumstances and is almost theatrical in nature. The emotions exhibited shift rapidly and without warning.
- Narcissistic Personality Disorder - Lacking empathy and needing to be admired by others, these people often ignore other people and are hypersensitive to criticism.
- Obsessive-Compulsive Personality Disorder - Perfectionists with an inability to deviate from habit, having uncontrollable patterns of thought or action.
- Paranoid Personality Disorder - Marked by distrust of others and the feeling that others are plotting to cause, or are in the process of causing, harm to the sufferer. Patients are unable to forgive, which undermines any personal relationships.
- Schizoid Personality Disorder - A limited range of emotion and indifference toward other people.
- Schizotypal Personality Disorder - Extreme non-conformity to the point where eccentricity is harmful to the sufferer or others (belief in having magic powers leading to physical harm, etc.) or eliminates personal relationships. Eccentricity can be exhibited through appearance, behavior, or relationship style.
Treatment cannot address the causes of personality disorders because the causes are not definite. Common treatments are therapy sessions in which the patient is taught to take control of his or her life and to change or act differently toward a specific behavior. It is therefore said that sufferers must want to make the change for themselves in order to be healed. In such therapy sessions, psychosocial or Freudian techniques are implemented in order to determine whether or not the disorder stems from childhood trauma; then, cognitive-behavioral therapy is used. Often, a support system of therapy, familial and friendly support, and medication is used to further assist in treating personality disorders.
Lebelle, Linda. Personality Disorders. Retrieved April 17, 2007, from the Focus Adolescent Services Web site: http://www.focusas.com/PersonalityDisorders.html
Mental Help Net. (2001, October 21). Personality Disorders. Retrieved April 17, 2007, from the MentalHelp.net Web site: http://mentalhelp.net/poc/view_doc.php?type=doc&id=440&cn=8
The following is a transcription of the podcast, "Accommodations vs. Modifications: What's the Difference? (Audio)." In this NCLD podcast, Candace Cortiella speaks with Dr. Lindy Crawford about accommodations and modifications for students with learning disabilities (LD). Dr. Crawford is a member of the Professional Advisory Board at the National Center for Learning Disabilities. She is also an associate professor and the Ann Jones Endowed Chair in Special Education in the College of Education at Texas Christian University. And she's the author of NCLD's report, State Testing Accommodations: A Look at Their Value and Validity. Candace Cortiella: Dr. Crawford, thank you for joining us. Let's begin by having you provide our listeners with a brief description of what is meant by accommodation. Lindy Crawford: Accommodations are instructional or test adaptations. They allow the student to demonstrate what he or she knows without fundamentally changing the target skill that's being taught in the classroom or measured in testing situations. Accommodations do not reduce the learning or performance expectations that we might hold for students. More specifically, they change the manner or setting in which information is presented or the manner in which students respond. But they do not change the target skill or the testing construct. Let me give you an example. A student with a learning disability in reading may have difficulty reading the content and/or the questions on a history test. Therefore, he may not be able to demonstrate what he knows through reading, so a teacher or a test administrator may read the test aloud to him. Another example would be a student with Attention-Deficit/Hyperactivity Disorder (ADHD) who might not be able to concentrate on a classroom assignment if multiple distractions are present. And so the teacher may allow the student to work in a separate setting. In both of these examples, a change of presentation or a change of setting enables the students to demonstrate what they know without lowering the learning expectations, without lowering the performance expectations, and without changing the complexity of the target skill being taught or measured. Generally, a large number of accommodations can be grouped into five categories: Timing. For example, giving a student extended time to complete a task or a test item. Flexible scheduling. For example, giving a student two days instead of one day to complete a project. Accommodated presentation of the material, meaning material is presented to the student in a fashion that differs from the traditional one. Setting, which includes things like completing the task or test in a quiet room or in a small group with other students. And response accommodation, which means having the student respond orally or through a scribe, for example.
Muskrat (Ondatra zibethicus) (Photo by Linda Tanner)
In The Wild
The muskrat is a medium-sized semi-aquatic rodent with webbed hind feet and a flat tail, which it uses as a rudder. Despite its name, the muskrat is not a 'true' rat but a large member of the family of voles and lemmings. The name comes from the musky odour produced by gland secretions in the perineal area. It usually has a dark brown or red-brown coat. Muskrats are well adapted to swimming and to the demands of living in a wetland habitat: they are found in freshwater and saltwater marshes, lakes, ponds and rivers. The head and body length of the muskrat is between 229 and 325 mm, with a tail length of 180-295 mm. They weigh between 681 and 1,816 grams. The muskrat is native to North America but can now be found in many other parts of the world, having been introduced across much of South America, Europe and parts of Asia, where it can have a sizeable impact on local ecosystems. Muskrats normally live in family groups consisting of a pair (male and female) and their offspring. They appear to be mostly monogamous, and when spring arrives they become very territorial and will fight bitterly over territory and potential mates. When unexpectedly disturbed, the muskrat will utter a whining growl. Muskrat families take great care over the maintenance of their nests, which are built to protect themselves and their young from cold and predators. In streams or large ponds, muskrats will burrow into the bank and create an underwater entrance. In marshes, raised nests are built using vegetation and mud. In snowy areas, the entrances to these constructions are plugged with vegetation, which the muskrats replace every day. Muskrats also build feeding platforms in wetlands. They help maintain open areas in marshes, providing valuable habitat for many species of birds and mammals. Generally, muskrats are largely nocturnal and crepuscular (active at dawn and dusk), but they are occasionally seen in the day during winter. They feed mostly on a range of vegetation, with a particular fondness for cattails. They do not store food for the winter and have been observed taking food stored by beavers; the two species appear to often share shelter and food stores. Plant materials make up about 95% of their diet, but they also take small animals including fish, mussels, frogs, crayfish and small turtles. Muskrats follow trails they make in swamps and ponds, and when the water freezes they are able to continue following their trails under the ice. Muskrats are themselves heavily predated. As part of the ecosystem, they provide an important food resource for many species including mink, foxes, coyotes, wolves, lynx, bears, eagles, snakes, and larger hawks and owls. In common with most rodents, muskrats are prolific breeders. Females can have two or three litters a year of six to eight young each. The babies are born small and hairless, and weigh only about 22 g. The time taken to reach maturity varies according to the climate, with animals in colder areas taking longer. Populations appear to go through a regular pattern of rise and dramatic decline spread over a six- to ten-year period. Muskrats will live for up to 3 years in the wild and up to 10 years in captivity.
Muskrat and the Fur Trade
The muskrat's fur is thick, glossy and durable, making it a target for fur trappers. Tens of millions of muskrats have been trapped over the last 100 years. At the beginning of the twentieth century, the species began to be farmed across much of Europe and parts of Asia.
This led to numerous escapes and releases, producing naturalised populations that are still present today. A population became established in the UK, but great efforts were taken to eradicate it, and it is one of the few 'introduced' species in Britain that has been successfully eradicated. Today, the North American Fur Auctions report that muskrat pelts remain popular, along with coyote, largely due to consistently strong sales in Korea. Most of the muskrat pelts around today come from animals that have been trapped. Trappers commonly use 'drowning sets', where traps are set along the water's edge in a way designed to drown the muskrats and other semi-aquatic mammals, like mink and beaver, caught in them. Thomas Eveland's 'Jaws of Steel' (1991) states: "the muskrat flounders about on the surface until exhaustion and the weight of the trap overcome it – and then it drowns. … Drowning an animal by clamping a steel trap to its leg is anything but humane." Muskrats can take up to five minutes to drown in these traps. The fur trade's use of muskrat fur is truly cruel, and it is shameful that such cruelty continues in the 21st century. [Below: In 2011, Respect for Animals conducted an undercover investigation into trapping in the US. Pictured is a dead muskrat which had suffered horrific injuries to its tail]
Immune thrombocytopenia (ITP), previously called immune thrombocytopenic purpura or idiopathic thrombocytopenic purpura, is an autoimmune disorder that occurs when the body attacks its own platelets and destroys them too quickly. Platelets are a part of blood that helps control bleeding. ITP affects at least 3,000 children under the age of 16 each year in the United States. While ITP often arises after a viral infection, for the majority of cases the cause is unknown. Luckily, acute ITP, the most common form, usually goes away on its own over the course of weeks or months, sometimes without treatment. Chronic ITP appears most frequently in adults, but occasionally is seen in children. This form of ITP is more serious, lasting for years and typically requiring specialized follow-up care. Children and young adults with immune thrombocytopenia are treated through the Blood Disorders Center at Dana-Farber/Boston Children's. To learn more about ITP, continue reading below.
Immune thrombocytopenia (ITP) is an autoimmune disorder (meaning the immune system attacks the body's own tissues) that occurs when the body attacks its platelets, a part of the blood that helps control bleeding by forming blood clots. There are two kinds of ITP. Acute thrombocytopenia is the most common form of ITP, accounting for more than 90 percent of cases and occurring between the ages of 2 and 6. Chronic thrombocytopenia is more common in adults but can occur in children.
The first step in treating your child is forming an accurate and complete diagnosis. ITP can usually be identified through testing; when all tests are completed, doctors will be able to outline the best treatment options.
There are a number of treatments that can help increase platelet levels in children with immune thrombocytopenia (ITP), but there is no cure. The majority of children with ITP get better gradually on their own in a few days, weeks or sometimes months, with or without treatment. If treatment is necessary, the most common forms are gamma globulin (also known as intravenous immunoglobulin, or IVIG) and Rho (D) immune globulin (also known as WinRho®). Children with ITP also may receive antibiotics to treat infections.
The Dana-Farber/Boston Children's Cancer and Blood Disorders Center is a world leader in ITP research. We are currently conducting a number of studies to improve the diagnosis and treatment of ITP and other platelet disorders. For many children with rare or hard-to-treat conditions, clinical trials provide access to new treatments.
More than 80 percent of children with treated ITP recover on their own in days, weeks or months. Fatal brain hemorrhages rarely occur with steroid, intravenous Rh immune globulin or intravenous gamma globulin therapy. Recurrence of ITP is uncommon, but it can occur up to several years after the initial episode and may be associated with another viral infection.
Can my child participate in sports or other athletic activities? The sports and activities that your child can participate in will depend on her platelet count (the severity of the ITP). Your child's physician can make specific recommendations on the types of activities that may be appropriate for her depending on her platelet levels.
Eczema, also referred to as atopic dermatitis, is an inflammation (reddening and swelling) of the skin which is very itchy. The severity of the disease can vary: in mild forms the skin is dry, hot and itchy, whilst in more severe forms the skin can become broken, raw and bleeding. In the United Kingdom, up to one fifth of all children of school age have eczema, along with about one in twelve of the adult population. The most common type of eczema is atopic dermatitis. It is an allergic condition that makes your skin dry and itchy, and it is most common in babies and children. Factors that can cause eczema include other diseases, irritating substances, allergies and your genetic makeup. Some people who have eczema scratch their skin so much it becomes almost leathery in texture. Others find that their skin becomes extremely dry and scaly. Eczema permanently resolves by age three in about half of affected infants; in others, the condition tends to recur throughout life. Most affected individuals have their first episode before age 5. Eczema is not contagious. Eczema can affect people of any age, although the condition is most common in infants. About 1-2 percent of adults have eczema, and as many as 20 percent of children are affected. Eczema can occur on just about any part of the body; however, in infants, eczema typically occurs on the forehead, cheeks, forearms, legs, scalp, and neck. Sometimes the itching will start before the rash appears, but when it does, the rash most commonly occurs on the face, knees, hands or feet. It may affect other areas as well. Atopic eczema affects approximately 15-20% of young children in the UK. It clears up in approximately 70% of children by the time they reach their teens, and in many it largely clears up by 4-5 years of age. If it persists into adult life, it usually affects the body creases, the face and hands. Soap removes dirt but also removes natural oils from the skin, making the skin dry, irritated and itchy. Try not to scratch the irritated area on your skin even if it itches. Treatment of weeping lesions may include soothing moisturizers, mild soaps, or wet dressings. Moisturizing gloves can be worn while sleeping. Emollient bath oils should be added to bath water, and then suitable agents applied after patting the skin dry. Chronic thickened areas may be treated with ointments or creams that contain tar compounds, corticosteroids (medium to very high potency), and ingredients that lubricate or soften the skin. Mild anti-itch lotions or topical corticosteroids (low potency) may soothe less severe or healing areas, or dry scaly lesions. Systemic corticosteroids may be prescribed to reduce inflammation in some severe cases. Light therapy using ultraviolet light can help control eczema. UVA is mostly used, but UVB and Narrow Band UVB are also used. Ultraviolet light exposure carries its own risks, particularly eventual skin cancer from exposure. Tea-tree oil in a gel or diluted form has good antiseptic and antibacterial effects, and is helpful in calming down inflammation. Non-conventional medical approaches include traditional herbal medicine and others.
Eczema Treatment Tips
1. Emollients are necessary to reduce water loss from the skin, preventing the dryness normally associated with eczema.
2. Steroids act by reducing inflammation and are used in most types of eczema.
3. Ultraviolet light treatment and stronger medication may be considered for very severe eczema.
4. Avoid substances that stress your skin.
5. Diet restrictions and chemical skin-drying agents may also be offered, but their success is controversial.
6. Use warm water with mild soaps or non-soap cleansers when bathing your child.
7. Avoid using scented soaps.
8. Apply cool compresses on the irritated areas of your child's skin to ease itching.
9. Keep your child's fingernails short to minimize any skin damage caused by scratching.
10. Try having your child wear comfortable, light gloves to bed if scratching at night is a problem.
One of the fields in paleoanthropology I especially like is cognitive and behavioral evolution. In particular, a challenging matter is the space that neandertals occupy in this field. There are many questions regarding their cognitive abilities and symbolic behavior. The large size of their brains suggests that neandertals would have had a genetic capacity for complex cognition, but traditionally it was considered that they did not reach 'the level' of modern humans. We believe that neandertals had a form of language; otherwise they would never have been able to organise themselves to the extent that they did. Assuming that neandertals were capable of some symbolic thought, can we imagine their behavior when they confronted the encroaching Homo sapiens, a species with different capabilities? Half a century ago it was firmly believed that symbolic thought was a key distinction between modern humans and neandertals. Our knowledge has increased a lot since then, and now we have some evidence suggesting that neandertals did have symbolic thought. Here I collect some of the key findings: 1) In 2012 some scientists wrote an article in Science about the possibility of neandertals being the authors of some paintings in the Spanish Cueva El Castillo (Region of Cantabria): a round red disk and hand stencils, which are fairly basic but still suggest a certain level of mental sophistication. However, we have one potential barrier to the knowledge we are building up. Because of the re-dating of sites, the dates around the crucial period of neandertal presence are changing, mostly being pushed back. For instance, the re-dating may push back the latest dates of neandertals in the Iberian Peninsula, making them less likely to be the authors of some findings. 2) Another study, released in PNAS in 2009, suggests a symbolic use of marine shells and mineral pigments by neandertals, also in Spain (Region of Murcia). 3) Again in Spain, we are lucky to have in Atapuerca probable evidence of pre-neandertal rituals from 430,000 years ago. 4) Neandertal remains are mainly associated with the Mousterian stone culture (300 Ka-35 Ka), which belongs to the Middle Palaeolithic and is more advanced than the Acheulean (1.6 Ma-100 Ka). But in fact there was little change over thousands of years, indicating that they were not highly innovative. 5) It is also often assumed that the Châtelperronian (36 Ka-32 Ka), one of the earliest industries of the Upper Palaeolithic, was invented by neandertals, maybe influenced by modern humans they had contact with. The key site for the Châtelperronian is the Grotte du Renne (France), where some personal ornaments and other artefacts were found associated with neandertal remains. In their last period, the neandertal populations were clearly less 'classic neandertal-type' in their physical appearance. A controversial hypothesis says that the Châtelperronian could actually consist of hybrid neandertal-sapiens populations, thus no longer being a representative case for evaluating the cognitive abilities of 'classic' neandertals. This article illustrates the cultural exchange that may have taken place between modern humans and neandertals 40,000 years ago. 6) Now, consider an interesting piece of research from 2013 by the University of Oxford.
It says that, although neandertals' brains were similar in size to those of their contemporary modern humans, their brain structure was different: larger areas of the neandertal brain (compared to the modern human brain) were given over to vision and movement, and this left less room for the higher-level thinking required to form large social groups. It would seem that the neandertals thought in a different way to us despite having a similar-sized brain, and maybe their 'symbolic thinking' went in a different direction to ours, one that left no visible signs. 7) Finally, an engraving with a geometric pattern, found at Gorham's Cave in Gibraltar in 2012 and published in September 2014, may be the most compelling evidence for neandertal art so far. At the time of the neandertals, modern humans were doing pretty much the same things: hunting, gathering, using similar tools, eating similar food. For some reason, our species survived to be able to do all those things and the neandertals did not. So, why did neandertals disappear? Was it a case of intellectual capacity? Was it just circumstances and luck? Was it a combination of violence, absorption and climate change? Actually, a little piece of neandertal still lives in our DNA… But this is a different story for another post.
The German Huguenot Museum in Bad Karlshafen
History - 1. How the Huguenots got their name
The term "Huguenot" is usually said to be derived from the German word "Eidgenossen" ("eiguenot": confederates). In 1520 in Geneva the word was used to designate the early adherents of the Reformation, and later the followers of John Calvin.
Term of abuse for conspirators
In France the word "Huguenot" first appeared as a term of abuse in a letter written in Périgueux (in Guyenne) in 1551. The iconoclasts were described as "this vile race of Huguenots". The derogatory word Huguenot was also applied to the Protestants in the area of Tours on the Loire when, in 1560, the Catholic party led by the Guise family claimed that the Huguenots had conspired to abduct the young king Francis II from his castle in Amboise.
A ghost called Hugo
Hugues Capet (941-996)
A contributory factor in the appearance of this term may have been a legend related in the town of Tours on the Loire. Legend had it that the ghost of the French king Hugo Capet flew through the streets at night in the vicinity of the Hugo Gate. As the Protestants held their services in secret at night, the townspeople called them "little Hugos" conspiring against the state and the Catholic Church.
An honourable name for French Protestants
Whatever its origin, from 1560 on "Huguenot" was the word used to designate the Protestants of France. In the course of time the word "Huguenot" gradually lost its negative connotations and became an honourable name.
© The German Huguenot Museum 2021
The purpose of the gastrointestinal (GI) tract is to extract fluid and essential nutrients from the food we eat and to eliminate wastes. All the way along the tract, food is propelled by involuntary rhythmic muscular contractions called peristalsis. From the mouth, ingested food proceeds down a straight tube called the esophagus into the stomach. It is here that the process of digestion begins, with stomach acid being secreted to break down food. Enzymes that also facilitate the breakdown of chemicals in food, permitting absorption into the bloodstream, are secreted here and in subsequent sections of the GI tract. From the stomach, food passes into the small intestine, a relatively thin, long (12 feet) tube with three distinct portions: duodenum, jejunum, and ileum. Enzymes from the pancreas and the gallbladder enter at the duodenum and have specific roles in the digestion of food. Generally several hours later, the remaining food passes from the ileum into the large intestine or colon. The appendix is a pouch of uncertain function close to the junction between the large and small intestines. Water and some remaining nutrients are extracted in the large intestine, before the remains are excreted through the rectum as stool. Most of the time, the GI tract functions without problems, but there are a number of ways in which the system can go awry. Irritable bowel syndrome (IBS) is a rather ill-defined syndrome said to affect 15% of people in Western countries. For unclear reasons, it appears to affect women more often than men. The essential elements of IBS are chronic abdominal pain associated with either constipation (constipation-predominant IBS) or diarrhea (diarrhea-predominant IBS); some patients alternate between constipation and diarrhea. IBS itself is not a life-threatening condition, although it can be debilitating. The diagnosis of IBS should be based on a set of internationally recognized symptoms known as the Rome II Criteria (see box under alosetron (LOTRONEX)) and requires the exclusion of treatable causes of the patient's symptoms, such as ulcerative colitis. This is especially important if the following signs of ulcerative colitis are present: onset after age 50, rectal bleeding, fever, weight loss, or anemia. There are no abnormal laboratory tests or changes in the cells of the GI tract on biopsy that can objectively establish the diagnosis of IBS. In fact, the diagnosis of IBS can only be made if all tests for other diseases that might explain the patient's symptoms are negative. For young, otherwise healthy patients, extensive testing may not be necessary. The FDA has approved drugs for both diarrhea-predominant and constipation-predominant IBS. The former, alosetron (LOTRONEX), had to be removed from the market after it caused serious constipation and a condition of decreased blood flow to the intestine called ischemic colitis. The latter, tegaserod (ZELNORM), has also been associated with ischemic colitis and severe, disabling diarrhea, and it is barely effective.
Instead, we recommend that you manage IBS through a combination of dietary and drug treatments targeted at your particular symptoms. There is also a report of a successful multidisciplinary approach to this disease that includes psychological counseling.
Practice music theory and train your sight-reading skills. Learn all the important topics of music theory, starting with simple notes in the treble clef for total beginners, up to identification of exotic scale modes in the mezzo-soprano clef. These topics are covered:
- Key signatures up to 7 accidentals of the major or minor scale
- Intervals, including compound intervals (up to a double octave). In identification mode, double sharps or double flats can be used.
- Chords, including triads, sixths, sevenths, extended chords (9ths, 11ths), suspended chords, open voicing, inversions…
- Scales, including all scale modes of major or minor scales (modes of the harmonic or melodic scale), Neapolitan scales, and pentatonic scales
- Diatonic chord degrees on a major or minor scale, as triads or seventh chords
- Rhythm exercises with up to sixteenth notes, or dotted and double-dotted notes. Complete bars with notes or rests, or identify the note lengths of given notes (even the breve)
- Practice notes in these clefs: treble, bass, alto, baritone (C or F), French violin, mezzo-soprano, soprano, tenor
- Various note naming schemes: English (CDEFGAB), German (CDEFGAH), Latin (DoReMiFa…)
This app helps you to prepare for Levels 1-5 of ABRSM*; however, it does not cover all the topics. Please check the official syllabus. *This app is not affiliated with the Associated Board of the Royal Schools of Music. All names, copyrights and trademarks belong to their respective owners.
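To make the interval topic above concrete, here is a minimal sketch of the kind of drill the app describes: naming the simple interval between two notes by counting semitones. This is illustrative Python written for this description, not the app's actual code; quality spelling (augmented vs. diminished), compound intervals and clef handling are deliberately left out, and the note format ('C', 'F#', 'Bb') is an assumed convention.

```python
# Name the simple interval between two notes by semitone count.
SEMITONES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

INTERVAL_NAMES = [
    "unison/octave", "minor 2nd", "major 2nd", "minor 3rd", "major 3rd",
    "perfect 4th", "tritone", "perfect 5th", "minor 6th", "major 6th",
    "minor 7th", "major 7th",
]

def semis(note: str) -> int:
    """Semitone value of a note: an uppercase letter plus '#'/'b' accidentals."""
    return SEMITONES[note[0]] + note.count("#") - note.count("b")

def interval(low: str, high: str) -> str:
    """Name the simple interval from `low` up to `high`."""
    return INTERVAL_NAMES[(semis(high) - semis(low)) % 12]

print(interval("C", "E"))    # major 3rd
print(interval("F#", "C#"))  # perfect 5th
print(interval("D", "Bb"))   # minor 6th
```

A real drill would also have to track the spelled letter distance (so that, say, an augmented 4th and a diminished 5th are named differently), which is exactly the kind of refinement the app's interval exercises cover.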
Bullying is characterized by an imbalance of power between two students. If two students are equally engaged in an altercation, this is not a bullying situation but a "conflict" between the two students. When an imbalance of power is present, it can be seen in small ways over a long period of time or in a large way all at one time. A large incident is much easier to see, recognize, and address: the person doing the bullying has been vocal and physical toward the victim, and might threaten the other student with bodily or psychological harm and find ways to fulfill the threat. The victim has no means of escape or is unable to fight back, and is physically or emotionally harmed. Anyone who witnesses such events should report them immediately to administration. The online reporting form can also be used for this purpose. Bullying that takes place over a long period of time is much more difficult to detect, and the student is harder to protect. It could include name calling, making the student the focus of jokes, cyberbullying, constantly bumping into the student, taking the student's personal belongings, threats of harm if they tell, etc. In this situation the student victim often is quiet, tries not to make trouble, and feels he/she can handle the problem alone. The wall of silence is represented by the student not telling any adults, and peers not stopping the bullying or telling an adult. Adults in the area may not even notice the bullying going on. That is why we have the online reporting form. No one need know you have reported what you have seen or are experiencing. If you are a student, parent, or ANYONE who is aware of someone being bullied (it could be you), click on the link below to let the school know what is going on. The form gets emailed directly to our Principal for immediate follow-up. CLICK HERE TO REPORT BULLYING AT SCHOOL
The Evolution of Timekeeping: Water Clocks in China and Mechanical Clocks in Europe
Early in history, humans sought methods to tell time. A concept rather than a physical entity, time eluded accurate measurement for many centuries. One of the first successful timekeeping devices was the water clock, which was perfected in China in the eighth century. It wasn't until nearly seven centuries later that mechanical clocks began to make their appearance. Mechanical clocks not only made timekeeping much more precise, which was important for scientific purposes, but also introduced it to the masses when centrally located clock towers equipped with bells loudly struck the hour. One solar day spans one rotation of the earth on its axis. This natural unit of time is still the basic unit of timekeeping. For a variety of reasons, however, humans from past to present have desired smaller increments for determining the time. Thousands of years ago, humans began to separate the day into sections. At first, they assigned such broad categories as late morning or early afternoon, or identified the time of day by its association with mealtimes. By 2100 b.c., Egyptians had begun dividing the day and night each into 12 parts. Derived from the Greek word hora, an hour denoted the interval between the rising of specific stars at night. Since the periods from dawn to dusk and from dusk to dawn were not identical, changing from season to season and even day to day, the length of an hour changed accordingly. As the days became longer or shorter, the time covered by these so-called temporal hours varied. For example, 12 daytime temporal hours in the summer might cover 14 hours of daylight, whereas the following 12 nighttime temporal hours would be crowded into the remaining 10-hour period. Many societies used ancient sundials to measure time intervals. Originally employed to identify the changing of seasons, they were further developed to measure increments within a day. Sundials rely on the sun to cast a shadow onto a marked platform. As the sun moves across the sky, the shadow advances across the platform and denotes the temporal hour. Chinese inventors developed the first method for measuring time consistently and without reliance on sunlight, day length, or star movement. Since about 3000 b.c., the Chinese used water clocks to gauge the passage of time. Water clocks are also known as clepsydrae, from the Greek word for "water thief." A simple water clock is an apparatus that slowly drips or runs water from a small hole in one vessel into another that is stationed below it. By marking the water level in the lower vessel after a day had passed and then dividing it into equal portions, the clockmaker could use the device to tell time fairly accurately. Tests have indicated that early water clocks were correct to within 15 minutes each day. Water clocks in China continued to progress into more sophisticated and accurate devices. Their development took a leap forward in the eighth century, during the K'ai-Yuan reign, when a Buddhist monk named I-Hsing (I-Xing), along with Liang Lin-Tsan, an engineer and member of the crown prince's bodyguard, began work on a clock escapement to control the speed and regularity of the clock's movements. The clock, a bronze model of the celestial sphere (a representation of how the stars appear from Earth), used drops of water to move the driving-wheel mechanism and keep track of hours, days, and years.
The clock was also connected to a bell and a drum to provide a sound alert every 15 minutes. Another notable clock in Chinese history was the astronomical clock of Chang Ssu-Hsün, completed around 979. Built into a 33-foot (10-m)-tall tower, the clock used water to power a complicated escapement mechanism that was similar in appearance to a Ferris wheel with water buckets in the place of seats. A water tank dripped water into one bucket at a time. As the bucket filled, it became heavy enough to trip a lever and rotate the wheel. When the wheel rotated, the next bucket moved under the water tank for filling. Chang also included in his clock design an armillary sphere, which consisted of rings to mimic planetary orbits. In addition, the clock mechanism triggered 12 jacks, or puppets, to appear in sequence to ring bells and hit a drum to announce the time. The next major advancement in timekeeping came with the development of mechanical clocks, probably in the late thirteenth century. These clocks depended on neither the sun nor water to keep time. Some used pendulums, while other, smaller clocks relied on repeated winding to run. English records indicate that a mechanical clock was operating in a Bedfordshire church in 1283. Similar reports refer to five other mechanical clocks in English churches before 1300. Within the next 50 years, mechanical clocks became common throughout Europe. While temporal hours and early timekeeping methods were sufficient for many societal uses, humans continued their quest for better modes of telling time. Early astronomers and mathematicians in particular needed accurate time increments that remained static from day to day and season to season. Without precise measurements they could not determine speed, which was crucial for navigational and astronomical observations and applications. The advent of the water clock did much to change the way humans viewed time. Now the time of day did not depend on whether the sun was sufficiently able to penetrate the clouds and cast a shadow onto a sundial or whether the night sky was dark enough to view the stars' positions. An hour could now represent a constant length of time, and could be further divided into smaller fragments. When I-Hsing and Liang invented the escapement, they greatly refined clock performance. Chang then took I-Hsing and Liang's contribution to the next level by making an even more intricate escapement, which was named the "heavenly scale." Water clocks continued to be popular in China and many other countries well into the fourteenth century. (Currently, The Children's Museum of Indianapolis boasts the largest water clock in North America with a 26.5-foot [8-m]-tall device.) Despite improvements to the mechanism over the centuries, water clocks never attained perfection. They repeatedly needed resetting to the correct time, as well as near-constant maintenance. Winter was particularly trying: during the colder months, if the water was not replaced with mercury or some other liquid with a lower freezing point, the water would turn to ice and the clocks would stop. In Europe, the development of clocks took a different turn. Instead of looking to water as a power source, Europeans took another path. According to History of the Hour: Clocks and Modern Temporal Orders, "The principle of the Chinese escapement is pivoting balance levers that stabilized a stop-and-go motion.
The principle of the European escapement, which employs the centrifugal force of an oscillating inert mass, does not resemble it in any way whatsoever." These weight-driven mechanical clocks injected time into European society. Clock towers sprang up in cities and loudly rang the hour for all the residents to hear. The earliest tower clocks were rather inaccurate (they lost or gained up to two hours each day) and had only one hand to denote the general time of day. For years, clockmakers struggled to regulate the mechanism's oscillation without much success. This problem did not deter the public from demanding better timekeeping devices. By 1500, clockmakers found a way to make the mechanism small enough that the wealthy could purchase models for their homes. These clocks, many of which were used as alarm clocks, kept time by springs that were wound about once a day. The clocks kept time fairly well, although hours went by more and more slowly as the spring unwound. With timekeeping becoming commonplace, societal dependence grew. Meetings, church services, and other appointments could now be scheduled at certain hours, instead of general times of day. Scientists could begin to make much more accurate time measurements, physicians could perform simple diagnostic tests such as determining pulse rate, and navigators could use time to determine their position at sea. As time became more important, people began demanding more accurate clocks. Despite persistent attempts to perfect mechanical clocks, it wasn't until 1656, when Dutch mathematician Christiaan Huygens (1629-1695) used pendulums as a timekeeping mechanism, that clocks were able to tick off minutes accurately. Huygens's original design was correct to within a minute a day. By 1670 William Clement of London had refined the pendulum clock to keep time to within a second each day. These improvements set the stage for later advancements: by 1761 John Harrison (1693-1776) had produced a marine chronometer accurate to 0.2 seconds per day, and by 1889 Siegmund Riefler's pendulum clock was true to 0.01 seconds a day. High-performance quartz-crystal clocks appeared in the 1930s, followed by the atomic clocks of more recent years.
LESLIE A. MERTZ
Dohrn-van Rossum, G. History of the Hour: Clocks and Modern Temporal Orders. Translated by T. Dunlap. Chicago: The University of Chicago Press, 1996.
Maran, S., ed. The Astronomy and Astrophysics Encyclopedia. New York: Van Nostrand Reinhold, 1992.
Needham, J., Wang Ling, and D. de Solla Price. Heavenly Clockwork: The Great Astronomical Clocks of Medieval China. Published in association with the Antiquarian Horological Society. Cambridge: Cambridge University Press, 1960.
National Institute of Standards and Technology. "A Revolution in Timekeeping." http://physics.nist.gov/GenInt/Time/revol.html.
National Institute of Standards and Technology. "Earliest Clocks." http://physics.nist.gov/GenInt/Time/early.html.
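As a back-of-the-envelope illustration of the progression described in the article above, the daily drift figures it quotes can be converted into relative errors. The short sketch below is plain Python written for this summary, not part of any source; the drift figures come straight from the text.

```python
# Convert each clock's quoted daily drift into a relative error,
# expressed as "1 part in N". All drift figures are from the article.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

drifts_seconds_per_day = {
    "early Chinese water clock (~15 min/day)": 15 * 60,
    "early European tower clock (up to 2 h/day)": 2 * 60 * 60,
    "Huygens pendulum, 1656 (~1 min/day)": 60,
    "Clement pendulum, 1670 (~1 s/day)": 1,
    "Harrison chronometer, 1761 (0.2 s/day)": 0.2,
    "Riefler pendulum, 1889 (0.01 s/day)": 0.01,
}

for name, drift in drifts_seconds_per_day.items():
    print(f"{name}: 1 part in {SECONDS_PER_DAY / drift:,.0f}")
```

Run as written, this prints "1 part in 96" for the early water clock and "1 part in 8,640,000" for Riefler's pendulum, making it easy to see that each milestone improved accuracy by roughly one to two orders of magnitude.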
NATURAL
Wool is a protein fiber formed in the skin of sheep, and is thus one hundred percent natural, not man-made. Since the Stone Age, it has been appreciated as one of the most effective forms of all-weather protection known to man, and science has yet to produce a fiber which matches its unique properties.
RENEWABLE
As long as there is grass to graze on, every year sheep will produce a new fleece, making wool a renewable fiber source. Woolgrowers actively work to safeguard the environment and improve efficiency, endeavoring to make the wool industry sustainable for future generations.
BIODEGRADABLE
At the end of its useful life, wool can be returned to the soil, where it decomposes, releasing valuable nutrients into the ground. When a natural wool fiber is disposed of in soil, it takes a very short time to break down, whereas most synthetics are extremely slow to degrade.
NATURAL INSULATOR
Wool is a hygroscopic fiber. As the humidity of the surrounding air rises and falls, the fiber absorbs and releases water vapor. Heat is generated and retained during the absorption phase, which makes wool a natural insulator. Used in the home, wool insulation helps to reduce energy costs and prevents the loss of energy to the external environment, thus reducing carbon emissions.
BREATHABLE
Wool fibers are crimped and, when tightly packed together, form millions of tiny pockets of air. This unique structure allows the fiber to absorb and release moisture, whether from the atmosphere or perspiration from the wearer, without compromising its thermal efficiency. Wool has a large capacity to absorb moisture vapor (up to 30 percent of its own weight) next to the skin, making it extremely breathable.
RESILIENT & ELASTIC
Wool fibers resist tearing and can be bent back on themselves over 20,000 times without breaking. Due to its crimped structure, wool is also naturally elastic, so wool garments have the ability to stretch comfortably with the wearer and then return to their natural shape, making them resistant to wrinkling and sagging. Wool therefore maintains its appearance in the longer term, adding value to the product and its lifespan. Wool is also hydrophilic (highly absorbent and liquid-retaining), and so dyes richly while remaining colourfast, without the use of chemicals.
MULTI-CLIMATIC / TRANS-SEASONAL
Thanks to its hygroscopic abilities, wool constantly reacts to changes in body temperature, maintaining its wearer's thermophysical comfort in both cold and warm weather.
EASY CARE
The protective waxy coating on wool fibers makes wool products resistant to staining, and they also pick up less dust as wool is naturally anti-static. Recent innovations mean wool items are no longer hand-wash only; many wool products can now be machine-washed and tumble dried.
ODOUR RESISTANT
Wool is far more efficient than other textiles at absorbing sweat and releasing it into the air before bacteria have a chance to develop and produce unpleasant body odour.
A SAFE SOLUTION
Wool is naturally safe. It is not known to cause allergies and does not promote the growth of bacteria. It can even reduce floating dust in the atmosphere, as the fiber's microscopic scales are able to trap and hold dust in the top layers until vacuumed away. Thanks to its high water and nitrogen content, wool is naturally flame-retardant and has a far higher ignition threshold than many other fibers; it will not melt and stick to the skin causing burns, and produces fewer of the noxious fumes that cause death in fire situations.
Finally, wool also has a naturally high level of UV protection, much higher than that of most synthetics and cotton. – Source: Campaign for Wool.
Diamonds, scientifically classified into 4 types
Diamonds are generally considered either "crystals of only pure carbon" or "gemstones composed of a single element". In reality, however, natural diamonds take in other elements as they grow through the crystallization process deep within the earth. The most common is nitrogen, which exists in abundance on the earth. Diamonds are classified into two types depending on the presence of nitrogen, and each of those is further classified into two sub-types, for a total of four types.
Classification by presence of nitrogen
Type I (the common type among natural diamonds, containing nitrogen)
Ia: Containing aggregated nitrogen atoms. These diamonds range from colorless to yellow. Most natural diamonds are of this Ia type.
Ib: Containing single or scattered nitrogen atoms. These diamonds range from dark yellow to brown.
Type II (containing no nitrogen)
IIa: Containing almost no impurity elements such as nitrogen and boron. They are found colorless, brown and pink.
IIb: Containing boron. They have the unique characteristic of conducting electricity; the fancy blue diamond is a famous example.
Natural type II diamonds constitute a mere 1-2%. 98-99% of natural diamonds are classified as type I, which contains nitrogen. Very rarely, natural type II diamonds are found which contain almost no impurity elements, accounting for only 1-2% of diamonds. They are considered to possess prominent clarity and beauty. Large diamonds of this type II are especially rare and historically famous, appearing for example in the collections of royal families overseas.
Most laboratory-grown diamonds are type II
As stated above, type II diamonds are very rare among natural diamonds. On the other hand, when laboratory-grown diamonds (synthetic diamonds; hereinafter abbreviated) are colorless, they are usually type IIa, containing only pure carbon. This is because they are produced under completely controlled conditions, enabling crystallization without any impurities. Furthermore, the elements other than carbon can be adjusted during the creation of laboratory-grown diamonds, enabling the production of color diamonds, namely the types Ib and IIb which are rare among natural diamonds. Type Ia, which is common in nature, is not seen among laboratory-grown diamonds. The 4Cs are well known as the evaluation standard for diamonds, but why not also have a look at these "types" of diamonds to enhance the enjoyment of viewing and selecting both natural diamonds and laboratory-grown diamonds?
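To summarize the classification just described, here is a minimal sketch (illustrative Python written for this article, not a gemological standard) that maps the presence and arrangement of nitrogen, and the presence of boron, onto the four types:

```python
# Map the impurity elements described above onto the four diamond types.
# The conditions paraphrase the article; this is illustrative only.
def diamond_type(has_nitrogen: bool,
                 nitrogen_aggregated: bool = False,
                 has_boron: bool = False) -> str:
    if has_nitrogen:
        # Type I: contains nitrogen (98-99% of natural diamonds).
        return "Ia" if nitrogen_aggregated else "Ib"
    # Type II: no nitrogen (a mere 1-2% of natural diamonds).
    return "IIb" if has_boron else "IIa"

# A colorless lab-grown diamond, crystallized without impurities:
print(diamond_type(has_nitrogen=False))                  # IIa
# A fancy blue diamond, conducting electricity thanks to boron:
print(diamond_type(has_nitrogen=False, has_boron=True))  # IIb
```

The sketch also makes the article's closing point visible: a colorless stone grown under fully controlled conditions falls out as type IIa, the rarest category in nature.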
The essay title refers to the delay between Hamlet discovering the murder of his father and the avenging of him. Hamlet learns of the murder from his father's ghost in Act 1 Scene 4, and he is enraged and swears immediate revenge. When he calms down he decides that it is unwise to take action until he is sure that the ghost speaks the truth. The play put on in Act 3 Scene 2 confirms it is true, and yet still Hamlet does nothing. Hamlet does eventually kill his uncle in Act 5 Scene 2, when it is too late, as Hamlet's own death is brought about. It is this sad storyline that gives the play its description as a "revenge tragedy". Full of melodrama and violence, revenge tragedies were very popular in England towards the end of the 16th century. Apart from Shakespeare's "Hamlet", one of the most popular plays was "The Spanish Tragedy" (1589) by Thomas Kyd. In this play, the main character Hieronimo seeks to avenge the murder of his son. There is a delay between this decision and the murder, and this is due to practical problems in getting to the murderer. Another popular play was "Antonio's Revenge" (1602) by John Marston. In this, the revenge is delayed to enhance its brutality. All revenge tragedies have to have a delay; this is essential, as otherwise the play would end too quickly. For this reason, the Jacobean audiences wouldn't have noticed the delay in Hamlet. The fact that there appears to be no obvious cause for the delay wasn't pointed out until 1736, by Thomas Hanmer. Since then, several different critics have sought the reason for Hamlet's delay. Hamlet is a philosopher, and there are some basic questions that all philosophers ask, which none can answer with certainty. Several of these involve the future: what happens after death? Is there an afterlife, a definitive heaven and hell? Throughout the play, Hamlet seems very unsure about his own beliefs concerning death and the afterlife. He puts off the murder when he sees Claudius praying, as he worries that dying in this fashion will send Claudius to heaven. If this were to happen, would he, Hamlet, be avenged? In his "To be or not to be" soliloquy, Hamlet debates the pros and cons of committing suicide: "To die, to sleep/ To sleep, perchance to dream – aye, there's the rub/ For in that sleep what dreams may come…" This suggests that Hamlet is worried about what would happen if he committed suicide, which is the "unforgivable sin". He reasons that if death were merely eternal sleep, who would suffer in life? The only reason that people stay alive is for "the dread of something after death". In the end he admits: "conscience does make cowards of us all/ And thus the native hue of resolution/ Is sicklied o'er with the pale cast of thought." In this passage, Hamlet is referring not to conscience as it is thought of today, but to consciousness. Here he compares determination to a boldness, which can be imagined as a bright colour, being made paler by the influence of thought. This is certainly true of Hamlet in the play: it is noticeable that the only time action is taken is when no thought precedes it. The murder of Polonius, for example, took place very much in the heat of the moment. Even the Queen proclaims it a "rash and bloody deed!" All this seems to imply that Hamlet believes that there is a form of life after death, be it heaven and hell or an eternal sleep filled with dreams. However, towards the end of the play he begins to voice the opinion that after death there is nothing.
“Alexander died, Alexander was buried, Alexander returneth to dust; the dust is earth…” This shows the influence of Montaigne, a French writer who died whilst Shakespeare was in his late twenties, and whose works leant towards Epicureanism. This belief system is based on the teaching of the philosopher Epicurus, who believed that everything, including the soul, is composed of atoms, and death means the redistribution of these atoms. Montaigne believed that after death, the body became nothing more than dust. Therefore, no belief systems need to be followed. There is no point to doing anything, so there would be no point in taking revenge. These are clearly the lines along which Hamlet starts to think – “dust” is one of the most frequently used words in the play. However, rather than being the reason for his procrastination, this belief could be used as an excuse for not completing his task. The reasons for Hamlet’s delay have been a source of interest for many critics, and several different theories have been put forward. Samuel Taylor Coleridge, a poet of the Romantic era, remarks on: “Shakespeare’s way of conceiving characters out of his own intellectual and moral faculties, by conceiving any one intellectual or moral faculty in morbid excess and then placing himself, thus mutilated and diseased, under given circumstances.” This refers to Shakespeare’s writing style, in which the main character in a tragedy, the tragic hero, suffers from a ‘fatal flaw’ – an idea derived from the Greek theatre. Tragic heroes were usually good people who had one distinguishing characteristic, and when this was played upon, their entire character would change. For example, Macbeth’s fatal flaw was: “Vaulting ambition, which o’er-leaps itself / And falls on the other”. For Macbeth, it was the Witches who drew out his ambition, and for Hamlet it was the Ghost who drew out his fatal flaw. However, Hamlet’s actual fatal flaw is undecided, although many critics feel it is irresolution. Coleridge states: “In Hamlet I conceive him to have wished to exemplify the moral necessity of a due balance between our attention to outward objects and our meditation on inward thoughts…” Coleridge believes that Hamlet’s flaw is that he cannot distinguish between the real and imaginary world. He thinks on the matter too much and because of this he is unable to actually complete the task that he set out to do. In agreement with this, a German Romantic poet and critic by the name of August Wilhelm von Schlegel says “Hamlet”: “is intended to show how a calculating consideration which aims at exhausting, so far as human foresight can, all the relations and possible consequences of a deed, cripples the power of acting…” This again voices the opinion that Hamlet spends far too much time thinking about the act as opposed to actually doing it – he is irresolute. Some critics feel that this may be because Hamlet, despite being 30 years old, still has very childlike tendencies: “He was a full grown adult, yet he still attended school… it took him a very long time to stop grieving about his father because he didn’t want to move past that part of his life” This may be due to Hamlet’s position in life. After all, he is the Prince of Denmark, and must have very little to occupy his time. He has not fought in battles; all he has achieved is further education and experience of culture in other countries.
He has been taught how to think but not how to act, and as a result, he does not know what to do when put in such difficult circumstances: “The time is out of joint. O cursed spite/ That I was ever born to set it right!” Hamlet is aware that he has been put in a situation that is not suited to him – he is not like “Hercules”. Far from suggesting he is childish, this explains how Hamlet, as a philosopher, is beyond his time. Voicing another point of view, the German poet Goethe says of Hamlet: “A lovely, pure and most moral nature, without the strength of nerve which forms a hero, sinks beneath a burden which it cannot bear and must not cast away.” Some people view Hamlet’s lack of action as a display of his moralistic nature. They think his innocence and gentleness are repulsed by the mere thought of committing such a heinous act. It is possible that Shakespeare did wish Hamlet to have a moralistic character. Shakespeare lived in the Elizabethan age, when religion was still commonplace, and revenge was prohibited by ecclesiastical law. However, the general Elizabethan view was that, in order for the world to continue properly, revenge must be sought in some cases, such as in the protection of personal honour. Indeed, Elizabeth I was anxious that if she were ever to be murdered her death would be avenged, and it is likely Shakespeare had this in mind whilst writing the play. There were two types of avenger: a scourge and a minister. A scourge was an evil person who, because he sought revenge, would damn himself to hell, which is what God wished. A minister was appointed by God to take revenge in the name of justice, but this could only happen at the appointed time, or he too would go to hell: “But heaven hath pleased it so/ To punish me with this, and this with me/ That I must be their scourge and minister” Hamlet sees himself as a minister, but he has not been given an appointed time, and this may be partly the reason for his delay. Here he talks of having to avenge his father as a punishment. It is not certain what the punishment is for, but it may be related to the idea that Hamlet has feelings for his mother. More modern theories on Hamlet’s delay are far more dubious. Many critics have looked to Freud’s psychoanalysis for an explanation. For example, the critic Ernest Jones has suggested that the Oedipus complex is the main reason for Hamlet’s delay. Either Hamlet cannot kill Claudius because he identifies with him, as they both love Gertrude, or killing Claudius may mean admitting to himself that he is in love with his mother, which is something that disgusts him greatly. When remembering the closeness between his mother and father he exclaims “Heaven and earth, must I remember?” This may simply be because remembering his father when he was alive is painful, or because at the time it caused Hamlet much jealousy of his father. It is obvious that Hamlet is disgusted by the marriage between his mother and his uncle: “O, most wicked speed, to post/ With such dexterity to incestuous sheets” This suggests that he really would horrify himself if he realised that he was in love with his mother, and so this supports the psychoanalytic theory. Despite this, these explanations still seem unlikely, as the critics are trying to analyse imaginary characters. Some of the above theories seem far more likely than others, although none seem to have hit on the right explanation exactly. The critic A. C.
Bradley calls him: “The Hamlet who scarcely speaks to the King without an insult, or to Polonius without a gibe; the Hamlet who storms at Ophelia and speaks daggers to his mother….” Although Hamlet may look like he is not killing the King because of his morals, close reference to the text shows that his other actions do not appear so good-hearted. For example, the murder of Polonius, or the sending of Rosencrantz and Guildenstern to their deaths in England without any sign of regret. It has also been suggested that Hamlet was weak, but the way he lugged around Polonius’s body after murdering him is not a sign of frailty. These actions back up A. C. Bradley’s opinion that the main reason for Hamlet’s delay is not that he is a procrastinator by nature, but that he is suffering from melancholia. The Elizabethans and Jacobeans believed in the “four humours”, which were four fluids that affected your health. The lack or excess of one would put your humours out of balance and you would become unwell. The balance of the humours would also affect your mood. An imbalance of blood would lead to happiness, yellow bile to anger, phlegm to calmness, and black bile to sadness or melancholia. Melancholia is not simply depression, but the uncontrolled swinging between moods. This theory seems the most likely, as Hamlet may put off the murder of Claudius, but his other actions do not seem to fit the other suggested reasons. In his book “Dyalogue of Comforte Against Tribulacyion”, Sir Thomas More explains how there are two types of sufferers – those who are willing to accept comfort and those who refuse it. Hamlet is of the second group, and members of this category are: “so drowned in sorrow that they fall into a carelesse deaddelye dulnesse regarding nothing…” This description appears to suit Hamlet, and it supports the theory that he suffers from melancholia. His melancholia could be thought of as a kind of madness at times, and perhaps this is a deliberate ploy of Shakespeare’s; as the players act out a play within a play, Hamlet pretends to be mad whilst actually being mad. Evidence for this is found by close reference to the text. It is noticed that when Hamlet is sane his speech is written in structured verse, but on pretending to be mad, his speech loses this structure and becomes prose. However, in the scene in the graveyard when he is with Horatio and the gravedigger, his words are also written in prose, even though he is not pretending. This could be Shakespeare’s way of showing that at points during the play, Hamlet’s melancholia is so severe that it could be said he is mad. One example from the text is in Act 3 Scene 4, when Hamlet is reproaching his mother and he sees the ghost. When the ghost first appeared, the guards and Horatio saw him as well as Hamlet. This time the ghost can be seen only by Hamlet, and this may be due to his guilty conscience, as he is the “tardy son”. In conclusion, the theory put forth by Bradley does appear the most likely. Evidence to back this up is found in Hamlet’s first soliloquy, “O that this too too sullied flesh would melt”. At this point in the play, he has no idea that his father did not die from a snake bite. Despite this, he is already contemplating suicide: “that the Everlasting had not fixed/ His canon ‘gainst self-slaughter” Even under the circumstances this reaction seems over the top. His “unmanly grief”, as Claudius calls it, may be due to his melancholia. It also explains his obsessive attention to detail, his swinging between moods and his hypochondriasis.
This is one theory for which evidence can be found throughout the entire play, and which explains all of Hamlet’s actions.
The emergence of the modern nation-state was particularly advanced in Spain, England and France (see “The modern nation-state” 8/14/2013), with the result that by the end of the fifteenth century, these nations had developed a significant capacity for conquest. The centralized Western European nation-states were still no match for China or Japan, but they had developed the capacity to conquer the indigenous empires and societies of America. The Spanish conquest of America was aided by environmental factors. The steel shields and swords of the Spanish were more advanced than the bronze and stone weaponry of the indigenous peoples, who had not discovered iron. And the Spanish had horses, the Sherman tanks of pre-modern warfare, whereas in America large mammals had become extinct during human colonization. The Spanish conquest was further aided by the indigenous peoples’ relative lack of immunity to the diseases carried by the European conquerors, a consequence of less contact among populations in America than in Africa, Asia, and Europe (see “What enables conquest?” 8/9/2013). The Spanish imposed a colonial system characterized by forced labor and the exportation of gold and silver to Spain. The bullion was used by Spain to purchase manufactured goods from Northwestern Europe, thus facilitating the commercial expansion and agricultural modernization of Northwestern Europe. These dynamics created a European-centered world-economy that encompassed Western Europe as its core and Latin America and Eastern Europe as its periphery, a territory much larger than that of the Chinese world-empire. And it increased the economic and military power of the centralized states of Britain and France, which after 1750 began a conquest of vast regions of Africa and Asia, seeking to expand even further the geographical territory of the European-centered world-economy (see “What is a world-system?” 8/1/2013; “The origin of the modern world-economy” 8/6/2013; “Conquest, gold, and Western development” 8/8/2013). Thus, we can see in sum the dynamics that enabled Western European conquest and domination of the world, which reached its culmination during the twentieth century. The common interests of the monarchs and a rising merchant class created the modern nation-state, a centralized state with an advanced capacity for conquest, though not as advanced as the empires of East and Southeast Asia. Spain proceeded to conquer the empires and societies of America, leading to the formation of a European-centered world-economy, which further increased the capacity for conquest of the Western European states, particularly Britain and France. After 1750, the European nation-states, increasingly powerful, undertook a project of conquest and domination of vast regions of Asia and Africa, which was dialectically related to the modernization of industry and further increased the power of the Western European nation-states. But the European nation-states would find formidable resistance in the world-empires of East and Southeast Asia. The region had developed food production early, and its empires were advanced (see “Food production and conquest” 8/12/2013). For centuries, China was by far the largest and most advanced world-empire, and its conquests had included the empire of Vietnam. As a result of its considerable strength, European powers deferred invasions of China until the nineteenth century.
Because of European invasions beginning in 1839, China was compelled to accept treaties that led to her partial de-industrialization, but China was never conquered, colonized, and peripheralized like most of Southeast Asia, South Asia, Africa, Latin America, and the Caribbean (Fairbank 1986; 1992; Frank 1979). For its part, Japan was not invaded by the European powers during the period of the expansion of the European-centered world-economy from 1750 to 1914. As a result, Japan experienced “independent national development” (quoted in Frank 1979:153). Its project for a Japanese-centered world-system in Asia clashed with the European-centered world-system in the twentieth century, and it was brought to an end by the Japanese defeat in World War II and the subsequent occupation by the United States, leading to Japan’s incorporation as a core nation in the European-centered world-economy. France’s invasions of Vietnam were selective, and its conquest of Vietnam was partial. French troops had landed in Da Nang in central Vietnam in 1858, but imperial troops compelled the French to withdraw. Beginning in 1859, the southern region was occupied by French troops, and Cochin China was developed as a colony directly administered by the French. As a result, in the south, French settlers developed plantations, and Saigon emerged as a commercial and industrial center. And in the northern region, Hanoi was attacked and several cities along the Red River were occupied by French troops in the 1880s. The French protectorate of Tonkin in the north in effect functioned as a French colony, although Cochin China was more attractive to most French settlers and investors. But most of the empire of Vietnam, stretching between Tonkin and Cochin China and including the imperial capital of Hue, had not been conquered by the French. The Vietnamese emperor ceded political influence over this central region to the French, and it became the French protectorate of Annam, with the Vietnamese imperial court and bureaucracy functioning as a puppet government. To be sure, the emperor was compelled to concede to the transformation of the countryside to the production of rice for export and to provide labor for the plantations, thus fulfilling the economic goals of the French colonial project. Nevertheless, because of the indirect form of French rule, which was in effect a concession to the Vietnamese emperor, the peripheralization of Annam was less thorough than in Cochin China (Duiker 2000:9, 12-13, 42, 110-11; see “French colonialism in Vietnam” 4/25/2013). The ceding of political influence and territory to the French by the emperor was opposed at the outset by a faction in the imperial court and by the Confucian scholar-gentry class, many of whom favored continued military resistance against French aggression. A movement of opposition to French colonialism began immediately. It would be a movement not only in opposition to French colonialism but also in opposition to the collaboration with French colonial rule by the Vietnamese imperial court. This will be the subject of our next post. Duiker, William J. 2000. Ho Chi Minh. New York: Hyperion. Fairbank, John King. 1986. The Great Chinese Revolution: 1800-1985. New York: Harper & Row. __________. 1992. China: A New History. Cambridge, MA: The Belknap Press of Harvard University Press. Frank, Andre Gunder. 1979. Dependent Accumulation and Underdevelopment. New York: Monthly Review Press.
Key words: Third World, revolution, colonialism, neocolonialism, imperialism, democracy, national liberation, sovereignty, self-determination, socialism, Marxism, Leninism, Cuba, Latin America, world-system, world-economy, development, underdevelopment, colonial, neocolonial, blog Third World perspective, Vietnam, French colonialism, French Indochina, Cochin China
Health Protection, Southern Region
SMALLER EUROPEAN ELM BARK BEETLE, Scolytus multistriatus (Marsham)
Importance. - This beetle is the prime vector of the Dutch elm disease fungus, which has destroyed millions of American elms since its introduction into the United States. The beetle attacks all native and introduced species of elms.
Identifying the Insect. - Adults are reddish-brown beetles about 1/8 inch (3 mm) long. The underside of the posterior is concave and armed with a prominent projection, or spine, on the undersurface of the abdomen. The larvae are typical white or cream-colored, legless grubs, about the same size as adults.
Beetle feeding at twig crotch.
Identifying the Injury. - Beetles excavate a 1- to 2-inch (25 to 50 mm) straight egg gallery parallel with the wood grain. Larval mines run roughly perpendicular to the egg gallery. The result is a design resembling a long-legged centipede on the inner bark and wood surface. Symptoms of the disease are described under "Dutch elm disease."
Biology. - Smaller European elm bark beetles overwinter as larvae under the bark and develop into adults in the spring, emerging after the leaves expand. Adults feed at twig crotches of healthy elms, infecting the tree with Dutch elm disease. They then fly on to other elms for breeding; the trees they attack have usually been weakened by drought, disease, or other stress factors. After boring through the bark, the beetles excavate their egg galleries, grooving the inner bark and wood surface in the process. When larvae are full-grown, they construct pupal cells at the end of their larval mines. New adults emerge by boring directly through the bark, leaving it peppered with tiny "shot holes." There are two generations annually.
Control. - The most effective method of reducing losses is probably the removal of dead and dying elms and the pruning of dead and dying limbs. Several chemical insecticides may be applied as preventive sprays or to kill beetles before they spread to uninfested trees.
Anthropologists use human biological evolution to answer the question of what makes humans different from animals. They use fossils, cultural remains and the study of DNA as evidence supporting the development of humanity. Physical anthropologists study human biological evolution. Palaeontology – the study of fossils. Archaeology – the study of cultural remains.
Jacques Boucher de Crèvecoeur de Perthes (1788–1868): found stones shaped into tools and weapons – primitive weapons less advanced than what can be created today, which shows evolution. Neanderthals did not evolve their tool making; they survived for hundreds of thousands of years but eventually died out.
Charles Darwin (1809–1882): studied wildlife on the Galapagos Islands and found that within each group of plant or animal there was variation. Natural selection: animals and plants adapt to their environment to survive and produce similar offspring – "survival of the fittest". It was thought that humans followed the same patterns of evolution. In 1924 Raymond Dart discovered the fossil of a child in South Africa, Australopithecus africanus, and subsequent discoveries showed an evolutionary pattern.
The genetic makeup of humans and other primates varies by only 1–2 percent, so why are we the dominant species? Features we share with primates: opposable thumbs; three-dimensional vision (allows us to judge distances); children who remain dependent for a long time and require a lot of care to learn and develop; a highly developed brain which allows us to learn and think (though the human brain is more developed); social behaviour (more advanced in humans); a capacity to be aggressive and territorial.
There are many different opinions as to which factor is the most important in human evolution: bipedalism – the ability to walk upright over long distances and perform tasks while standing – and the ability to communicate complex and abstract ideas through language. It is thought the combination of using tools, hunting in a group and communicating with language led to the rapid growth of the human brain, along with our ability to share and cooperate with others and our development of symbols and art.
Three things set humans apart: 1) the brain; 2) the cognitive process (advanced reasoning, problem solving, the complexity of our thinking); 3) personality.
The central core (found in all vertebrates) controls basic functions (breathing, eating). The cerebrum sets us apart from other species: it controls human senses, thoughts, language and memory. Using information to draw conclusions is uniquely human. Deductive reasoning (general to specific), e.g. all humans have brains – Sonia is a human – therefore Sonia has a brain. Inductive reasoning (specific to general), e.g. most men enjoy sports – I am a man – therefore I probably enjoy sports.
Humans problem solve throughout the day and can use reasoning to help solve problems: 1. identify the problem; 2. develop a strategy to solve the problem (trial and error, hypothesis, rule of thumb, insight); 3. carry out the strategy; 4. determine if the strategy worked.
Complexity of our thinking: humans are able to think about what others are thinking, and to think within different time frames – we can remember the past and consider what might happen in the future.
Personality: the characteristics and behaviours that make us unique. It is thought that our personality is shaped by our genetics and environment (nature–nurture). Animals may appear to have personality traits, but this may be more the result of instinct and conditioning. Sociologists characterize a human as different from animals because of human culture.
Culture: the abilities, ideas and behaviours people have acquired to become members of society. Symbols: something or someone that represents something else; symbols have a particular meaning for people (Canadian examples: O Canada, hockey, the Canadian flag). Human culture (beliefs, ideas, behaviours) is constantly evolving and changing; it is through interactions in society that culture develops. To be part of human culture we need to cooperate, have laws/rules and have a capacity for knowledge (we can learn and teach others).
The phenomenon called El Niño can disrupt weather patterns around the world. In 1998 it caused devastating floods and mudslides in California. Around the same time, it also caused these floods across the South. But it all started thousands of miles from each location. Picture the equatorial Pacific: that’s South Asia and Australia on the left and South America on the right. Pacific trade winds normally blow from east to west, pushing warm surface water with them. The warm water evaporates, adding moisture to the air and bringing on the annual monsoons to the region. In the eastern Pacific, as the surface water is pushed westward, cold water wells up from the deep to replace it. The cold water helps keep the air and the South American coast dry. But sometimes the trade winds stop. Scientists don’t know precisely why, but when they do, the warm water moves back east. That’s when El Niño takes hold. The pattern of rainfall is reversed: Australia and South Asia suffer drought, and coastal South America is hit by storms. But warm water in the eastern Pacific causes other changes as well. High-altitude winds, called jet streams, circle the planet. El Niño can alter the path of those winds, driving Pacific storms to California. As the jet stream winds continue across the continent, they can keep the colder Arctic temperatures at bay. That can make for a warmer winter in the Northeast and Northwest. That’s why in 2010, when El Niño once again brought flooding to California, farther up the west coast in Vancouver the Winter Olympics were hampered by unusually warm weather. It was all thanks to El Niño.
Besides torching forests and houses, wildfires throughout the United States are also releasing a lot of smoke and particulate matter into the air. A map created using data from a NASA satellite shows the location and amount of pollution that wildfires raging across the West have spewed into the skies above the United States. The new map shows relative levels of tiny particles called aerosols, which have an important impact on weather and climate and are unsafe to breathe at certain concentrations. It was created from information gathered by the Suomi National Polar-orbiting Partnership satellite on June 26. The highest concentrations of aerosols appear reddish-brown, while the lowest appear light yellow. Heavy concentrations of smoke and aerosols are visible to the northeast of the North Schell, Dump and Wood Hollow fires in Nevada and Utah. Thick smoke plumes from wildfires across Colorado have moved east and south into the plains states. Farther south, in Texas and New Mexico, the map shows particulates from fires there, like the Whitewater-Baldy Complex fire, the largest in New Mexico's history. The instrument on the satellite that measures aerosols, the Ozone Mapper Profiler Suite, works by analyzing the amount of light scattered and reflected by the atmosphere; smoke and dust reflect much more radiation than clear skies. Researchers who've analyzed the data used to make the map said that the western wildfires have affected air quality as far away as the East Coast.
You already know about the North Atlantic Conveyor current. Briefly: The major ocean currents happen because the equatorial ocean is warmer, and since water (unlike land) can move (though not as fast as air) the dissipation of this heat across the surface of the Earth results in warm water moving, at the surface, north or south away from the Equator, where it loses its heat and finds its way back to the equatorial regions, usually as deeper, cooler water. Conveniently, this process also involves increasing the salinity of the water far from the equator, as evaporation leaves the remaining water saltier. This water is therefore both cold and dense, so it sinks, drawing the warm surface water into the evaporation regions. Something like this is happening at a small scale around all the oceans, but the density-driven conveyor is the biggest driver of ocean currents, most significant with respect to weather, and most famous in the North Atlantic. With global warming, the fresh water budget and distribution in the northern latitudes, in the Atlantic, changes, with more fresh water coming out of the Arctic and off of Greenland. This freshens up the hypersaline engine of the Atlantic Conveyor, also known as the Atlantic Meridional Overturning Circulation (AMOC). When that engine slows or shuts down, the currents in the entire North Atlantic, and beyond, change. Here is the number one reason this is important (though number two may be more important, I’ll get to that in a moment). You know how England is warm and Maine is cold, though they are both really far north? London, Saskatoon, and Adak are all at about the same latitude. Paris, Quebec City, and Thunder Bay. Northern Europe is warmish, and habitable, even in Scandinavia, because of the heat that the AMOC transfers from ocean to land. If the AMOC shuts down or moves really far south, Scandinavia, which is at the same latitude as Hudson Bay, will act more like Central Canada, which it does not do today. Visiting London from Minneapolis in the Winter now means going to a warmer (if dreary and foggy) place. Without the AMOC, it will be more like going to central Canada. Groupings of cities like these make it look like there is an equivalence across different longitudes at a given latitude. This is not true. The ocean, even without the AMOC, will still warm Western Europe. But now, there is a gradient of warmth from eastern North America over to Europe, where a mostly non-freezing winter shifts north to a degree that is nothing short of spectacular. Without the AMOC, that shift will be modest. And, interior areas in Eurasia, such as Moscow, will also cool down (though relatively not as much). Newly published research tells us something new and troubling about AMOC deterioration. Current climate models suggest that this may happen, but it is unclear to what degree and when. Physical evidence shows the actual real-life weakening of AMOC in recent years. So, reality seems to be outpacing the models. Some have suggested that this means that AMOC varies a lot, and will likely swing partly out and back in. Others are not so sure. The recent research identifies a bias in the generally used climate models that causes AMOC to be more stable and long-lasting, under global warming, than it might be in real life. When the model is run with and without the bias corrected, you get very different results. This is a preliminary finding. The model has not been run enough times, and a few other things that are usually done have not yet been done.
But the results are interesting enough that it is getting some serious attention. One of the world’s experts on this topic, Stefan Rahmstorf, has written this up on RealClimate: The underestimated danger of a breakdown of the Gulf Stream System. The original research is here, but you may need a subscription. I should mention that the collapse of the AMOC that happens when this model is run occurs in the somewhat distant future. That makes it worse, of course, because even more people will be living in, and depending on, the affected region than today. But it also allows us to ignore the problem because, hey, who cares about what happens to our children anyway, right? Oh, and on that other thing that could happen if AMOC shuts down. This is speculative, but we do know that in the past large areas of ancient versions of the Atlantic Ocean and other seas have essentially died, become anoxic over large areas, so they become sources of dead matter rather than edible fish and stuff. This is how many of the major oil supplies we exploit today formed. I would imagine that shutting off the relatively restricted North Atlantic basin from much of the global circulation would be a first step in killing the ocean. So, there goes that food supply, and possibly, that source of oxygen. You know, for eating and breathing and stuff.
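P.S. for the quantitatively inclined: if you want to see why salinity matters as much as temperature in that "engine," here is a quick back-of-envelope sketch using a linearized equation of state. The reference values and coefficients are order-of-magnitude textbook numbers I picked for illustration, not anything from the paper discussed above.

```python
# Linearized seawater density: rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0)).
# All constants below are rough, illustrative textbook values.

RHO0 = 1027.0        # reference density, kg/m^3
ALPHA = 2.0e-4       # thermal expansion coefficient, 1/K
BETA = 7.6e-4        # haline contraction coefficient, 1/psu
T0, S0 = 10.0, 35.0  # reference temperature (deg C) and salinity (psu)

def density(temp_c: float, salinity_psu: float) -> float:
    """Approximate seawater density from temperature and salinity."""
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0))

print(density(20.0, 36.0))  # ~1025.7: warm subtropical surface water, stays on top
print(density(2.0, 35.5))   # ~1029.0: cold, salty subpolar water, dense enough to sink
print(density(2.0, 34.0))   # ~1027.9: the same cold water, freshened; noticeably lighter
```

The third line is the whole story above in one number: add enough fresh water from the Arctic and off Greenland, and the cold end of the conveyor loses some of the extra density that makes it sink.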
Students will be able to:
- Evaluate search algorithms
- Intentionally access high-quality information
Essential questions:
- Why is it important to know how search engines work?
- How do search engines affect the way I find information?
Materials:
- Not All Search Engines Are Alike handout
- Search engine results packets
Preparation:
- Search Google, Bing and Yahoo! for five terms or phrases from a current topic in your class—the more specific the terms, the better. Print off the first two pages of each search engine’s results for each term to make 15 packets.
Vocabulary:
- search engine [surch en-juh-n] (noun): a computer program that searches documents, especially on the World Wide Web, for a specified word or words and provides a list of documents in which they are found
- algorithm [al-guh-rith-uh m] (noun): a step-by-step procedure, often involving complex mathematics, that search engines use to find and rank data
- directory [dih-rek-tuh-ree] (noun): an organizing unit in a computer's file system for storing and locating files
- index [in-deks] (noun): a method of sorting data by creating keywords or a listing of the data
In this lesson, students will learn how different search engines work. They will analyze results provided by three of the more popular search engines and evaluate their effectiveness, culminating in an advertisement for the search engine they judge best.
1. Start by asking students how often they use search engines. Which ones do they use most? What are some of the most common problems they experience with search engines? How have they solved some of these problems?
2. Distribute the Not All Search Engines Are Alike handout. Read the handout as a class. (Note: If you don’t have time, you could assign the handout reading as homework the night before.) After students have read the article, divide the class into small groups and have them discuss the questions on the handout. Then ask a spokesperson from each group to summarize their discussions to the class.
3. Next, list the five terms you used to create your search engine results packets on the board. Keep students in their groups, and distribute one search engine results packet to each group. Provide time for the groups to review the search results. Then facilitate a discussion using the following questions:
- Did the three search engines provide results in the same order? If not, how did the orders differ?
- Which search engine provided the most understandable page titles?
- Did any of the search engines provide information without the user having to click on a link? Which one(s)?
- Did all the search engines provide descriptive information about the page titles? Did this information help you decide if the webpage was a source you were looking for?
- Did all the search engines provide images of the term or phrase? If not, which one(s) didn’t? Were the images useful?
- Did any of the search engines also include ads? Which ones? Were the ads distracting?
- Were all the headings on each search engine’s first page of results relevant to the topic? How about the second page? Where in the list did the results seem to go off topic from the searched term or phrase?
- Of the three search engines tested, which one do you think provided the best search results and why?
4. Finally, have students use their answers to the above questions to create an advertisement explaining why their favorite search engine is better than others.
Alignment to Common Core State Standards
Cite strong and thorough textual evidence to support analysis of what the text says explicitly as well as inferences drawn from the text.
Determine a central idea of a text and analyze its development over the course of the text, including how it emerges and is shaped and refined by specific details; provide an objective summary of the text.
Determine the meaning of words and phrases as they are used in a text, including figurative, connotative, and technical meanings; analyze the cumulative impact of specific word choices on meaning and tone.
Delineate and evaluate the argument and specific claims in a text, assessing whether the reasoning is valid and the evidence is relevant and sufficient; identify false statements and fallacious reasoning.
Cite strong and thorough textual evidence to support analysis of what the text says explicitly as well as inferences drawn from the text, including determining where the text leaves matters uncertain.
Determine two or more central ideas of a text and analyze their development over the course of the text, including how they interact and build on one another to provide a complex analysis; provide an objective summary of the text.
Determine the meaning of words and phrases as they are used in a text, including figurative, connotative, and technical meanings; analyze how an author uses and refines the meaning of a key term or terms over the course of a text.
Integrate and evaluate multiple sources of information presented in different media or formats (e.g., visually, quantitatively) as well as in words in order to address a question or solve a problem.
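Optional extension for teachers comfortable with Python: the sketch below illustrates the "index" vocabulary term with a tiny inverted index, the core data structure a search engine builds before it can answer queries. The sample documents and queries are invented for illustration and are not part of the handout.

```python
# A tiny inverted index: maps each word to the set of documents containing it.
from collections import defaultdict

docs = {
    1: "el nino disrupts weather patterns",
    2: "search engines rank weather pages",
    3: "engines build an index of keywords",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

def search(query: str) -> set:
    """Return the IDs of documents containing every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = set(index[words[0]])
    for word in words[1:]:
        results &= index[word]  # keep only documents matching all words
    return results

print(search("weather"))        # {1, 2}
print(search("engines index"))  # {3}
```

Students can see at a glance why a search engine never rereads every page for each query: the index is built once, and each lookup is just a set intersection.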
Typewriter, any of various machines for writing characters similar to those made by printers’ types, especially a machine in which the characters are produced by steel types striking the paper through an inked ribbon with the types being actuated by corresponding keys on a keyboard and the paper being held by a platen that is automatically moved along with a carriage when a key is struck. The invention of various kinds of machines was attempted in the 19th century. Most were large and cumbersome, some resembling pianos in size and shape. All were much slower to use than handwriting. Finally, in 1867, the American inventor Christopher Latham Sholes read an article in the journal Scientific American describing a new British-invented machine and was inspired to construct what became the first practical typewriter. His second model, patented on June 23, 1868, wrote at a speed far exceeding that of a pen. It was a crude machine, but Sholes added many improvements in the next few years, and in 1873 he signed a contract with E. Remington and Sons, gunsmiths, of Ilion, New York, for manufacture. The first typewriters were placed on the market in 1874, and the machine was soon renamed the Remington. Among its original features that were still standard in machines built a century later were the cylinder, with its line-spacing and carriage-return mechanism; the escapement, which causes the letter spacing by carriage movement; the arrangement of the typebars so as to strike the paper at a common centre; the actuation of the typebars by means of key levers and connecting wires; printing through an inked ribbon; and the positions of the different characters on the keyboard, which conform almost exactly to the arrangement that is now universal. Mark Twain purchased a Remington and became the first author to submit a typewritten book manuscript. The first typewriter had no shift-key mechanism—it wrote capital letters only. The problem of printing both capitals and small letters without increasing the number of keys was solved by placing two types, a capital and lowercase of the same letter, on each bar, in combination with a cylinder-shifting mechanism. The first shift-key typewriter—the Remington Model 2—appeared on the market in 1878. Soon after appeared the so-called double-keyboard machines, which contained twice the number of keys—one for every character, whether capital or small letter. For many years the double keyboard and the shift-key machines competed for popular favour, but the development of the so-called touch method of typing, for which the compact keyboard of the shift-key machines was far better suited, decided the contest. Another early issue concerned the relative merits of the typebar and the type wheel, first applied in cylinder models brought out in the 1880s and later. In modern machines of this variety the type faces are mounted on a circle or segment, the operation of the keys brings each type to correct printing position, and the imprint of type on paper is produced by a trigger action. The type-wheel machines offer an advantage in the ease with which the type segments may be changed, thus extending the range and versatility of the machine. On nearly all typewriters the printing is done through an inked ribbon, which is fitted on spools, travels with the operation of the machine, and reverses automatically when one spool becomes completely unwound. On other machines an inking pad is used, the type contacting the pad prior to printing. 
A significant advance in the typewriter field was the development of the electric typewriter, basically a mechanical typewriter with the typing stroke powered by an electric-motor drive. The typist initiates the key stroke, the carriage motion, and other controls by touching the proper key. The actuation is performed by the proper linkage clutching to a constantly rotating drive shaft. Advantages of this system include lighter touch, faster and more uniform typing, more legible and numerous carbon copies, and less operator fatigue. Especially valuable as an office machine capable of a high volume of output, electric typewriters are produced by all major typewriter manufacturers. The first electrically operated typewriter, consisting of a printing wheel, was invented by Thomas A. Edison in 1872 and later developed into the ticker-tape printer. The electric typewriter as an office writing machine was pioneered by James Smathers in 1920. In 1961 the first commercially successful typewriter based on a spherical type-carrier design was introduced by the International Business Machines Corporation. The sphere-shaped typing element moves across the paper, tilting and rotating as the desired character or symbol is selected. The motion of the element from left to right eliminates the need for a movable paper carriage.
Typewriter composing machines
Special-purpose typewriting machines have been developed for use as composing machines; that is, to prepare originals that look as if they had been set in printer’s type (or at least more so than ordinary typewriting does), from which additional copies can be printed. Ordinary typewriting cannot compare in quality, style, and versatility with printing from type produced directly on metal slugs by standard composing machines, but the high cost of skilled typesetting labour prompted the development of composing typewriters that require far less operator training. Since the fundamental requirement of a composing typewriter is the ability to supply different styles and sizes of type, the type-wheel machine is far more suitable than the typebar. Other major requirements of a typing machine whose output must resemble print are the proportional spacing of characters in a word (rather than centring every character within the same width, as in ordinary typewriting) and justification, or alignment of the right-hand margin. An electric typebar machine was developed that provided proportional spacing—assigning space for each character in proportion to its width. The other requirement, margin justification, proved more difficult to attain. Most of these machines provided for preliminary typing of a line, determining the necessary compensation for the line length, and retyping to the exact length. A more complicated machine was introduced that would automatically justify a line of type with one keyboarding. This was accomplished by a system in which the operator typed manually into a storage unit, from which a computer first automatically compensated for line length and then operated a second typing mechanism. By mid-20th century the typewriter had begun to be used as a composing machine in spite of its limitations, and it became more popular as improvements were developed. The need for high-speed printing machines to convert the output of computers to readable form prompted the introduction of a specialized high-speed form of “typewriter” in 1953.
In this class of machines, the paper is fed between a continuously rotating type wheel and a bank of electrically actuated printing hammers. At the instant the proper character on the face of the type wheel is opposite the proper hammer, the hammer strikes the paper and prints the character, while the type wheel continues to rotate. By this means, speeds up to 100,000 characters per minute have been attained, as compared with about 1,000 characters per minute attainable with conventional typebar mechanisms. A number of different models operating on this principle were developed; all of them required elaborate electronic controls to solve the complex synchronization problem. Many other high-speed-output devices for computers were developed. Most of them utilize techniques that are remote from the typewriter field, in some cases using printing mediums other than paper. Speeds of up to 10,000 characters per second were attained by certain nonmechanical systems, which, although not actually typewriters, compete with typewriters as computer-output devices.
Insulin-like growth factor (IGF), formerly called somatomedin, any of several peptide hormones that function primarily to stimulate growth but that also possess some ability to decrease blood glucose levels. IGFs were discovered when investigators began studying the effects of biological substances on cells and tissues outside the body. The name insulin-like growth factor reflects the fact that these substances have insulin-like actions in some tissues, though they are far less potent than insulin in decreasing blood glucose concentrations. Furthermore, their fundamental action is to stimulate growth, and, though IGFs share this ability with other growth factors—such as epidermal growth factor, platelet-derived growth factor, and nerve growth factor—IGFs differ from these substances in that they are the only ones with well-described endocrine actions in humans. There are two IGFs: IGF-1 and IGF-2. These two factors, despite the similarity of their names, are distinguishable in terms of specific actions on tissues because they bind to and activate different receptors. The major action of IGFs is on cell growth. Indeed, most of the actions of pituitary growth hormone are mediated by IGFs, primarily IGF-1. Growth hormone stimulates many tissues, particularly the liver, to synthesize and secrete IGF-1, which in turn stimulates both hypertrophy (increase in cell size) and hyperplasia (increase in cell number) of most tissues, including bone. Serum IGF-1 concentrations progressively increase during childhood and peak at the time of puberty, and they progressively decrease thereafter (as does growth hormone secretion). Children and adults with deficiency of growth hormone have low serum IGF-1 concentrations compared with healthy individuals of the same age. In contrast, patients with high levels of growth hormone (e.g., acromegaly) have increased serum IGF-1 concentrations. The production of IGF-2 is less dependent on the secretion of growth hormone than is the production of IGF-1, and IGF-2 is much less important in stimulating linear growth. Although serum IGF concentrations seem to be determined by production by the liver, these substances are produced by many tissues, and many of the same tissues also have receptors for them. In addition, there are multiple serum binding proteins for IGFs that may stimulate or inhibit the biological actions of the factors. It is likely that the growth-promoting actions of IGFs occur at or very near the site of their formation; in effect, they probably exert their major actions by way of paracrine (acting on neighbouring cells) and autocrine (self-stimulating) effects.
Flooding is a temporary overflow of water onto land that is normally dry. Floods are the most common natural disaster in the United States. Failing to evacuate flooded areas or entering flood waters can lead to injury or death. Floods may:
- Result from rain, snow, coastal storms, storm surges and overflows of dams and other water systems.
- Develop slowly or quickly. Flash floods can come with no warning.
- Cause outages, disrupt transportation, damage buildings and create landslides.
Preparing for a Flood
Know Your Risk for Floods – Visit FEMA’s Flood Map Service Center to learn the types of flood risk in your area. Sign up for your community’s warning system. The Emergency Alert System (EAS) and National Oceanic and Atmospheric Administration (NOAA) Weather Radio also provide emergency alerts.
Purchase Flood Insurance – Purchase or renew a flood insurance policy. Homeowner’s insurance policies do not cover flooding. It typically takes up to 30 days for a policy to go into effect, so the time to buy is well before a disaster.
Plan Ahead – Plan for your household, including your pets, so that you and your family know what to do, where to go, and what you will need to protect yourselves from flooding. Learn and practice evacuation routes, shelter plans, and flash flood response. Gather supplies, including non-perishable foods, cleaning supplies, and water for several days, in case you must leave immediately or if services are cut off in your area.
In Case of Emergency – Keep important documents in a waterproof container. Create password-protected digital copies. Protect your property: move valuables to higher levels, declutter drains and gutters, install check valves, and consider a sump pump with a battery.
If you are under a flood warning:
- Find safe shelter right away.
- Do not walk, swim or drive through flood waters. Turn Around, Don’t Drown!
- Remember, just six inches of moving water can knock you down, and one foot of moving water can sweep your vehicle away.
- Stay off bridges over fast-moving water.
- Depending on the type of flooding:
- Evacuate if told to do so.
- Move to higher ground or a higher floor.
- Stay where you are.
Staying Safe During a Flood
- Evacuate immediately if told to evacuate. Never drive around barricades. Local responders use them to safely direct traffic out of flooded areas.
- Contact your healthcare provider if you are sick and need medical attention. Wait for further care instructions and shelter in place, if possible. If you are experiencing a medical emergency, call 9-1-1.
- Listen to EAS, NOAA Weather Radio or local alerting systems for current emergency information and instructions regarding flooding.
- Do not walk, swim or drive through flood waters. Turn Around, Don’t Drown!
- Stay off bridges over fast-moving water. Fast-moving water can wash bridges away without warning.
- Stay inside your car if it is trapped in rapidly moving water. Get on the roof if water is rising inside the car.
- Get to the highest level if trapped in a building. Only get on the roof if necessary and, once there, signal for help. Do not climb into a closed attic to avoid getting trapped by rising floodwater.
Staying Safe After a Flood
- Pay attention to authorities for information and instructions. Return home only when authorities say it is safe.
- Avoid driving except in emergencies.
- Wear heavy work gloves, protective clothing and boots during clean-up, and use appropriate face coverings or masks if cleaning mold or other debris.
- People with asthma and other lung conditions and/or immune suppression should not enter buildings with indoor water leaks or mold growth that can be seen or smelled. Children should not take part in disaster cleanup work.
- Be aware that snakes and other animals may be in your house.
- Be aware of the risk of electrocution. Do not touch electrical equipment if it is wet or if you are standing in water. Turn off the electricity to prevent electric shock if it is safe to do so.
- Avoid wading in floodwater, which can be contaminated and contain dangerous debris. Underground or downed power lines can also electrically charge the water.
- Use a generator or other gasoline-powered machinery ONLY outdoors and away from windows.
Contact your agent to protect your home and add this coverage. We can provide coverage from many insurance carriers so you receive the insurance that fits your budget and needs!
Like other enveloped viruses, HIV exits its host cell enshrouded in the cell’s membrane, which contains membrane molecules such as the human leukocyte antigens (HLA). The HLA proteins act as a set of cell identification marks: every person expresses a slightly different HLA set. These molecules differentiate one person from another and allow the immune system to detect foreign invaders, and to reject tissue from other people or animals. Interestingly, each HIV particle has many more human HLA on its envelope surface than it has its own gp120 viral coat proteins, which the virus needs to bind to CD4 and CCR5 or CXCR4 on the lymphocyte surface in order to enter cells.
What are Learning Difficulties? ADHD, ASD, Dyslexia
What does ‘learning difficulties’ mean?
‘Learning difficulties’ is a common term in today’s society; it manifests in different ways and can cause various difficulties in daily life. For one person it might be a lack of attention; for another it might be struggling to read fluently. These fall into different groups of learning difficulties. The following groups of specific learning difficulties (‘SpLD’) are widely accepted:
- Attention Deficit/Hyperactivity Disorder or Attention Deficit Disorder (ADHD or ADD): a combination of inattention, hyperactivity and impulsivity.
- Autism Spectrum Disorder (ASD): social difficulties, communication impairment and restricted behaviours.
- Dyslexia: difficulties with reading.
- Dysgraphia: difficulties with writing.
- Dyspraxia: motor difficulties that affect movement and co-ordination.
- Dyscalculia: difficulties in understanding and learning mathematics.
Specific learning difficulties impact a person’s motor skills, information processing and memory. What the impact is and how severe it is differ from person to person, as do the person’s ability to cope with the difficulty and their reaction to any support offered. The fact remains, though, that those with specific learning difficulties will experience greater challenges in life than those without. These are challenges that can be supported with Neurofeedback.
How can Neurofeedback help with learning difficulties?
Neurofeedback is a tool that helps to improve brain regulation, so the short answer is that if a learning difficulty is associated with brain dysregulation, then it is likely Neurofeedback will help. Problems of attention and hyperactivity have been linked to brain dysregulation since the 1970s, and a test for ADHD based on brain dysregulation has recently been approved in the US. The evidence for Neurofeedback to improve ADHD symptoms is very well established and endorsed by an American Academy of Pediatrics report at the highest level of Evidence-Based Intervention, with up to 80-90% success rates reported, and we do not need a medical diagnosis to help. Brain dysregulation is also associated with other learning difficulties; please refer to our specialist pages to learn how Neurofeedback may be able to help, as well as the learning disability Down Syndrome:
- Neurofeedback for ADHD symptoms
- Neurofeedback for ASD symptoms – autism spectrum disorder, including Asperger’s syndrome
- Neurofeedback for Dyslexia, Dysgraphia, Dyspraxia and Dyscalculia
- Neurofeedback for Down Syndrome
To read more about this, please read the next page on What is ADHD?
Strategies For Studying The Lecture Notes
After examining your notes on the chapters you will begin to see that Christopherson has provided you with detailed outlines. Information in the chapters is organized in a hierarchical arrangement of main headings and subheadings. Lectures are similarly organized. However, your notes are your outline. Ask yourself, "how does the subject matter presented in lecture fit with that in the chapters?" It will be: the same as that in the text, more detailed or less detailed than the text, or completely different (but linked to some information in the text). You must go through the intellectual exercise of fitting lecture material into the framework/outline that you identified from your analysis of the text. Once so organized, you will note that lecture material is also organized in a hierarchical fashion with main headings, subheadings, and definitions. Compare the two sources of information (lecture and readings). Where information is presented twice you have found yourself a key issue. These issues/definitions/topics are your highest priority (why else would I have repeated them?). If you know how they relate to the whole and to each other you cannot lose!
Remember: Follow the water! What are the Earth systems (or spheres)? They have three "parts"? What are the three descriptive criteria we use to describe climate/climate regions? Which two does the Koeppen system use? What do you need to know about those criteria besides "average" conditions? What does the Koeppen system consider besides "average" conditions? Definition: natural regions are…? Why does a Koeppen regional climate map look so much like a map of regional biomes? Can you identify the locations of climate regions and biomes on a map of North America? See the last page of the study guide for maps of climates (A, B, C, D, E; Figure 10-4, page 268), biomes (using the terminology of Figure 20-3, pages 596-597), and soils (Figure 18-9, pages 538-539). Definition: solar climate classification? What did Alexander von Humboldt invent that, along with instrumentation, made the solar climate classification idea obsolete and ushered in modern climatology? Water definitions: latent heat, specific heat, adhesion, cohesion, capillary forces, universal solvent, geomorphic agent, infiltration, percolation, permeability, porosity, field capacity, permanent wilting point, saturation, hygroscopic water, gravitational water, baseflow. Relate them to textures: sand, silt (silt loam), clay. How much fresh water is there (in terms of % of total water on Earth)? How much of that water is "NOT frozen"? What do we mean when we refer to water as the "universal solvent"? What does this unique property of water mean to the biosphere? To the lithosphere? To groundwater contamination? The 4 "facilities" in the hydrologic cycle are…? Which one stores the most fresh, not-frozen water? The 3 main outputs back to the atmosphere are…? Infiltration-runoff ratio: list precipitation factors; list surface factors. How do these factors influence rates of infiltration versus runoff? The three lifting mechanisms that lift moist air so it cools adiabatically and precipitation can occur are? Definitions: respiration, transpiration, evaporation, evapotranspiration, potential evapotranspiration, actual evapotranspiration. What happens to potential evapotranspiration when atmospheric temperature increases? What are the two inputs in the soil-moisture balance equation that together make up actual evapotranspiration?
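One way to organize the soil-moisture balance questions is the standard water-balance bookkeeping, sketched below in a generic textbook form. Check the symbols against Christopherson's own notation; these are common conventions, not necessarily his exact ones.

```latex
P = AET + S \pm \Delta ST, \qquad AET = PET - D
```

where \(P\) is precipitation, \(AET\) actual evapotranspiration, \(S\) the moisture surplus (runoff), \(\Delta ST\) the change in soil-moisture storage, \(PET\) potential evapotranspiration, and \(D\) the moisture deficit (the unmet demand when soil water runs short).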
Students will be able to: - describe the role of hand washing in prevention of disease transmission; - explore various careers in the health field; and - describe a health career that is of interest to them, and explain why. - Paper towels - UV light - Glo-Germ powder (see lesson plan for details) - Overhead projector - Butcher paper/flip chart - Teacher Reference 1. Hand Washing Lab (PDF) - Computer with Internet capability - Newspaper want ads (hard copy or on-line) - Telephone book, yellow pages - Blank index cards - Tell students that they will be learning how to test the effectiveness of hand washing, in this case, the hand washing of food handlers. - The following lab provides instructions for students to test for the presence of bacteria: Teacher Reference 1. Hand Washing Lab (PDF) - When the lab has been completed, discuss the role of food handlers in the spread/transmission of disease. Students may be asked to share their food handling experience. - Consider the scenario in which patrons of a local fast food restaurant became ill after eating at the restaurant. How could health officials determine the source of the infection? Was it caused by food handlers with unclean hands, or previously contaminated food? - Discuss the types of health care professionals who would be called upon to solve this problem. Write the list on the butcher paper/flip chart. - Ask students to suggest what should be done about this situation, whom they would call to help, what types of help are needed for the sick people and the restaurant, and how to prevent this from happening again at this restaurant. - Begin this lesson by reviewing the list of health careers developed in the first lesson. Ask if there are any other types of health careers that they know of. The students may not know about careers such as data management, environmental health, safety, veterinary medicine, pharmacy, and physical and psychological therapy. Add these to the list. - Ask students to pick one career from the list that might interest them. Assign each student the task of finding out as much about that field as they can, using a variety of sources. Make reference to the sources listed in the "Student Resources" section below. - Tell students that their research should, at a minimum, include responses to the questions below: - What does the job involve on a daily basis? - What kind of education is required to work toward entering that field? - What kind of training is needed for the job and how long will it take? - What will it cost to train for the job? - How much does this career pay? (entry level to top) Each student will write a two-page report about a career in the health field that is of interest to them. The report will include information about what kind of work is done, what level of education is needed for the job, levels of pay, and why this career is suitable to the student's personality, interests, capabilities, and personal goals. National Science Education Standards: - Scientists conduct investigations for a wide variety of reasons. For example, they may wish to discover new aspects of the natural world, explain recently observed phenomena, or test the conclusions of prior investigations or the predictions of current theories. - The severity of disease symptoms is dependent on many factors, such as human resistance and the virulence of the disease-producing organism. Many diseases can be prevented, controlled, or cured. 
Some diseases, such as cancer, result from specific body dysfunctions and cannot be transmitted. - Individuals and teams have contributed and will continue to contribute to the scientific enterprise. Doing science or engineering can be as simple as an individual conducting field studies or as complex as hundreds of people working on a major scientific question or technological problem. Pursuing science as a career or as a hobby can be both fascinating and intellectually rewarding. Health Education Standards: - Knows and uses health care terminology. - Understands the distinctions among the various levels of care. - Understands how technology is used in the health care industry. - Understands the components of the health care delivery system. - Uses a variety of communication skills to interact with clients. English Language Arts Standards: - Students use a variety of technological and information resources (e.g., libraries, databases, computer networks, video) to gather and synthesize information and to create and communicate knowledge. - Maria Crassas, Science Teacher, Francis Scott Key Middle School, Silver Spring, Maryland - Gillian Davis, Health Occupations Teacher, Benson High School, Portland, Oregon - Damian Kreske, Biology Teacher, Woodrow Wilson High School, Washington, DC
Resonant circuits are critical for the generation and selection of desired RF/microwave frequencies. For any transmission line, including stripline, microstrip, or waveguide, a suitable length can be used as a resonator, with dimensions for the resonant structure that correspond to the desired wavelength. When that resonant structure is in the form of a cavity, it is simply called a cavity resonator. High-frequency cavity resonators, for example, serve as excellent starting points for RF/microwave oscillators capable of generating low-noise signals and for filters used to select signals at specific frequencies. For example, cavity resonators can be embedded within a multilayer circuit substrate to achieve a high-quality resonance without a larger metal cavity or tuning screw. Excellent performance is available from such multilayer cavity resonators, given suitable high-frequency circuit laminates and pre-impregnated glass fabric (prepreg) materials. Cavity resonators are essentially hollow conductors or sections of a printed-circuit board (PCB) which can support electromagnetic (EM) energy at a specific frequency or group of frequencies. An EM wave entering the cavity at a frequency that is resonant within the cavity will bounce back and forth inside it with extremely low loss. As more EM waves enter at that resonant frequency, they reinforce and strengthen the amplitude of the existing resonating EM waves. The resonant frequency or frequencies of a cavity depend on several factors, including the dimensions of the cavity, the materials that form the cavity, and how energy is launched into and/or extracted from the cavity. A resonant cavity is sometimes referred to as a form of in-circuit waveguide, short-circuited at both ends of the waveguide structure so that EM energy builds within the cavity at a designed frequency or band of frequencies. The size of a cavity resonator, for example, is a function of the desired resonant frequency and the characteristics of the PCB materials used for the resonator. PCB materials with higher dielectric constants will support smaller cavity resonators for a given frequency than circuit substrate materials with lower dielectric constants. While there are many ways to create a cavity resonator in a PCB, most methods rely on either building up materials around an empty area on the PCB, or removing materials from a PCB structure to form an empty area, such as by means of laser ablation. In forming a window-type resonant cavity in a multilayer circuit assembly, the different layers that create the circuit assembly also form the walls of the resonant cavity. Such circuit-material layers often include a high-performance circuit material, such as RT/duroid® 5880, RO4003C™ LoPro™, or RO4350B™ LoPro laminates from Rogers Corp. (www.rogerscorp.com), and a compatible prepreg material, such as RO4450F™ prepreg, also from Rogers Corp., to bond the circuit layers together. In the window-type approach to forming cavity resonators, windows are punched into some of the circuit layers used to assemble a multilayer circuit. As the laminate and bonding or prepreg layers are assembled, the layers forming the windows will create the walls of the soon-to-be resonant cavity. The size of this cavity, of course, determines the ultimate frequency or frequencies of the resonant cavity, so manufacturing efforts are usually focused on keeping the dimensions of the resonant cavity tightly controlled. 
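As a rough aid to the size-versus-dielectric-constant point above, the standard textbook formula for a dielectric-filled rectangular cavity can be sketched in a few lines of code. This is a generic relation, not a Rogers design rule; the cavity dimensions and the two Dk values below are illustrative stand-ins chosen near the nominal Dk of common low- and mid-Dk laminates.

    import math

    C0 = 299_792_458.0  # speed of light in vacuum, m/s

    def rect_cavity_f_res(a_m, b_m, d_m, dk, m=1, n=0, p=1):
        """Resonant frequency (Hz) of the (m, n, p) mode of a rectangular
        cavity (width a, height b, length d) filled with a lossless
        dielectric of relative permittivity dk (textbook relation)."""
        mode_term = (m / a_m) ** 2 + (n / b_m) ** 2 + (p / d_m) ** 2
        return (C0 / (2.0 * math.sqrt(dk))) * math.sqrt(mode_term)

    # Hypothetical 8 mm x 1 mm x 8 mm embedded cavity, dominant TE101 mode:
    for dk in (2.2, 3.48):  # a lower- and a higher-Dk laminate
        f_ghz = rect_cavity_f_res(8e-3, 1e-3, 8e-3, dk) / 1e9
        print(f"Dk = {dk}: f(TE101) = {f_ghz:.1f} GHz")

Running the sketch gives roughly 17.9 GHz at Dk = 2.2 versus 14.2 GHz at Dk = 3.48 for the same dimensions, which is the effect the text describes: for a fixed target frequency, a higher-Dk material allows a smaller cavity, with linear dimensions scaling roughly as 1/sqrt(Dk).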
Ideally, prepreg materials used for bonding the multilayer structure have the flow characteristics required for a multilayer resonant cavity. For example, in a multilayer construction in which voids must be filled, such as in circuits with plated copper, prepregs with “high-flow” characteristics are desired. But when bonding of multilayers is needed, without flow into the resonant cavity formed by those multiple laminate layers, a “low-flow” prepreg is preferred, with a high glass transition temperature (Tg) for good reliability. Because the bonding materials in a multilayer circuit assembly will flow during lamination, designers must be wary of bonding materials that lack good flow control and might flow into the resonant window or cavity area, changing the dimensions of the resonant cavity (and its operating frequency or frequencies). An effective multilayer prepreg should exhibit low loss, good adhesion to commercial PCB laminates, stable dielectric constant with temperature and frequency, and the capability of supporting multiple or sequential laminations if needed. Ideally, any prepreg in a multilayer circuit assembly with a resonant window should have not only low-flow characteristics, but predictable flow characteristics. The predictability allows for tight control of the circuit manufacturing process. In a circuit with a resonant cavity, a prepreg with predictable flow may alter the size of the cavity because of that flow, but it will be in a manner that can be predicted and even modeled in a commercial EM computer-aided-engineering (CAE) software program such as Ansoft HFSS. However, if the prepreg has low-flow characteristics without predictable flow, the final size of the resonant cavity will vary according to the flow characteristics, as will the resonant frequency or frequencies of the cavity. As an example, RO4450F prepreg is a low-flow prepreg material with relatively well-controlled flow characteristics. It is compatible with RO4350B or RO4350B LoPro laminates and well suited for forming multilayer cavity resonators with consistent and predictable characteristics. In contrast, Rogers’ 2929 is also a durable prepreg material, but with greater flow than RO4450F material. Although both are candidates for a multilayer cavity resonator design, the fabrication and lamination conditions will dictate which prepreg provides a greater level of consistency in a final production run. RO4450F and the RO4400™ family of prepreg materials are based on the RO4000® core materials and readily compatible with those laminates in multilayer constructions, such as in cavity resonator designs. The prepregs feature a number of key attributes that contribute to reliable performance in multilayer constructions, including a high post-cure glass transition temperature (Tg) of greater than +280°C, an indication that the prepregs are capable of handling multiple lamination cycles. The RO4400 prepregs also support FR-4-like bonding process conditions (+177°C), enabling the use of standard lamination equipment. Optimum performance from any multilayer cavity-resonator-based design, whether for generating or filtering signals, requires careful consideration of the type of feed structure used with the cavity resonator, especially at higher frequencies. A number of approaches provide good results through millimeter-wave frequencies, including slot and probe excitation techniques. 
Using a slot is fairly straightforward and requires very simple fabrication while probe excitation, which can be somewhat more demanding in terms of fabrication, can yield extremely wideband results. Some cavity-resonator filters, for example, have used feed approaches as simple as a microstrip line through a coupled slot in the ground plane. In the case of either slot or probe feed in a multilayer cavity-resonator construction, high-quality prepreg materials help ensure minimal loss and stable performance. The next blog will continue this discussion on PCB resonant cavities. But it will take a somewhat different point of view, focusing on buried waveguide structures and providing several examples. Do you have a design or fabrication question? John Coonrod and Joe Davis are available to help. Log in to the Rogers Technology Support Hub and “Ask an Engineer” today.
by Vladimir V. Lytkin KONSTANTIN TSIOLKOVSKY - THE PIONEER OF SPACE TRAVEL Konstantin Eduardovich Tsiolkovsky (1857-1935) is typically portrayed as a lone genius who worked largely in isolation from centres of higher learning and industry. An attack of scarlet fever at age ten ruined his hearing and cut Tsiolkovsky off from a normal education and social development. However, by reading all the books in the library of his father, a forester, Tsiolkovsky managed to partially educate himself. In 1873, when he was 16, he went to Moscow to continue this process. In Moscow he started to dream about the possibility of space travel and interplanetary flight. After coming back to his father's home in 1876, he worked for three years as an apprentice teacher of mathematics, physics, and chemistry. In 1879 he passed his examinations to qualify as a schoolteacher. He lived in Borovsk, Kaluga Province (1880-1892) and then in Kaluga (1892-1935). He continued working as a teacher until 1921. While living in Ryazan with his father's family, he published his first known scientific work, "Astronomical Drawings", in 1879. It schematically depicted the Solar System and the distances between planets. This first work already reflected his interest in the problems of space studies. Later, in Borovsk, Tsiolkovsky wrote "Free Space" (1883). Here he considered the possibility of living in outer space and the effects of zero gravity. For the first time Tsiolkovsky included a drawing of a spacecraft that could orient itself in space with the help of reactive jets (but not change its position by propulsive rockets). It was very important for Tsiolkovsky to prove the possibility of controlled motion of an artificial vehicle in free space. In 1903 he published an article, "The Investigation of Space by Means of Reactive Devices". Here he first outlined his theory of spaceflight and published the basic equation for reaching space by rocket that is still known to students as the "Tsiolkovsky Equation". It was the first theoretical proof of the possibility of spaceflight. Over the next three decades he further developed his ideas on rocketry and space travel, publishing, along with numerous papers and monographs, a science fiction novel to popularise his ideas. In his articles he described how space rockets would be built, and the main future principles of rocketry and space exploration. Let us mention some of them: 1. Space rockets would have to use liquid-propellant engines burning two components: fuel and oxidiser. The best combination would be hydrogen and oxygen, but the most practical combination would be kerosene and oxygen. It would eventually be possible to design a nuclear engine. 2. Different ways of guiding space rockets would be developed. Easiest would be to use a graphite rudder in the rocket's propulsive jet. Another possibility would be to correct the direction of the space rocket by moving the whole engine or its nozzle. 3. Gyroscope systems would be used to control the orientation of the rocket in space. 4. It would be possible to regulate the temperature inside space rockets with the help of special outer coverings with differing solar reflectivity. 5. Spacewalks, or extravehicular activity, would require the design of special pressure suits and air locks. Outside the space rocket, cosmonauts would work tethered to the rocket with cords. 6. 
Tsiolkovsky described the effects of living under zero gravity in space rockets, and considered possible ways of protecting cosmonauts from the high gravity forces of powered flight and return to Earth. 7. Among the most interesting ideas of Tsiolkovsky was the construction of long-duration near-Earth (and then interplanetary) space stations. Later it would be possible to design and build "Space Islands" - huge habitats for thousands of people. 8. In 1926 Tsiolkovsky wrote his well-known "Plan of Space Exploration" (this foresaw manned colonization of the universe in 16 stages). 9. Tsiolkovsky suggested the design of special launch ramps for space rockets - using a special ramp booster as the first stage of a space rocket. 10. In 1929 Tsiolkovsky wrote and published his work "Rocket Space Trains". He suggested a method of reaching escape velocity using a multistage booster, consisting of separate rockets joined together and launched simultaneously. These last calculations about multistage boosters led Tsiolkovsky to the conclusion that the first space flights would take place within 20 to 30 years. He made this prediction during his last radio speech from Moscow on May 1, 1932. These were the tremendous visions of the great thinker and scientist Tsiolkovsky, who called himself "The Citizen of the Universe". Konstantin E. Tsiolkovsky's 16 Stages of Space Exploration (1926): 1. Design of rocket-propelled airplanes with wings. 2. Progressively increasing the speeds and altitudes reached with these airplanes. 3. Design of a pure rocket without wings. 4. Developing the ability to land on the ocean surface by rocket. 5. Reaching escape velocity and the first flight into space. 6. Lengthening rocket flight times in space. 7. Experimental use of plants to make an artificial atmosphere in spacecraft. 8. Using pressurised space suits for activity outside spacecraft. 9. Making orbital greenhouses for plants. 10. Building large orbital habitats around the Earth. 11. Using solar radiation to grow food, to heat space quarters, and for transport needs throughout the solar system. 12. Colonization of the asteroid belt. 13. Colonisation of the entire solar system and beyond. 14. Achievement of individual and social perfection. 15. Overcrowding of the solar system and colonisation of the galaxy. 16. The Sun begins to die, and the people remaining in the solar system move to other solar systems. 1. Bainbridge, William S., 1983. The Spaceflight Revolution. Krieger, Malabar, Florida. 2. Kosmodemyansky, Arkady A., 1987. Konstantin Eduardovich Tsiolkovsky (in Russian). Nauka, Moscow. 3. Rynin, Nikolay A., 1931. K.E. Tsiolkovsky: Life, Writings and Rockets. Vol. 3, No. 7 of Interplanetary Flight and Communication. Leningrad. Translated in Jerusalem, 1971. 4. Rynin, Nikolay A., 1932. Theory of Space Flight. Vol. 3, No. 8 (same edition). 5. Samoilovitch, Sergei I., 1969. Citizen of the Universe (in Russian). Tsiolkovsky State Museum of the History of Cosmonautics, Kaluga. 6. Tsiolkovsky, Konstantin E., 1995. Exploration of the Universe with Reaction Machines: Exploring the Unknown. The NASA History Series, NASA SP-4407, Washington, D.C.
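As a side note to the 1903 article mentioned above, the "Tsiolkovsky Equation" itself, delta-v = ve * ln(m0/mf), is compact enough to sketch in code. The exhaust velocity and masses below are illustrative values, not figures from Tsiolkovsky's papers.

    import math

    def tsiolkovsky_delta_v(ve_m_s, m0_kg, mf_kg):
        """Tsiolkovsky rocket equation: delta-v = ve * ln(m0 / mf),
        where ve is the effective exhaust velocity, m0 the initial (wet)
        mass and mf the final (dry) mass of the rocket."""
        return ve_m_s * math.log(m0_kg / mf_kg)

    # Hypothetical single stage burning kerosene/oxygen (ve ~ 3.0 km/s):
    dv = tsiolkovsky_delta_v(3000.0, m0_kg=300_000.0, mf_kg=50_000.0)
    print(f"delta-v = {dv:.0f} m/s")  # ~5375 m/s

The shortfall against the roughly 9,400 m/s needed to reach low orbit (including gravity and drag losses) is exactly why the "rocket trains" of item 10 matter: staging multiplies the achievable mass ratio.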
In the early- to mid-1950s, Dr. Paul Kuroda from the University of Arkansas described the possibility of naturally occurring nuclear reactors lurking in the crust of ancient Earth. The key is an isotope of uranium called U-235, which occurs naturally in small amounts. If enough of this isotope were pooled together under specific circumstances, Kuroda theorized, the natural reactor would go critical, and self-sustaining fission would occur. Such a reactor could not exist today, because too much of the Earth's natural U-235 has decayed... but a billion and a half years ago, there was enough of it around to make the idea plausible. In point of fact, it has since been discovered that it actually happened. In 1972, the radioactive remains of such a nuclear reactor were found in Gabon, West Africa, in the Oklo mines. Uranium extracted from that mine was abnormally low in the U-235 isotope, and upon examination, French scientists found that the uranium isotope levels had an uncanny resemblance to those in spent nuclear fuel from modern nuclear power plants. The evidence was strong enough to suggest a natural reactor, and further exploration confirmed it. At the time of discovery, scientists were uncertain exactly how the Oklo reactor had operated without exploding or melting down. For roughly 150,000 years, it ran like clockwork with a 30-minute reaction cycle, followed by a 2.5-hour cool-down cycle, putting out an average of 100 kilowatts of power. And it was always exactly 30 minutes per cycle, without significant variation, which was baffling. But recent studies have finally solved the mystery by discovering the regulating mechanism: water. Under normal conditions, radioactive atoms like U-235 cast off neutron particles at speeds so high that most of the neutrons skip off the surface of other atoms and fly away. But if you put enough of the radioactive material together, the cast-off neutrons bounce around inside the mass, some slowing down enough to be absorbed into another atom's nucleus. The extra neutron causes the nucleus to become unstable and immediately split, which releases a large amount of energy. If one has enough radioactive material in sufficient density that the chain reaction grows exponentially (a supercritical mass), the result is an atomic explosion; with just enough to keep the reaction going steadily (a critical mass), it causes a sustained fission reaction, giving off energy as heat and radiation. But researchers have determined that Oklo didn't even have an appreciable, consolidated critical mass of uranium... it was too spread out. Instead, water would seep down through crevices to fill up the gaps between the uranium deposits, and act as a "neutron moderator," slowing down the neutrons enough to allow them to hit and split other nuclei. When the reaction caused a sufficient heat increase, the water would boil off, removing the neutron moderator, and stop the process. The cavity would then slowly refill with water during the cooling period, starting the cycle again. Fifteen such natural reactors have been found in the Oklo area, and they are now collectively referred to as the "Oklo Fossil Reactors." These natural reactors are providing useful data on long-term storage of spent nuclear fuel, as well as some insights into possible improvements in man-made reactors.
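A back-of-the-envelope decay calculation shows why the timing matters. This sketch uses only the well-known half-lives of the two main uranium isotopes and today's 0.72% U-235 abundance; it is generic physics, not data from the Oklo studies.

    T_HALF_U235 = 7.04e8   # years
    T_HALF_U238 = 4.468e9  # years
    U235_TODAY = 0.0072    # present U-235 atom fraction of natural uranium

    def u235_fraction(years_ago):
        """U-235 fraction of natural uranium at a time in the past,
        found by undoing the exponential decay of both isotopes."""
        n235 = U235_TODAY * 2 ** (years_ago / T_HALF_U235)
        n238 = (1 - U235_TODAY) * 2 ** (years_ago / T_HALF_U238)
        return n235 / (n235 + n238)

    for t in (0.0, 1.0e9, 1.7e9, 2.0e9):
        print(f"{t / 1e9:.1f} Gyr ago: U-235 = {u235_fraction(t) * 100:.2f}%")

At around two billion years ago the natural enrichment comes out near 3.7%, comparable to the 3-5% used in modern light-water reactor fuel, which is why water moderation was enough to make the Oklo deposit go critical then but could not do so today.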
The Polynomial Method The polynomial method is another basic combinatorial technique that occasionally works. One way to describe the method is as a way to translate a combinatorial statement into the vanishing of a certain polynomial modulo $p$. A demonstration of the method Theorem: Every graph (or hypergraph) G with n vertices and 2n+1 edges contains a nontrivial subgraph H with all vertex-degrees divisible by 3. (This is a theorem of Noga Alon, Shmuel Friedland, and me from 1984.) Before the proof: If we want to get a subgraph with all vertex degrees even then we need n edges (or n+1 edges for hypergraphs). This has a simple linear algebra proof which also gives an efficient algorithm. From-scratch proof sketch: Associate with every edge $e$ of the graph a variable $x_e$. Consider the two polynomials $P=\prod_{v}\big(1-(\sum_{e:\, v \in e} x_e)^2\big)$ and $Q=\prod_{e}(1-x_e^2)$. If the theorem is false then $P-Q=0$, as polynomials over the field with three elements. This is impossible since $P$ is a polynomial of degree at most $2n$ while $Q$ is a polynomial which has a monomial of degree $4n+2$ (namely $\prod_e x_e^2$) with nonzero coefficient. The theorem follows more directly from a theorem of Chevalley-Warning and even more directly from a theorem of Olson, but the above proof serves our purpose best. Remarks about the polynomial method: 1) The polynomial method has many applications but only in specific cases. It is not nearly as widely applicable as, say, the probabilistic method. 2) A good basic reference: A. Blokhuis, Polynomials in finite geometries and combinatorics. In Keith Walker, editor, Surveys in Combinatorics, 1993, pages 35-52. Cambridge University Press, 1993. 3) The polynomial method is related to the "linear algebra method" in combinatorics. Often, however, direct linear algebraic proofs lead to efficient algorithms while this is not known for applications of the polynomial method. For example, no polynomial algorithm to find the graph H in the above theorem is known, and there is a related complexity class introduced by Christos Papadimitriou. The polynomial method is closely related to arguments coming from the theory of error-correcting codes, and to arguments in TCS related to interactive proofs and PCP. The modular EDP. The following is an equivalent way to formulate the Erdős 1932 conjecture that the discrepancy for EDP is unbounded. 1) Consider the $\pm 1$ sequence as a sequence of values modulo $p$, where $p$ is a prime that we can choose as large as we want. 2) Then every number modulo $p$ can be expressed as a sum of the sequence along a HAP, modulo $p$. Translating EDP (in this form) into a statement about polynomials modulo $p$ is cumbersome. But one thing we may have going for us is that it suggests a natural extension of EDP where the supposed-to-vanish polynomial is simpler. Modular EDP Conjecture: Consider a sequence $x_1, x_2, \dots, x_n$ of non-zero numbers modulo p. Then if n is sufficiently large w.r.t. p, every number can be expressed as a sum of the sequence along a HAP, modulo p. As in the original EDP we can consider general sequences or just multiplicative sequences. The polynomial identity required for the modular EDP Here is the polynomial identity in n variables we need to prove over $GF(p)$ when $n$ grows to infinity, with $p$ growing as slowly as we wish. For every $a \in GF(p)$, (*) $\big(\prod_{i=1}^{n} x_i^{p-1}\big)\cdot\prod_{dk \le n}\big(x_d + x_{2d} + \cdots + x_{kd} - a\big)^{p-1} = 0$ as a function on $GF(p)^n$. These polynomials are not familiar but they are related to generating functions which arise in permutation statistics. In particular, when we look at the product $\prod_{i=1}^{n}(1 + q + q^2 + \cdots + q^{i-1})$ and expand it to monomials, the coefficients have a combinatorial meaning in terms of permutations and inversions. 
Given a permutation $\pi$ and an integer $i$, we can ask how many inversions there are between $i$ and a smaller integer. This is a number between $0$ and $i-1$. The coefficient of $q^m$ in the above product is the number of permutations in which these contributions, over all integers $i$, add up to $m$ inversions. The proposed identity (*) may be expressed in terms of modular properties of such permutation statistics. Challenge: Prove the modular EDP using the polynomial method. What does the LDH tell us about the modular EDP? It is especially easy to apply the large deviation heuristic to the modular version of EDP. Suppose we want to compute the probability that all HAP-sums miss the outcome $a$. Given $a$, the probability that a single HAP-sum is not $a$ is $(1-1/p)$. Since there are roughly $n \log n$ HAPs in $\{1, 2, \dots, n\}$, we are interested in the value of $n$ with $(1-1/p)^{n \log n} = p^{-n}$. (Restricting our attention to multiplicative sequences will divide the exponents on both sides by $\log n$.) Solving this equation gives us $p \approx \log n / \log\log n$. The LDH heuristic comes with a firm prediction and a weak prediction. In this case the LDH gives a) (Firm prediction) There are sequences violating the modular EDP when $p \gg \log n / \log\log n$. b) (Weak prediction) There are no such sequences when $p \ll \log n / \log\log n$. The firm prediction is correct by the $\log n$ discrepancy constructions for EDP, and as a matter of fact the LDH itself gives an even stronger prediction of $\sqrt{\log n}$ for $\pm 1$-sequences. By restricting our attention to $\pm 1$ sequences we see that the weak prediction is incorrect, and the LDH for the modular EDP is blind to the special substructure of $\pm 1$ sequences. Note that the firm conjecture is far from being known when we extend the modular EDP and replace all integers by a random subset of integers, or by square-free integers, or by SCJ-systems of integers, etc.
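As a concrete illustration of the divisible-by-3 theorem at the top of this note, here is a small brute-force search (a sketch; the graph is an arbitrary made-up example, not one from the paper). It looks for the algebraic witness that the $P$ and $Q$ argument guarantees: a nonzero edge-weighting over $GF(3)$ whose weighted vertex sums all vanish, which must exist once a (multi)graph on $n$ vertices has $2n+1$ edges.

    from itertools import combinations, product

    def find_mod3_weighting(n_vertices, edges):
        """Search for a nonzero weighting x_e in {0, 1, 2} of the edges
        with every weighted vertex sum congruent to 0 mod 3."""
        for x in product(range(3), repeat=len(edges)):
            if not any(x):
                continue  # skip the trivial all-zero weighting
            sums = [0] * n_vertices
            for w, (u, v) in zip(x, edges):
                sums[u] = (sums[u] + w) % 3
                sums[v] = (sums[v] + w) % 3
            if all(s == 0 for s in sums):
                return [(e, w) for e, w in zip(edges, x) if w]
        return None

    # 4 vertices, 9 = 2*4 + 1 edges: K4 plus three repeated edges.
    edges = list(combinations(range(4), 2)) + [(0, 1), (2, 3), (0, 2)]
    print(find_mod3_weighting(4, edges))

With $n = 4$ and 9 edges the search space is only $3^9 = 19{,}683$ assignments; as remark 3 notes, no polynomial-time algorithm is known in general, so brute force is roughly the honest state of the art here.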
This was the prediction posited by the United States Army Corps of Engineers. They believed that the constant rushing of the water was causing damage at the base of the cliff, making it unstable, and while they could deal with a very slow year-on-year retreat, they weren't prepared for the large flood which would ensue if the cliff fell away in a large chunk. So they decided to perform a geological survey on the cliff face and carry out any required maintenance work. But how does one work on a waterfall that is hurling 168,000 m³ of water over its top every minute? The Army Corps of Engineers took the simplest approach. The water was a problem, so they took away the water. In an unprecedented move they de-watered the Niagara Falls, rerouting the water through a newly cut riverbed which allowed it to rejoin the river further down. After this the cliff and several miles of river were left bone dry. Over the course of the next six months they cleared away debris from the base of the cliff and drilled holes into the cliff to maintain a constant moisture level. They also set up a walkway a mere 20 feet away from the cliff face; it became a great site for the tourists who wandered across with perplexed looks, especially when they were allowed to amble down the dried-out river bed. The dry times had to come to an end, and after installing some hydroelectric turbines at its base the Engineers decided to blow up their temporary dam and allow the water to flow once more. To this day the Niagara Falls gallantly gush water, day after day. Nothing has since impeded their flow. In this case, the mission was a success. However, thousands of tourists never saw and never will see the splendour of a dry Niagara Falls, one of the few modern-day wonders.
From MicrobeWiki, the student-edited microbiology resource The defining characteristics of the tundra biome are: short growing seasons, low temperatures, drought, nutrient limitations, heavy winds, and high intensities of solar radiation. These extreme climatic conditions allow for the accumulation of soil organic matter, which causes the tundra biome to act as a carbon sink despite a low rate of primary productivity. Recent warming in this area due to climate change could have a significant impact on widespread global processes. The decomposition of soil organic matter could release much carbon into the biosphere, while the replacement of existing floral communities by plants less suited for extreme conditions and with higher photosynthetic capabilities could act as a carbon sink. The tundra biome covers about 20% of the Earth's terrestrial surface (Kaplan 1996), and contains approximately 14% of the global carbon stored within soils (Billings 1987), making it a significant contributor to widespread global processes. With recent climatic warming throughout the globe, there is concern that an increase in temperatures could facilitate higher levels of microbial decomposition. This would contribute to additional increases in atmospheric carbon dioxide levels. This increase in decomposition could also lead to higher rates of other microbial processes due to an increase in nutrient availability (Buckeridge et al. 2010). Changes in vegetative communities could also lead to changes in microbial communities. Temperature changes could also directly select for microorganisms that have a thermal optimum under the new conditions. Physical and Chemical Environment During the warmest summer months, the tundra biome typically has an average temperature of nearly 10 degrees Celsius. During the coldest months, the average temperature is around -20 degrees Celsius (Remer 2009). The tundra also exhibits extremely low amounts of precipitation: typically only approximately 15-20 cm a year (Remer 2009). This can affect microbial communities directly by limiting water availability, and indirectly in many ways, ranging from soil structure to differences in the vegetative community, which can influence the microbial communities. Because of lower temperatures and high levels of moisture (due mostly to permafrost), decomposition is limited. This leads to nutrients existing in a form that is not readily available to many organisms. In addition, the low unfrozen water content of soils below 0 degrees Celsius limits the diffusion of nutrients that plants and microorganisms can take up, leading to further nutrient limitation (Ostroumov and Siegert 1996). The tundra biome typically exhibits periods of high winds. Heavy winds directly influence vegetative communities, which can thus indirectly influence microbial communities. High Levels of Solar Radiation The tundra biome has a relatively higher level of solar radiation compared to other biomes. Lower temperatures result in lower decomposition rates. Even though the tundra exhibits low annual precipitation, much of the ground remains saturated through much of the year. This water is typically frozen and inaccessible to plants. This high moisture content limits oxygen availability, which is needed to decompose the dead organic matter. 
Despite the low productivity of the tundra climate, organic matter accumulates because the decomposition of plant litter is limited by low soil temperatures and often wet, anaerobic conditions (Heal et al. 1981; Graglia et al. 2001). This leaves much of the nutrients that plants and microorganisms need for growth and development in a form that remains inaccessible to them (Jonasson and Shaver 1999). Enhanced Mycorrhizal Relationships Mycorrhizas are mutualistic symbioses between plant roots and fungi. Mycorrhizal associations are believed to be most beneficial in habitats where plants face strong nutrient limitations. Although the strong mycorrhizal dependence of the majority of plants across the globe is well known, it is even more important for tundra floral species because of a limited nutrient supply caused by lower relative decomposition rates. When the plants lose their tissues, the nitrogen contained within them becomes accessible again only slowly, relative to most other ecosystems. In these habitats, mycorrhizal plants tend to be strongly dependent on their fungal relationships for nutrient acquisition (Allen and Allen 1991). Enhanced Vegetative Defenses Against Herbivory and Parasitism Herbivory also poses a greater detrimental effect to these plant species. Tundra plant species cannot easily reacquire nutrients lost to herbivores. Because of this, they need to allocate more resources to chemical defenses. A study performed by Cates and Orians (1975) found that evergreen shrubs had stronger chemical defenses than their deciduous relatives. These defenses against herbivory could potentially influence the decomposers that break down the leaf litter of these plants. The same study also recognized that graminoids of the region retained a relatively high content of nutrients stored in rhizome reserves as well as overall belowground biomass. The Tundra and Climate Change As mentioned before, the decomposition of plant litter is limited by low soil temperatures and often wet, anaerobic conditions (Heal et al. 1981; Graglia et al. 2001). Even though productivity in the tundra biome is relatively small, there are large accumulations of undecomposed organic matter. It is estimated that the tundra contains 14% of the global carbon stored within soils (Billings 1987). From the end of the last glacial maximum to the present, the tundra has constituted a large carbon sink, and may have been a contributing factor in the pre-industrial decrease of atmospheric carbon dioxide (Adams et al. 1990; Gorham 1991). Because of recent warming of the tundra due to climate change, the inevitable release of carbon from this carbon pool could pose a serious threat. This could possibly lead to a vicious positive feedback cycle that would contribute greatly to higher levels of carbon dioxide. Specialized tundra plant species remain successful in harsh environmental conditions because they are able to maintain photosynthetic rates yielding more energy than they consume, as is necessary for the existence of all primary producers. For these plant species, the tundra setting is a hospitable one largely because of the lack of competition, parasites, and diseases (Callaghan et al. 2004). As temperatures begin to increase, it is plausible that these plant communities will be replaced by plants less suited for extreme conditions and with higher photosynthetic capabilities. 
The degree to which plant species can tolerate or take advantage of changing climate conditions depends on characteristics such as growth form, phenology, and allocation and storage patterns of carbon and nutrients (Shaver & Kummerow 1992). If a plant species trades defenses against an extreme climate for photosynthetic capability in a milder climate, it typically cannot compete well with species that make the opposite trade-off. With increased temperatures, one would expect a negative feedback mechanism due to an increase in the vegetative carbon pool of the tundra. This difference in vegetative communities and turnover rates could highly impact microbial communities. Prevailing Feedback Cycle One could speculate about which feedback mechanism would have a greater impact on the global carbon cycle. Again, a positive feedback mechanism would be induced by increasing temperatures accelerating decomposition and reducing the carbon sink within the soil. A negative feedback cycle could occur due to a higher rate of photosynthesis and thus a greater amount of carbon in the vegetative pool. Already, tundra ecosystems have changed substantially in terms of shrub abundance, primary production, and carbon exchange (Walker et al. 2004). One recent study carried out by Qian et al. (2010) claims that soil organic carbon in this region actually increased at the beginning of the 21st century due to greater inputs from higher amounts of primary production. Gram Negative Bacteria in water In comparison to other biomes, the amount of gram-negative bacteria found throughout the tundra is relatively high (Belova et al. 2009). There are many fungal organisms with unique properties in the tundra to deal with temperature stress. Fungi are the primary organisms responsible for decomposition there. Mycorrhizal fungi are also highly important in providing nutrients to producers due to nutrient limitation throughout the tundra (Hobbie et al. 2009). Many nutrients are retained in the accumulated undecomposed organic matter. Nonsymbiotic N-fixing Bacteria Due to the lack of vegetation, there are many nonsymbiotic N-fixing bacteria that allow the microbial community to meet nitrogen demands in an already nutrient-poor environment. This could change if increased decomposition rates and temperature increased nutrient availability and primary production (Buckeridge et al. 2010). Examples of organisms within these groups: Gram-negative bacteria; Gram-positive bacteria; Nonsymbiotic N-fixing bacteria. O. Roger Anderson is a microbiologist at Lamont-Doherty Earth Observatory who studies bacteria, amoebas, fungi and other microorganisms. He has been putting together mathematical models to estimate the role of microorganisms in the global carbon cycle in the face of climate change. This study addresses the positive feedback cycle hypothesis listed earlier on this MicrobeWiki page. This topic has been taken up by a number of other researchers at institutions all across the globe. 
Here is a website that goes into further detail: Dr. Feng Sheng Hu from the University of Illinois at Urbana-Champaign addresses both the positive and negative feedback cycles by looking at the change of carbon storage and advancing tree lines. He also studies the history of both across glacial and interglacial cycles, which can be used to gain a better understanding of what is going on today. Here is a website that goes into further detail: An article written by Ned Rozell covers some interesting research about fungal decomposers in the tundra and the specializations they require to thrive in the extreme conditions. A forum and links to the studies can be found at: Adams, J. M., H. Faure, L. Fauredenard, J. M. McGlade, and F. I. Woodward. 1990. Increases in terrestrial carbon storage from the last glacial maximum to the present. Nature 348:711-714. Allen, E. and M. F. Allen. 1991. The mediation of competition by mycorrhizae in successional and patchy environments. In Grace, J. B. and D. Tilman (eds), Perspectives on plant competition. Academic Press, San Diego, Calif., pp. 367-389. Belova, S. E., T. A. Pankratov, E. N. Detkova, E. N. Kaparullina, and S. N. Dedysh. 2009. Acidisoma tundrae gen. nov., sp. nov. and Acidisoma sibiricum sp. nov., two acidophilic, psychrotolerant members of the Alphaproteobacteria from acidic northern wetlands. International Journal of Systematic and Evolutionary Microbiology 59:2283-2290. Billings, W. D. 1987. Carbon balance of Alaskan tundra and taiga ecosystems: past, present, and future. Quaternary Science Reviews 6:165-177. Buckeridge, K. M., E. Zufelt, H. Y. Chu, and P. Grogan. 2010. Soil nitrogen cycling rates in low arctic shrub tundra are enhanced by litter feedbacks. Plant and Soil 330:407-421. Cates, R. G. and G. H. Orians. 1975. Successional status and palatability of plants to generalized herbivores. Ecology 56:410-418. Gorham, E. 1991. Northern peatlands: role in the carbon cycle and probable responses to climatic warming. Ecological Applications 1:182-195. Graglia, E., S. Jonasson, A. Michelsen, I. K. Schmidt, M. Havstrom, and L. Gustavsson. 2001. Effects of environmental perturbations on abundance of subarctic plants after three, seven and ten years of treatments. Ecography 24:5-12. Heal, O. W., P. W. Flanagan, D. D. French, and S. F. MacLean, Jr. 1981. Decomposition and accumulation of organic matter in tundra. Tundra ecosystems: a comparative analysis: 587-633. Hobbie, J. E., E. A. Hobbie, H. Drossman, M. Conte, J. C. Weber, J. Shamhart, and M. Weinrobe. 2009. Mycorrhizal fungi supply nitrogen to host plants in Arctic tundra and boreal forests: 15N is the key signal. Canadian Journal of Microbiology 55:84-94. Jonasson, S. and G. R. Shaver. 1999. Within-stand nutrient cycling in arctic and boreal wetlands. Ecology 80:2139-2150. Kaplan, E. 1996. Biomes of the World: Tundra. Hong Kong: Marshall Cavendish Corporation. Kummerow, J. 1983. Root surface/leaf-area ratios in arctic dwarf shrubs. Plant and Soil 71:395-399. Ostroumov, E. and C. Siegert. 1996. Exobiological aspects of mass transfer in microzones of permafrost deposits. Adv. Space Res. 18(12):79-86. Qian, H. F., R. Joseph, and N. Zeng. 2010. Enhanced terrestrial carbon uptake in the Northern High Latitudes in the 21st century from the Coupled Carbon Cycle Climate Model Intercomparison Project model projections. Global Change Biology 16:641-656. Remer, L. 2009. Temperature and Precipitation Graphs. NASA Earth Observatory. http://earthobservatory.nasa.gov/Experiments/Biome/graphs.php. November 12, 2009. 
Walker, D. A., H. E. Epstein, W. A. Gould, A. M. Kelley, A. N. Kade, J. A. Knudson, W. B. Krantz, G. Michaelson, R. A. Peterson, C. L. Ping, M. K. Raynolds, V. E. Romanovsky, and Y. Shur. 2004. Frost-boil ecosystems: Complex interactions between landforms, soils, vegetation and climate. Permafrost and Periglacial Processes 15:171-188.
"Attempts" is the key word here. Perl attempts to convert strings to numbers via a very specific procedure: - Start at the left. If the string is null, return 0. - Move right character by character, throwing away leading whitespace. If you see anything that isn't numeric or whitespace, stop and return 0. If you never see a digit, return 0. - When you find the first digit (or a leading sign or decimal point), collect characters as long as they still look like they belong to a number. If you hit the end of the string, evaluate the number you have so far using these rules, and return that. - If you see more "can't belong to this number" characters before you reach the end of the string (notice that this is contextual; e.g., once a decimal point has been consumed, a second "." can't belong), stop, evaluate whatever you have using these rules, and return that. (Note that "0x" and "0b" prefixes are not honored by this run-time conversion; they only work for literals in source code, so the string "0x10" numifies to 0.) So you have two strings, each of which has no numbers in it at all. This means that they both evaluate to zero, so they are equal. If you had use warnings on, it would have told you that you were comparing two non-numeric strings and that you would probably not get the intended results. This is one of those cases where "what you meant" and "what you said" do not match, and the warning would have let you know that. As a note, this is why "0 but true" - the "true zero" - is indeed that: a value equal numerically to zero, but when evaluated as part of a logical expression is true. It's zero by the number-collecting logic (start with "0", next character is " ", so end of number, value 0) and not a null string, undef, zero, or quoted zero by the test-for-false logic. Edit: corrected number-detection logic: non-whitespace, non-numeric leading characters cause an immediate drop out with a zero value; whitespace doesn't count until you hit the number; any character that "can't belong" causes evaluation to stop.
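A quick Perl demonstration of the rules above (run with warnings enabled to see the "isn't numeric" diagnostics):

    use strict;
    use warnings;

    # Neither string starts with a digit, so both numify to 0 and compare equal
    # (each comparison emits an "isn't numeric" warning).
    print "equal\n" if "apples" == "oranges";

    print 0 + "42abc", "\n";        # 42   - collects digits, stops at 'a'
    print 0 + "  3.14truth", "\n";  # 3.14 - leading whitespace is skipped
    print 0 + "x42", "\n";          # 0    - non-numeric leading character
    print 0 + "0x1F", "\n";         # 0    - hex only works in source literals
    print "yes\n" if "0 but true";  # true as a boolean, 0 as a number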
Deafness is not an uncommon problem for dogs. Congenital deafness has been reported for approximately 80 breeds, and with the list growing at a regular rate, it can appear in virtually any breed. For the person seeking to buy or adopt a pet, failing to check for deafness can cause unexpected hardships and may ultimately end the relationship. Deafness can occur by two processes. Sometimes dogs have congenital deafness, meaning they are born deaf or deafness develops within a month after birth. Other dogs develop deafness (called acquired deafness) at any time later in life. This can occur due to the use of drugs (particularly some antibiotics), noise trauma, ear infections, or age-related hearing loss. The current common way that most veterinarians test for deafness is behaviorally, by making a loud noise and then observing the dog's behavior. While this seems to be a foolproof method, there are inherent weaknesses to this casual examination. Dropping a large book may convince you that the dog actually "heard" a sound. In reality, he may have felt the vibration of the floor through the pads of his feet. Banging pots and pans together may also prove futile. A puppy that has spent its life devoid of sound often learns to constantly scan for visual cues. If the puppy perceives a subtle change in ambient light, shadows or peripheral movement as you bang a pot, it may still beat the loud-noise test. A better way to test hearing is to perform an Auditory Brainstem Response (ABR) test in conjunction with an Otoacoustic Emission (OAE) test. An ABR is an objective and quantitative test that measures the electrical potential produced by the brain in response to sound stimuli through the synchronous discharge of the neurons in the auditory nerve and brainstem. Instead of relying on behavior, the ABR is an efficient, objective, and quantitative way to determine if there is a complete or partial hearing loss in a dog. The ABR is regularly performed in humans to assess hearing function and to predict or identify other medical issues related to neural and/or otological function, and it would be a valuable tool to use in canines as well. It is very important to be able to accurately determine if there is a complete or partial deafness, because a deaf dog faces a greater likelihood of death from being hit by a car it cannot hear coming. Dogs who cannot hear can be more prone to injuries, since they cannot hear commands or objects coming towards them. There are also behavioral concerns. A deaf dog can startle easily when asleep, and this can cause aggression and fear. This research is very significant in that its outcome will be used to more efficiently determine if there is a hearing loss in dogs and will allow us to improve the quality of life of deaf dogs and their owners. Presently, Auditory Evoked Potential (AEP) testing and evaluation is not taught in veterinary medicine. It is hoped that animal audiology will emerge in a manner similar to the way audiology has been embraced by the human medical community. The relationship of the "animal audiologist" to the veterinarian can be the same as the relationship of the human audiologist to the ENT (ear, nose, and throat) medical specialist. Once we are able to accurately determine if there is a hearing loss, we can move on to proper breeding, training, and handling of deaf dogs and potentially aid in eliminating genetic and sensorineural hearing loss in dogs. 
The establishment of canine "normative data" will be a tremendous advancement in animal welfare and veterinary medicine.
The Jolly Phonics Teacher Checklist provides a useful guide to best practice for teaching Jolly Phonics. Teachers can use the checklist to evaluate whether they are teaching Jolly Phonics correctly, and reflect on their practice. – Learning the Letter Sounds – Letter Formation – Identifying the Sounds in Words (Segmenting) – Tricky Words – Struggling Students
Grade 5 English Language Arts Syllabus Instructor: Rachel Miller Room: E118 Contact Time: 8:50-9:30 am, M-F School Phone: 918-478-2465 Email: [email protected] Website: www.fortgibsontigers.org The purpose of this course is to provide students with concepts in Reading and Language Arts through reading, writing, speaking, listening, and reasoning in preparation for college and career readiness. The content will include, but is not limited to, the following: - Students will develop and apply effective communication skills through speaking and active listening. - Students will recognize high-frequency words and read grade-level text smoothly and accurately, with expression that connotes comprehension. - Students will read and comprehend increasingly complex literary and informational texts. - Students will comprehend, interpret, evaluate, and respond to a variety of complex texts of all literary and informational genres from a variety of historical, cultural, ethnic, and global perspectives. - Students will expand academic, domain-appropriate, grade-level vocabulary through reading, word study, and class discussion. - Students will apply knowledge of grammar and rhetorical style to analyze and evaluate a variety of texts. - Students will comprehend, evaluate, and synthesize resources to acquire and refine knowledge. - Students will evaluate written, oral, visual, and digital texts in order to draw conclusions and analyze arguments. - Students will read independently for a variety of purposes and for extended periods of time. Students will select appropriate texts for specific purposes. Fort Gibson Schools has been a state leader in transitioning to a more relevant and rigorous form of instruction. In this course the following will be expected: - Reading: Students will use active reading strategies to read various materials and texts. - Speaking: Students will express their original thoughts verbally in a clear and concise manner. - Reasoning: Students will analyze charts, graphs, or diagrams and make inferences about the data. - Writing: Students will express their original thoughts in writing through quick writes and/or formal writing. Students will be taught using the following categories of techniques: - Quadrant A: Students will know key ideas and details from a text. - Quadrant B: Students will apply knowledge from multiple sources to explain, write, or speak about a subject knowledgeably. - Quadrant C: Students will compare and contrast events, ideas, concepts, or information from multiple texts. - Quadrant D: Students will debate different points of view on topics from multiple texts. To be a successful school takes the cooperation of all involved. A handbook will be provided to each student that explains the most pertinent information and rules that each student should know. A more detailed policy book is available for viewing in our library. Specific expectations for our classroom include: - Bring pencil, books, and all needed materials to class every day. - Be in your seat when the tardy bell rings. - Keep hands, feet, books, and objects to yourself. - No profanity, rude gestures, cruel teasing or put-downs. - Follow directions. Materials: - Red Ink Pen - ELA Textbooks - Reading Journal Grading: - Tests/Quizzes 60% - Daily Work 40% Homework: - Students are expected to turn in all homework and assignments on the date set by the teacher. - If the assignment is incomplete or missing, an Incomplete/Missing Assignment sheet must be filled out and stapled to the assignment. 
The student will miss recess to finish/redo homework, and 10 points will be deducted from the assignment if it is turned in by the end of the school day. If the assignment is not turned in by the end of the school day, half credit will be given if the assignment is turned in at the beginning of the following school day. If the assignment is still not turned in, a zero (0) will be recorded in the grade book. - Students who are absent are given one (1) day for each day they are absent to turn in their work. - For school-related absences, students must coordinate with their teacher to get their work in advance. Progress Reports will be sent out on the fourth and eighth weeks of each trimester. Report Cards will be sent out at the end of each trimester. Parent-Teacher Conferences will be as follows: - October 15 and October 16 from 4-7 pm - March 11 and March 12 from 4-7 pm REVISED AUGUST 2018
These figures indicate that a net total of around 28 gigatons of carbon was released into the atmosphere as a result of agricultural development in the pre-industrial period of the last millennium. These emissions remained very small for hundreds of years, and it was only during the period between the 16th and 18th centuries that they affected the concentration of atmospheric carbon dioxide beyond a level that could be explained by natural climate variations alone. As a result, it would appear that humans did not increase the carbon dioxide concentration in the atmosphere until a relatively late point in time – albeit still prior to the advent of industrialization. However, this increase in carbon dioxide was too small to perceptibly alter the temperature at the global level. At the regional level, in contrast, humans already influenced the climate prior to industrialization. Simulations show that, due to the changes in the albedo of the land surface through land use, mankind altered the energy balance in some regions as early as a thousand years ago. In Europe, India and China, in particular, the amount of absorbed solar radiation decreased by around two watts per square meter. A change of this magnitude at the regional level is just as large as the current global forcing from anthropogenic greenhouse gases; however, it has the opposite impact, as it causes cooling rather than warming. Even historical events can leave their traces on the climate through such biogeophysical effects. For example, there was a clear reversal of the increasing human influence on Europe's energy balance in the 14th century. This change was brought about by the bubonic plague, which claimed the lives of around one third of the population and in the wake of which large expanses of agricultural land were temporarily abandoned. The Mongol invasion of China in the 13th century and the diseases spread among the high cultures of the Americas by the invasion of the Europeans had similar consequences.
The Cold War refers to the period between the end of the Second World War and the collapse of the Soviet Union in 1991, during which the world was largely divided into two ideological camps — the United States-led capitalist “West” and the Soviet-dominated communist “East.” The former included Canada, as its government structure, politics, society, and popular perspectives aligned with those in the US, Britain, and other free democratic countries. The global US-Soviet struggle took many different forms and touched many areas, but never became “hot” through direct military confrontation between the two main antagonists. The Cold War was rooted in the collapse of the American-British-Soviet alliance that defeated the Germans and Japanese during the Second World War. Already divided ideologically and deeply suspicious of the other side’s world plans, American and British diplomatic relations with Joseph Stalin’s Soviet Union severely cooled after the war over several issues. In particular, the Soviets placed and kept local communist parties in power as puppet governments in once-independent countries across Eastern Europe, without due democratic process. This situation led former British Prime Minister Sir Winston Churchill to state in 1946 that an “iron curtain” had descended across the European continent. That same year, the Canadian government revealed that it had given political asylum to Igor Gouzenko, who, in September 1945, as a cipher clerk at the Soviet Embassy in Ottawa, had stolen documents showing Soviet spies at work in American, British, and Canadian government and scientific departments. This event brought home the new world reality to Canadians. The following year, in 1947, American financier and presidential adviser Bernard Baruch aptly observed in a speech that “we are today in the midst of a cold war.” The Deep Freeze The period 1947-1953 became the Cold War’s “deep freeze,” as East-West negotiations on the future of Europe broke down and stopped. The international climate worsened with several high-profile events. Canadians were involved in some of them, including the formation of the North Atlantic Treaty Organization (NATO), a western security pact designed to defend Western Europe against Soviet invasion, and in which Canada was a member; and the Korean War (1950-1953), in which Canadian forces fought with the United Nations against communist North Korean and Chinese forces supported by the Soviets. As the Gouzenko affair showed, the Cold War was felt as much at home as abroad. There were communist “witch hunts” in Canadian government and society as in the US, perhaps more subdued, but with real consequences. Communists were identified and purged from trade unions, while Canadian diplomats with allegedly questionable loyalties were put under suspicion. Tragically, diplomat Herbert Norman committed suicide in 1957 after almost a decade of accusations and investigations by American intelligence agencies into his supposed communist associations, which remain shrouded in mystery and are still debated by scholars today. Canada and the Cold War Serious East-West diplomatic discussions resumed after the death of Stalin in 1953, but international tensions remained high for the next several decades. 
On a global scale, Canada contributed armed forces to peacekeeping operations throughout the world, including in areas divided between communist and anti-communist factions. Canadian political and military leaders, who at times criticized American actions against communism in the Middle East, Latin America, and Asia, still prepared for possible war against the Soviets in Europe. The Canadian NATO commitment on the continent included an army brigade group in West Germany and air force fighter jets capable of carrying nuclear weapons. For both Canada's government and its people, the fear of nuclear war between the US and the Soviet Union remained ever-present throughout the 1950s, 1960s, 1970s, and 1980s. Canadians were active at various levels in trying to avoid such a calamity. The Cold War began winding down in the late 1980s amid uncompromising anti-Soviet policies by the US, coupled with new efforts at openness by the Soviet leadership and a surge of freedom movements inside the European communist states. This culminated in the fall of the Berlin Wall in 1989 (which had divided West and East Berlin since 1961) and the dissolution of the Soviet Union in 1991.
We have all craned our heads skywards and marvelled at the machines that ride the skies above our heads; many of us have been fortunate to be carried to destinations near and far in all manner of these remarkable machines, machines that we often take for granted. But what makes up an aircraft? What are the differences that separate the graceful soaring of a glider from the roar of a fighter climbing into the blue? In this article we have a lot of ground to cover and some of it might seem intimidating, but one of the great things about flight simulation is that we can put this theory into practice. As the title of the article suggests, we are going to be working through the basics of Aviation 101, from what an aircraft is, through the basic principles of flight and onwards to types of flight. Buckle in, we're going flying!

Let us start, though, with something innocuous: what is an aircraft? An aircraft is any machine capable of flying by means of buoyancy or aerodynamic forces. So, what does that mean? Well, the machine part is fairly obvious, but the latter part deserves a closer look before we examine what makes up an aircraft. All aircraft, no matter their size or shape, are affected by the same four forces: Thrust, Drag, Lift and Weight.
- Thrust – the forward force that propels a body through the air
- Drag – the resistant force on a body travelling through the air
- Lift – the force that acts at right angles to the direction of motion
- Weight – the response of mass to the pull of gravity
Image Credit: NASA
These forces are acting on an aircraft any time it is airborne, even if they may not be obvious. There are two distinct types of aircraft, divided based on how they employ these forces and, more importantly, how they derive their Lift – the thing that keeps them up in the air! An Aerodyne generates its lift by the movement of air over a wing or wings, whilst an Aerostat derives its lift from a gas. So now we have the four forces with us, let us dig a bit deeper into these two classes of aircraft.

We are going to start at the beginning (literally), as the first manned flight was aboard a lighter-than-air aircraft, in other words an Aerostat. Aerostats all share the fact that their lift is generated through the use of a gas, which may be heated air or a gas less dense than air, such as helium or hydrogen. This gas is contained inside an envelope, to which can be attached the crew compartments and sometimes additional structures such as engines, cargo storage or even miniature wings. But how does this work? The science behind it relates to fluid dynamics (air behaves as a fluid, effectively a very thin liquid) and Archimedes' principle. A balloon keeps rising until the buoyant lift and its weight balance; at that point the lift can support the weight but drive it no higher. A balloon manages this balance by venting gas in order to descend and dropping ballast to rise again. Balloons rely solely on winds as their means of thrust and lack any form of directional control. Airships differ from their uncontrolled counterparts in that they feature an engine to provide thrust and small control surfaces (think miniature wings) to provide directional control.
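To put rough numbers on Archimedes' principle as described above, here is a minimal Python sketch of the net lift of a gas envelope. The envelope volume and gas densities are illustrative assumptions for this example, not figures from the article.

```python
# Illustrative sketch of Archimedes' principle for an aerostat.
# All figures below are assumptions chosen for the example.

def buoyant_lift_kg(envelope_volume_m3, ambient_density, gas_density):
    """Net lift in kilograms: weight of displaced air minus weight of the lifting gas."""
    return envelope_volume_m3 * (ambient_density - gas_density)

# Assumed densities in kg/m^3: sea-level air ~1.225,
# air heated to ~100 C ~0.946, helium ~0.166.
volume = 2800.0  # m^3, roughly a sport-balloon envelope (assumption)

print(buoyant_lift_kg(volume, 1.225, 0.946))  # hot air: ~781 kg of gross lift
print(buoyant_lift_kg(volume, 1.225, 0.166))  # helium: ~2965 kg of gross lift
```

The craft climbs while this gross lift exceeds the combined weight of envelope, crew compartment and payload, which is exactly the balance the venting-and-ballast routine above manipulates.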
The airship's mastery of directional control, coupled with the lifting properties of lighter-than-air craft and relatively low operating costs, saw airships find major prominence throughout the first half of the 20th century, becoming a common sight across the skies of Europe and the Americas. Today both forms of Aerostats can be found alive and well, their low operating costs seeing them come back into favour. Whether it be a champagne breakfast watching the sunrise beneath a hot-air balloon or the platform filming the Super Bowl, Aerostats, once thought on the verge of abandonment, are here to stay. Aerostats are fairly rare in the virtual aviation world, however, mainly because their flight models are seldom catered for in great detail. Nevertheless, you can fly them thanks to some very innovative developers out there (my favourite is the Zeppelin NT).

In contrast to their lighter-than-air cousins, an Aerodyne is heavier than air and derives its lift from a thrust force designed to drive airflow over the wings. In an aerodyne the four forces we touched on earlier come more into play, as balancing them is critical not only to get an aerodyne airborne but to maintain control over it once it's up there.
Image Credit: NASA
Horizontal wings are the primary source of lift on an aerodyne. They are usually bulged on one side and towards the front, leading to a difference in pressure that results in the upward force of lift – an effect commonly explained by Bernoulli's principle. By adjusting this flow of air over these surfaces a pilot can control an aerodyne. When a pilot moves the controls of an aerodyne, it is to control the three Axes of Flight – Pitch, Roll and Yaw. Pitch adjusts the up-and-down direction of the nose of the craft and is controlled by the elevators. These are usually located as part of the horizontal stabiliser (the little wing at the back of the aeroplane). Roll adjusts the rotation of the aircraft about its lengthwise axis and is controlled by the ailerons. Ailerons are located on each wing and act to push one wing up while pushing the other down. Yaw is the side-to-side motion of the aircraft and is controlled by manipulation of the rudder. Moving the rudder changes the airflow past the tail of an aircraft and pushes the tail in one direction or another. By balancing these three axes and the thrust keeping the air flowing over the wings, an aerodyne is able to maintain controlled flight.

There are two primary classifications of Aerodynes. Fixed-wing aerodynes are the most common type in existence and range in size and form from small to truly massive in scale; from a Piper J-3 Cub all the way through a C-17 Globemaster III and beyond, they are all examples of Fixed-wing aerodynes and share many physical characteristics. Prototypical components include:
- Fixed horizontal wings – These generate lift as outlined above and maintain roll stability
- Fuselage – The elongated body of the aircraft that connects the components together
- Horizontal stabiliser – Essentially a small inverted wing that aids in maintaining stability in pitch
- Vertical stabiliser – Another wing-like surface that aids in maintaining stability in yaw
- Landing gear – A set of wheels, floats or skids designed to support the aircraft whilst on the surface
Powered fixed-wing aerodynes – aeroplanes – generate thrust through an engine, whether that engine drives a propeller, is an air-breathing jet engine or is a closed-cycle rocket engine.
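Whether the thrust comes from an engine or, as with the gliders discussed next, from trading altitude for airspeed, the lift a wing produces is commonly estimated with the standard lift equation, L = ½ρv²SC_L. Here is a minimal sketch; every number in it is an assumption chosen to resemble a small trainer aircraft, not data from the article.

```python
# A minimal sketch of the standard lift equation L = 0.5 * rho * v^2 * S * C_L,
# which quantifies the wing lift described above. All values are assumptions.

def lift_newtons(air_density, airspeed, wing_area, lift_coefficient):
    """Lift force in newtons from the standard lift equation."""
    return 0.5 * air_density * airspeed**2 * wing_area * lift_coefficient

rho = 1.225  # kg/m^3, sea-level air density
v = 36.0     # m/s, roughly 70 knots (assumed cruise speed)
S = 16.2     # m^2, wing area similar to a small trainer (assumption)
c_l = 0.6    # dimensionless lift coefficient (assumption)

lift = lift_newtons(rho, v, S, c_l)
print(f"{lift:.0f} N ~= {lift / 9.81:.0f} kg supported")  # about 7700 N / ~790 kg
```

Note how lift grows with the square of airspeed: this is why a glider that keeps air moving over its wings can stay aloft with no engine at all.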
Un-powered or free-flying fixed-wing aerodynes – gliders – rely on a launch mechanism, such as a tow aircraft or a hill launch, to get them into the air and generate their initial lift. Fixed-wing aerodynes have been designed to operate in a wide variety of roles and operating environments, from recreational and training activities, through agricultural and cargo transportation, to passenger carrying.

Rotary-wing aerodynes share many attributes with their fixed-wing compatriots, such as a fuselage, horizontal and vertical stabilisers and landing gear. The primary difference between them is in the way the wings operate. Instead of using fixed horizontal wings, rotary-wing aerodynes use a series of wings (rotor blades) spinning around a mast to produce their lift. The same aerodynamic principles still apply, except that the turning of the rotor blades induces the lifting force, as opposed to relying on forward motion of the whole aircraft. To achieve stable flight, the torque of the spinning rotor blades needs to be balanced, either by a tail rotor – a smaller set of blades mounted perpendicular to the plane of the main rotor – or through the use of multiple sets of rotor blades (like the Boeing Vertol CH-47 Chinook). The significant advantage of a rotary-wing design is that it can maintain its position horizontally as well as vertically, resulting in the ability to land and take off without the runways and prepared landing areas that a fixed-wing aerodyne requires. The challenge it raises is that the thrust-to-weight ratio must be very high in order to drive the rotor blades fast enough to generate the required lift to get airborne. Rotary-wing craft provide the capability of operating in confined areas and terrain that is otherwise inaccessible to aircraft. This, along with their ability to maintain their relative position through hovering, has led them to prominence in emergency services, tourism and remote-location supply roles around the world.

Having now defined what an aircraft is, the next step is to discuss the categories they are grouped by. All aircraft operate within the atmosphere; however, their operation is classified into a series of regimes based on their normal operational speed, quoted relative to sea level.

General Aviation (100 – 350 Miles Per Hour)
This regime is in common use by helicopters and smaller aircraft. These aircraft fulfil a variety of roles including agriculture, emergency service support, small-scale passenger transport and water operations. Examples of aircraft in this regime include the Grumman Albatross and the Piper Aztec.

Subsonic (350 – 750 Miles Per Hour)
This regime is populated mostly by commercial and military jet aircraft that operate up to just below the speed of sound. Some aircraft in this category are also considered to be transonic, in that they have the ability to operate in excess of the speed of sound but do not do so routinely. These aircraft are characteristically used for commercial carriage of freight and passengers as well as military combat and support applications. Examples of aircraft in this category include the Boeing 737 and the Lockheed Martin F-16.

Supersonic (760 – 3500 Miles Per Hour / Mach 1 – 5)
Aircraft that populate this regime are designed to operate primarily at speeds faster than the speed of sound. Presently this realm is exclusive to military aircraft: high-speed fighters, bombers and reconnaissance aircraft.
Only one aircraft has seen sustained commercial operation in this class: the Concorde. Aircraft in this category are characterized by very clean aerodynamic designs and often highly swept wings in a delta configuration. The Convair F-106 Delta Dart and the English Electric Lightning are examples of this type of aircraft.

Hypersonic (3500 – 7000 Miles Per Hour / Mach 5 – 10)
Hypersonic aircraft design is the highest tier of aircraft operation. Due to the rise in aerodynamic heating as airspeed increases, as well as the changing behaviour of air at such high speeds, hypersonic flight offers daunting challenges to engineers. This speed range is most commonly experienced by spacecraft upon re-entry to Earth's atmosphere or rockets leaving it. Thus far only a single manned aircraft has achieved hypersonic flight – the North American X-15. The X-15 was a rocket-powered aircraft that was carried aloft by a B-52 mother-ship before being launched to the edge of the atmosphere and beyond it. Since the end of the program in 1968, the only other aircraft to achieve hypersonic flight have been unmanned research aircraft.

In this article we have introduced the definition of an aircraft, discussed the main types of aircraft and what they consist of. Beyond this we have also examined the classifications of flight regimes, as well as giving a brief introduction to some of the methods of achieving and maintaining controlled flight – all told, a lot of ground has been covered. Aviation is a science that is always evolving, and an exciting field to be part of, both in the real world and the virtual world.
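As a recap of the speed regimes described above, here is a small sketch that buckets an airspeed into the regime names used in this article. The boundaries come from the headings above (which leave a small gap between 750 and 760 mph; the sketch simply treats anything above 750 as supersonic), and the sample speeds are rough published figures for each type, not values from this article.

```python
# Recap of the article's flight regimes: bucket an airspeed (mph) into a regime name.
# Boundary values are taken from the regime headings above.

def flight_regime(speed_mph):
    if speed_mph < 100:
        return "Below the regimes discussed (e.g. balloons, very slow aerodynes)"
    if speed_mph <= 350:
        return "General Aviation"
    if speed_mph <= 750:
        return "Subsonic"
    if speed_mph <= 3500:
        return "Supersonic"
    if speed_mph <= 7000:
        return "Hypersonic"
    return "Beyond the regimes discussed (re-entry / launch vehicles)"

# Approximate cruise or record speeds for the examples named in the text.
for craft, mph in [("Piper Aztec", 210), ("Boeing 737", 520),
                   ("Concorde", 1350), ("North American X-15", 4520)]:
    print(f"{craft}: {flight_regime(mph)}")
```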
The Latino Movement
In post-World War II America, Spanish-speaking groups faced discrimination as well. Coming from Cuba, Puerto Rico, Mexico and Central America, they were often unskilled and unable to speak English. Some worked as farm laborers and at times were cruelly exploited while harvesting crops; others gravitated to the cities, where, like earlier immigrant groups, they encountered serious difficulties in their quest for a better life. Chicanos, or Mexican-Americans, mobilized in organizations like the radical Asociacion Nacional Mexico-Americana, yet did not become confrontational until the 1960s. Hoping that Lyndon Johnson's poverty program would expand opportunities for them, they found that bureaucrats failed to respond to less vocal groups. The example of black activism in particular taught Hispanics the importance of pressure politics in a pluralistic society. The National Labor Relations Act of 1935 had excluded agricultural workers from its guarantee of the right of labor to organize and bargain collectively. But Cesar Chavez, founder of the overwhelmingly Hispanic United Farm Workers, demonstrated the efficacy of direct action in seeking recognition for his union. Taking on the grape growers of California, Chavez called for a nationwide consumer boycott that finally provided exploited migrant workers with union representation. Similar boycotts of lettuce and other products were also successful. Though farm interests continued to try to obstruct Chavez's organization, the legal foundation had been laid for representation to secure higher wages and improved working conditions. Hispanics became politically active as well. In 1961 Henry B. Gonzalez won election to Congress from Texas. Three years later Eligio ("Kika") de la Garza, another Texan, followed him, and Joseph Montoya of New Mexico went to the Senate. Both Gonzalez and de la Garza later rose to positions of power as committee chairmen in the House. In the 1970s and 1980s, the pace of Hispanic political involvement increased, and by the time Bill Clinton became president, two prominent Hispanics were named to his cabinet: former San Antonio mayor Henry Cisneros as secretary of housing and urban development (HUD), and former Denver mayor Federico Peña as secretary of transportation.
Answer the Questions by Playing a Matching Game
Once you have completed your first reading of the passage and have achieved a solid understanding of what it says, you should move on to the questions. If you come across a question you don't entirely understand, try to restate the question in your own words. Once you know what information the question is looking for, refer back to the passage, using your notes as guidelines. The most reliable method for choosing the correct answer is essentially playing a matching game. Before looking at the answers, you should try to answer the question in your own words. By doing this, you can avoid being influenced by incorrect but tempting answer choices. Once you've come up with your answer, look at the answers provided by the ACT writers and pick the one that best matches your answer. If your answer doesn't match up with any of the choices, you probably did something wrong. In that case, you can quickly go over the question again or move on to the next question, marking the current question so you can come back to it. Here's a summary of the process for answering questions:
1. Read the question and, if necessary, restate it in your own words so you understand what it is asking.
2. Refer back to the passage.
3. Formulate an answer in your own words, without looking at the choices provided.
4. Match your answer to the choices provided.
Conjunctivitis, commonly called pink eye, is an inflammation of the mucous membrane that lines the eyelids and covers the white portion of the eye. Symptoms of pink eye include redness, swelling, itching, and pus in the membrane. This condition may be caused by contact lens solution, allergy, bacteria, virus, smoke, dust, Reiter's syndrome in men, or chemical irritants such as chlorine, smog, or those found in makeup. Pink eye is highly contagious when the cause is a virus and is spread in the same manner as the common cold. If not treated, the condition can lead to bronchitis or pneumonia because of drainage from the eye into the nasal passages and down the throat. A deficiency of vitamin A, vitamin B6 or riboflavin may cause pink eye symptoms. The diet should be adequate in these nutrients to help prevent the condition. Certain forms of pink eye are the result of calcium deficiency. In addition, meals should include foods that dampen inflammation.

Etiology of pink eye (conjunctivitis)
The causes of pink eye fall into two broad categories: infectious (bacterial or viral) and noninfectious. Noninfectious causes include:
- Persistent irritation (such as lack of tear fluid or an uncorrected refractive error)
- Toxic exposure (irritants such as smoke, dust, etc.)
- Another underlying disorder (such as Stevens-Johnson syndrome)

Symptoms of pink eye
Typical symptoms exhibited by all patients include reddened eyes and sticky eyelids in the morning due to increased secretion. Any conjunctivitis also causes swelling of the eyelid, which will appear partially closed (pseudoptosis). Foreign-body sensation, a sensation of pressure, and a burning sensation are usually present, although these symptoms may vary between individual patients. Intense itching always suggests an allergic reaction. Photophobia and lachrymation may also be present but can vary considerably. Simultaneous presence of blepharospasm suggests corneal involvement (keratoconjunctivitis).

Diagnosis of pink eye
Physical examination reveals peripheral injection of the bulbar conjunctival vessels. In children, possible systemic symptoms include sore throat or fever if the conjunctivitis is suspected of being of adenoviral origin. Lymphocytes predominate in stained smears of conjunctival scrapings if conjunctivitis is caused by a virus. Polymorphonuclear cells (neutrophils) predominate if conjunctivitis is due to bacteria; eosinophils, if it is allergy related. Culture and sensitivity tests identify the causative bacterial organisms and indicate appropriate antibiotic therapy.

Treatment of pink eye
Treatment for pink eye varies with the cause. Bacterial conjunctivitis requires topical applications of the appropriate broad-spectrum antibiotic. Although viral conjunctivitis resists treatment, sulfonamide or broad-spectrum antibiotic eye drops may prevent a secondary infection. Patients may be contagious for several weeks after onset. The most important aspect of treatment is preventing transmission. Herpes simplex infection generally responds to treatment with trifluridine drops, vidarabine ointment or oral acyclovir, but the infection may persist for 2 to 3 weeks. Treatment for vernal (allergic) conjunctivitis includes administration of corticosteroid drops followed by cromolyn sodium, cold compresses to relieve itching and, occasionally, oral antihistamines. Instillation of a one-time dose of erythromycin or 1% silver nitrate solution (Credé's procedure) into the eyes of neonates prevents gonococcal conjunctivitis.
- Apply compresses and therapeutic ointment or drops, as ordered.
Don't irrigate the eye, as this will spread the infection. Have the patient wash his hands before he uses the medications. Tell him to use clean washcloths or towels frequently so he doesn't infect his other eye.
- Teach the patient to instill eye drops and ointment correctly, without touching the bottle tip to his eye or lashes.
- Remind the patient that the ointment will blur his vision.
- Stress the importance of safety glasses for the patient who works near chemical irritants.
- Notify public health authorities if cultures show N. gonorrhoeae.

Prevention of pink eye
To prevent conjunctivitis from occurring or recurring, teach your patient to practice good hygiene. Encourage the following prevention tips.

Practice good hygiene
To encourage good eye hygiene, teach proper hand-washing technique, because bacterial and viral conjunctivitis are highly contagious. Stress the risk of spreading infection to family members by sharing washcloths, towels, and pillows. Suggest the use of tissues or disposable wipes to reduce the risk of transmission from contaminated linens. Caution the patient against rubbing his infected eye, which could spread infection to his other eye.

Use cosmetics carefully
If the patient uses eye cosmetics, instruct her not to share them. Also, encourage her to replace eye cosmetics regularly.

Keep contact lenses clean
If the patient wears contact lenses, teach him to handle and clean them properly. Also, while his eyes are infected, he should stop wearing the lenses until the infection clears.

Avoid contact with contagious people
Because conjunctivitis is highly contagious, particularly among children, infected children should avoid close contact with other children. Warn the patient with "cold sores" to avoid kissing others on the eyelids to prevent the spread of the disease.

Homeopathic treatment of pink eye
Homeopathy is one of the most popular holistic systems of medicine. The selection of a remedy is based upon the theory of individualization and symptom similarity, using a holistic approach. This is the only way through which a state of complete health can be regained by removing all the signs and symptoms from which the patient is suffering. The aim of homeopathy is not only to treat pink eye but to address its underlying cause and individual susceptibility. As far as therapeutic medication is concerned, several remedies are available to treat pink eye symptoms; they can be selected on the basis of the cause, sensations and modalities of the complaints. For individualized remedy selection and treatment, the patient should consult a qualified homeopathic doctor in person. The following remedies are considered helpful in the treatment of pink eye.
Aconite Nap. – From cold, injury, dust, surgical operations; scrofulous inflammation with enlarged glands
Belladonna – Head remedy for pink eye, when eyes are bloodshot and very red; much inflamed and painful
Euphrasia – With watering from the eyes which is acrid
Argentum Nit. – Profuse, purulent discharge; cornea opaque; lids sore, thick, swollen and ulcerated; agglutinated in the morning. The canthi are red as blood. Mucus obstructs the vision unless frequently wiped off. Catarrhal, ulcerative; opacities of the cornea. Better from cold application
Kreosote – Inflamed and red eyes that bleed easily
Apis M. – Swelling of the lids with stinging, shooting pain and photophobia
Rhus Tox. – Thick purulent discharge.
Profuse hot lachrymation; restlessness; worse about midnight
Alumina Silicate – Pain in the eyes; burning in the evening as from smoke; inflammation in the open air with itching; burning in the lids and in the canthi; pain in the eyes as from sand
Hepar Sulph. – Has a remarkable action on pink eye symptoms, especially when there is inflammation of the eyes with offensive, thick, purulent discharge; ulcers of the cornea with bloody, offensive discharge
An anonymous reader writes "'Houston, we've had a problem,' said astronaut Jack Swigert on April 13, 1970. But the problem wasn't as simple as three astronauts potentially trapped in the void of space, 200,000 miles from Earth. The catastrophic risk came from the SNAP-27 radioisotope thermoelectric generator (RTG), a small nuclear power source (not a reactor) that was going to be placed on the moon to power experiments, carrying plutonium-238 in Apollo 13's lunar module. As luck would have it, NASA had experience losing RTGs – a navigation satellite failed to reach orbit in 1964 and scattered small amounts of plutonium over the Indian Ocean. The SNAP-27 had been engineered to make it back to Earth intact in such an incident. The plutonium, like the astronauts, apparently survived reentry and came to rest with what remained of the lunar module in the Tonga Trench south of Fiji, approximately 6-9 kilometers underwater (its exact location is unknown). Extensive monitoring of the atmosphere in the area showed that no radiation escaped."
Liquid, gaseous or solid biofuels hold great promise to deliver an increasing share of the energy required to power a new global green economy. Many in government and the energy industry believe this modern bioenergy can play a significant role in reducing pollution and greenhouse gases, and promoting development through new business opportunities and jobs. Modern bioenergy can be a mechanism for economic development enabling local communities to secure the energy they need, with farmers earning additional income and achieving greater price stability for their production. But it is not that simple. Biofuels remain a complex and often contentious issue. Over the past few years the risks of competition with food production and potential negative impacts on the atmosphere, biodiversity, soil and water have been highlighted. The way biofuels are made and used is critical: they may either help mitigate or contribute to climate change, reduce or exacerbate impacts on ecosystems and resources. Issues related to biofuels are complex and interconnected: they require solid planning and balancing of objectives and trade-offs. Safeguards are needed and special emphasis should be given to options that help mitigate risks and create positive effects and co-benefits. Biofuels Vital Graphics is designed to visualise the opportunities, the need for safeguards, and the options that help ensure sustainability of biofuels to make them a cornerstone for a Green Economy. It is meant as a communications tool, rather than providing new analysis. It builds on a 2009 report by the International Panel for Sustainable Resource Management of the United Nations Environment Programme, Towards Sustainable Production and Use of Resources: Assessing Biofuels, and refers to research produced since.
The World Health Organization says lead poisoning has devastating health consequences, especially for children. The WHO is raising awareness about the problem during International Lead Poisoning Prevention Week. The theme is Lead-Free Kids for a Healthy Future.

It's estimated that 143,000 people die every year from lead poisoning. Lead exposure also contributes to 600,000 new cases annually of children with intellectual disabilities. Much of the problem is blamed on lead paint. Carolyn Vickers is Team Leader for Chemical Safety in the WHO's Department of Public Health and Environment. "Lead poisoning is considered by WHO to be one of the top 10 chemical exposures of major public health concern. And it's particularly worrying because it affects children and a developing fetus. It also affects adults through occupational exposure, with the high burden in developing countries."

But it's not just in developing countries. "Lead exposure is a big problem in most if not all countries. In some countries lead paint is still used. That's obviously adding every year to the number of houses, schools and buildings that are treated with lead paint. But also, even in developed countries, lead paint has been applied for many decades, and when people undertake activities such as renovating their home, it causes the lead to form dust, which children can become exposed to. So it is actually a problem in most countries," she said.

She said lead dust particles can be so fine that people don't even know they're being exposed. In children, lead can damage the developing nervous system, including the brain. IQ can be affected. High lead exposure can cause irreversible damage. "Here we're talking about different kinds of lead exposure. Before, I was talking about lead paint. But children can also be exposed to lead through activities such as hazardous work. If children are involved in the recycling of batteries or are playing where batteries are recycled. Also, children can be exposed to lead if they're engaged in hazardous mining activities in developing countries. And here we see very serious cases of lead poisoning," said Vickers.

For adults, heavy exposure can come from working in battery recycling, smelting or painting. It can affect adults' kidneys and blood pressure. One of the major ways many countries have reduced lead in the environment is to ban its use in gasoline. "As a result of that action there has been a decrease worldwide in exposure to lead. That's a very encouraging sign and it's proving that action leads to good outcomes -- and that the next step is to tackle lead paint, which we believe is very achievable," she said.

Lead may be found in paint pigment. She said, "There are some global suppliers of pigment. So it's feasible to tackle a large amount of it by encouraging or requiring manufacturers that ship their pigment products to stop doing that and to only use non-lead versions. Then the next step is to educate paint formulators about the hazards of lead paint -- to encourage them to look for the non-lead alternative. And to encourage governments to pass regulation, legislation or other relevant controls to prohibit lead decorative paints."

Thirty countries have phased out lead paint. The WHO, the U.N. Environment Programme and the Global Alliance to Eliminate Lead Paint have set a target of 70 countries by 2015.
Bits Per Second (bps)
Definition - What does Bits Per Second (bps) mean?
Bits per second (bps) is a measure used to show the average rate at which data is transferred between a computer and a data transmission system. The bit rate is generally measured in bits per second (bps) and sometimes in bytes per second (Bps); one byte per second equals eight bits per second.
Techopedia explains Bits Per Second (bps)
Bits per second is the standard measure of bit-rate speed. However, millions of bits can be transferred in a second, and measuring in single-bit units can be cumbersome. To simplify data transfer rates, an International System of Units (SI) prefix is used. These include kilo (1 kbps = 1,000 bps), mega (1 Mbps = 1,000,000 bps) and giga (1 Gbps = 1,000,000,000 bps).
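As a small illustration of these prefixes, here is a minimal Python sketch that renders a raw bps figure with the largest fitting SI prefix; the helper function name is my own for this example, not a standard API.

```python
# A minimal sketch of applying SI prefixes to a raw bit-rate figure, as
# described above: kilo = 10^3, mega = 10^6, giga = 10^9 (decimal, per SI).
# Note: bytes per second (Bps) = bits per second (bps) / 8.

def format_bit_rate(bits_per_second):
    """Render a raw bps value using the largest SI prefix that fits."""
    for factor, unit in [(10**9, "Gbps"), (10**6, "Mbps"), (10**3, "kbps")]:
        if bits_per_second >= factor:
            return f"{bits_per_second / factor:.2f} {unit}"
    return f"{bits_per_second:.0f} bps"

print(format_bit_rate(54_000_000))  # 54.00 Mbps
print(format_bit_rate(1_500))       # 1.50 kbps
print(format_bit_rate(950))         # 950 bps
```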
Get the Facts
Organ and tissue donation occurs after a person has died. Transplantable organs and tissue can be donated to help the lives of individuals in need. Additional information can be found at Understanding Donation. Anatomical gift means a donation of all or part of a human body, after death, for the purpose of transplantation, therapy, research or education. Donation is important because thousands of people die or suffer needlessly each year due to a lack of organ and tissue donors. A transplant is often the only hope. A single donor can save the lives of up to eight (8) people and enhance the lives of at least 50 others. Vital organs and tissues can be donated for transplantation. Organ donation is an option for people who have been declared legally dead by brain death criteria. Tissue donation is an option for people who have been declared legally dead by brain or cardiac death criteria.
- Organs – heart, kidneys, pancreas, lungs, liver, and small intestine. Visit organdonor.gov for more information. Organ transplants are life-saving.
- Tissue – cornea, skin, bone, heart valves, blood vessels, and tendons. Tissue donation, such as skin for burn victims or eye donations for sight-restoring cornea transplants, gives people a chance to lead full, productive lives. For more details about tissues that can be donated and how they are used to help others, visit our Donation Types page.
- Bone Marrow – a living donation
Organs must be recovered as soon as possible after death is legally declared. Tissue can be removed up to 24 hours after death.