Electricity 4: Series and Parallel Circuits

In this section we will look at some simple DC series and parallel circuits. The purpose is to observe the relationships between voltage, current, resistance and power. We will explore how changing the voltage affects the current, and the fact that power is a square-law function of voltage and current. In all of these examples we will assume that there is NO resistance in the wires, ammeter or battery (not possible in the real world).

Figure 1, below, demonstrates a simple complete circuit with one resistor. It could be viewed as a series circuit because the resistor is in series with the ammeter, and could also be considered a parallel circuit because the load resistor is connected directly across the battery. In this circuit the current flows from the battery through the ammeter, through R1, and back to the battery. Note that the current is equal in all parts of this circuit.

In the first example, if the voltage were unknown, we could find it using Ohm's law, E = IR: E = 1 A x 10 ohms = 10 volts.

In the second example we change the battery voltage to 20 V and leave the resistor at 10 ohms. To find the current this circuit will draw, we use Ohm's law again, I = E/R: I = 20 V / 10 ohms = 2 amps. Calculating the power, P = EI: P = 20 V x 2 A = 40 watts. Note that when the source voltage is doubled and the resistance remains constant, the power increases by a factor of 4. This is a "square law" function: had the voltage been increased 3 times, the power would have increased 9 times.

In the third example we have a 12-volt battery and the ammeter indicates 48 amps of current flowing. What is the value of R1? R = E/I: R = 12 V / 48 A = 0.25 ohm. The power would then be: P = 12 V x 48 A = 576 watts.

The purpose of this exercise was to illustrate the effects of changing voltage, current and resistance in a circuit, their relationship to each other, and how these changes affect the total power.

Figure 2, below, illustrates two resistors connected in series. In this circuit the current is the same in all parts of the circuit; the current path is from the battery through the ammeter, through R1, then R2, and back to the battery. The values of two resistors ADD together when they are in SERIES: R1 + R2 = 5 + 15 = 20 ohms. With 20 volts applied, E = IR gives 20 V = 1 A x 20 ohms, so the circuit draws 1 amp.

Note: the voltage drop across R1 would be 5 V, as shown by E = IR: 5 V = 1 A x 5 ohms. The same calculation for R2 gives 15 V (1 A x 15 ohms). NOTE: The voltage drop in a SERIES circuit is always proportional to the resistance. If one assumes R2 is a light and R1 is a piece of wire with 5 ohms of resistance, it is obvious that 1/4 of the power would be lost getting to the light. This is how one adapts these concepts to practical applications: if you understand what happens in a circuit like this, then when you take a voltmeter and start measuring, you can understand what it is telling you and locate problems.

Figure 3, below, illustrates a circuit with 2 resistors in parallel. Here we will not go through all of the E = IR calculations. Note that BOTH resistors have 20 volts across them. If we add the current flowing through R1, which is 1 amp, to the current flowing through R2, we get 1 A + 4 A = 5 A, so the total circuit current is 5 amps. Applying Ohm's law to the whole circuit, R = E/I: 20 V / 5 A = 4 ohms. Using P = EI, the total power provided by the battery is 20 V x 5 A = 100 watts. Of this, 80 watts are dissipated as heat in R2 and 20 watts in R1. Note that the parallel-resistance formula in line 4 of the figure calculates to 4 ohms, which agrees with the power calculations just done.
If more than 2 resistors are connected in parallel, the formula is the one shown in the bottom line of the figure. Note: IN THIS CIRCUIT WITH 2 RESISTORS IN PARALLEL, THE CURRENT THROUGH THE RESISTORS IS INVERSELY PROPORTIONAL TO THE RESISTANCE OF THE RESISTORS. Note also that when we insert Ohm's law [E = IR, and I = E/R] into the power formula P = EI, then P = [IR]I, therefore P = I²R; and if we insert P = E[E/R], we get the formula P = E²/R. You can see these calculations used in lines 2 and 3 above. This circuit should demonstrate how current flow is inverse to resistance values in parallel circuits, and that the current flow is not the same in all parts of the circuit.

Further note: if a resistor were to replace the ammeter, you would have a series-parallel circuit. The 2 parallel resistors R1 and R2 would look like a single 4-ohm resistor to the series resistor, and the voltage would divide between them in proportion to the added resistor's value and the 4 ohms of the parallel pair.

From here we will proceed to some practical troubleshooting and magnetism, not necessarily in that order, but whatever I get finished next. Al...
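The worked examples in this section can be checked with a short script. This is only a sketch; the helper function names are my own, not from the text:

```python
def current(voltage, resistance):
    """Ohm's law: I = E / R."""
    return voltage / resistance

def power(voltage, amps):
    """P = E * I (equivalently I**2 * R or E**2 / R)."""
    return voltage * amps

def series(*resistances):
    """Series resistances simply add."""
    return sum(resistances)

def parallel(*resistances):
    """Reciprocal of the sum of reciprocals (the bottom-line formula)."""
    return 1 / sum(1 / r for r in resistances)

# Figure 1 examples: 20 V / 10 ohms = 2 A, giving 40 W; 12 V at 48 A gives 576 W
assert current(20, 10) == 2
assert power(20, 2) == 40
assert power(12, 48) == 576

# Figure 2: 5 + 15 = 20 ohms in series
assert series(5, 15) == 20

# Figure 3: 20 ohms and 5 ohms in parallel look like 4 ohms
assert abs(parallel(20, 5) - 4) < 1e-9

# Square-law check: P = E*I = I**2*R = E**2/R for E = 20 V, R = 4 ohms
E, R = 20, 4
I = current(E, R)   # 5 A
assert power(E, I) == I**2 * R == E**2 / R == 100
```

Doubling E with R fixed doubles I and quadruples P, the square-law behaviour described above.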
This video-based, on-demand online course demonstrates a research-based, classroom-proven framework for teaching reading comprehension processes and strategies to students in the intermediate grades. You'll see teachers artfully combining whole groups and small groups, explicit instruction and independent practice, to improve the reading comprehension of all their students in grades 3-6. This course provides a wealth of opportunities to see teachers demonstrate key attributes of good instruction across different grade levels, different comprehension strategies and different types of text. It is designed to familiarize intermediate-grade educators with a powerful model for teaching reading comprehension. Featuring video clips filmed in intermediate-grade classrooms, the course highlights essential components of explicit comprehension instruction, including how to effectively introduce and review comprehension processes and strategies with the entire class; techniques for helping students apply specific strategies to instructional-level text while engaging the rest of the class in meaningful, independent comprehension practice; and strategies that encourage students to think more deeply about their own reading skills and how well those skills support their understanding of text.

You will learn how to:
- Implement a research-based instructional model for the explicit, effective teaching of comprehension processes
- Maximize the benefits of whole group and small group instruction and independent literacy work
- Combine interactive modeling and active student engagement strategies to guide and deepen students' thinking about text
- Ensure that all students are able to apply a broad range of comprehension strategies as they read

Optional Brandman University Graduate Level Professional Development Credit course code: EDLU 9660
Big nature and tiny us
By Bruce Walker

Iceland's Eyjafjallajokull volcano has forced tens of thousands of airline flights in Europe and the North Atlantic to shut down. The last time this volcano erupted, in 1821, it continued for two years. No one knows when the eruption will stop this time. This uncontrolled and unpredictable explosion of nature's power steps across our puny civilizations with frightening ease.

Nineteen years ago, Mount Pinatubo in the Philippines coughed up 20 million tons of sulfur dioxide. The El Chichón eruption in Mexico nine years earlier perceptibly cooled the planet. A few years before that, in 1980, Mount St. Helens erupted and threw gases and particles into the sky that were clearly visible for hundreds of miles. The Icelandic Laki eruption in 1783 was believed by Ben Franklin to have cooled the planet. In 1815 the Tambora volcano in Indonesia produced the "year without a summer": distant New England experienced snowfalls in July, and livestock in America died in huge numbers. Krakatoa, exactly one hundred years after Laki, was twenty times more powerful than Mount St. Helens and cooled the planetary temperature by more than one degree.

These volcanoes are dramatic evidence of a mundane truth: we exercise very little control over our planet. Nature has much more power than we do, and that power is easy to see and to believe. No one needs a hockey-stick-generating software program to prove that a simple, natural volcano produces very real global cooling. What if the Eyjafjallajokull eruption results in a significant cooling of Earth? The Church of Global Warming is in high gear trying to persuade us that the Icelandic volcano, which has crippled human travel in a way that no manmade environmental change has ever done, is not really that big a deal. The Pinatubo volcano spewed its 20 million tons of sulfur dioxide into the atmosphere and lowered the global temperature by one degree Fahrenheit.
The Icelandic volcano today is spewing 750 tons of sulfur dioxide into the air each second, according to the Icelandic Institute of Earth Sciences. That does not sound like a lot, until one does the math: it is 2.7 million tons an hour. How much of that is entering the stratosphere? The Atlantic Monthly reports that the ash cloud extends seven miles up, into the stratosphere. So maybe this volcano will cool the Earth perceptibly too.

The headline story is this: Big Nature and Tiny Us. Humans and their technologies are helpless against the whims of volcanoes, tsunamis, earthquakes, and the other burps and hiccups of our planet. We have known for many decades that some day, perhaps in the near future, California and much of the Pacific Coast might be violently savaged by the shifting of the San Andreas Fault, and that cities might quickly wind up on the floor of the Pacific Ocean. What would that do to "the environment"? In the narrow and petty minds of Warmers, the consequence would be that tens of millions of internal combustion engines and modern homes would stop ruining the environment; but, of course, the true impact would be vastly more deadly to man and his tenuous hold upon life here on Earth, and upon the environment of our world. Why are these busybodies not working on ways to keep plate tectonics from producing this calamity? Because no one can really stop the drift of continents or the volcanoes, earthquakes, and hurricanes which nature causes: Big Nature and Tiny Us.

One fine day a meteor or an asteroid will smash into our planetary home. We will have little advance warning, and we can scarcely predict when it will happen. There is not much we can do to stop it. The impact could easily cause the destruction of all human life and perhaps the extermination of much animal and plant life on Earth.
Despite the conflict about manmade global warming in the scientific community, there is no disagreement that this disaster will, eventually, happen and that it will cause indescribable harm. Yet the clergy of the Church of Global Warming propose virtually nothing at all to meet this threat, which the dark ocean of outer space whispers is not a question of "if" but of "when."

The Church of Global Warming is not really concerned about the environment at all. The obsession is with controlling individual human behavior. Liberated man is the enemy of Warmers, and human liberty is their hated object. Their goal, in the simplest terms, is raw political power, whatever harm this power may cause the rest of us. They must paint man as a creature to be regulated, licensed, and taxed into regimented slavery in a vast empire of pseudo-science. The truth, that nature is enormous and we are puny, would lead us to conquer what we can to make our lives safer, richer, and happier. So, like Druid or Aztec priests before them, these modern clergy of offended nature propose the myth that what we innocently do keeps the sun from rising or spring from coming, or provokes the gods into spewing forth lava from their homes inside volcanoes. Only through the sacrifices that we present to the priests can our offense against nature be placated. All mischief must have a cause in the conduct of man, because otherwise we could discard our chains and live as free men. These modern mythmakers seek to drag us back to the dim days of long ago, when dreary pantheons demanded goods and lives from us, and the mediators of this process were always scolding, angry, fat priests.

Bruce Walker is the author of two books: Sinisterism: Secular Religion of the Lie and The Swastika against the Cross: The Nazi War on Christianity.
Before the world can convert electronics to some new kind of technology, researchers must first understand how these new technologies work. Toward that end, researchers at the Joint Quantum Institute have combined multiple new technologies to study them better. Quantum dots, plasmonics, and microfluidics are the three technologies, and each alone could shape the future in many ways. Quantum dots are like designer atoms: the particles' photoelectric properties can be tailored to whatever someone wants. Microfluidics is the study of how nanoliters of fluid behave within equally small channels. Plasmonics deals with a curious coupling of electrons and photons that enables light to be squashed down to sizes normally impossible and then to travel along a metal as though it were an electron. The researchers combined these technologies by placing a silver nanowire in a microfluidic crossed-channel device, with quantum dots floating in the liquid. A green laser is then aimed at the system, causing the dots to emit red light, one photon at a time. These photons are absorbed by the nanowire, which changes its electric field. The changing electric field in turn induces changes in the quantum dots, causing them to produce a different color of light, which the researchers could see with a CCD camera. Potentially, this setup could be used to create plasmonic equivalents of electronic circuit components. For now, though, it should provide insight into the plasmonic effects of nanowires by showing how they affect nearby quantum dots. Source: Joint Quantum Institute
Exact body fat percentage cannot be precisely determined, but multiple methods are available to estimate it. These include: a formula that uses your weight in pounds and waist circumference, the use of calipers to measure skin-fold thickness, or a bioelectrical impedance calculation. Some body fat is required for overall health. It plays an important role in protecting internal organs, providing energy, and regulating hormones. Excess body fat is linked to an increased risk for diseases such as cancer, diabetes, and heart disease.

Essential (minimum required) body fat: men 6–13%; women, up to 32%.

Methods of Calculating Body Fat

- Water Displacement Method
Fat cells in humans are composed almost entirely of pure triglycerides, with an average density of about 0.9 kilograms per liter. Most modern body composition laboratories use the value of 1.1 kilograms per liter for the density of the "fat free mass", a theoretical tissue composed of 72% water (density = 0.993), 21% protein (density = 1.340) and 7% mineral (density = 3.000) by weight. Body density can be determined with great accuracy by completely submerging a person in water and calculating the volume of the displaced water from its weight. Estimation of body fat percentage from underwater weighing has long been considered the best method available, especially in consideration of the cost and simplicity of the equipment.

- Air Displacement Method
A technique for measuring fat mass has been developed using the same principles as underwater weighing. The technique uses air, as opposed to water, and is known as Air Displacement Plethysmography (ADP). Subjects enter a sealed chamber that measures their body volume through the displacement of air in the chamber. Body volume is combined with body weight (mass) in order to determine body density.
The technique then estimates the percentage of body fat and lean body mass (LBM) through known equations (for the density of fat and fat-free mass).

- Bioelectrical Impedance Analysis
The bioelectrical impedance analysis (BIA) method is a more affordable but less accurate way to estimate body fat percentage. The general principle behind BIA: two conductors are attached to a person's body and a small electric current is sent through the body. The resistance between the conductors provides a measure of body fat, since the resistance to electricity varies between adipose, muscular and skeletal tissue. Fat-free mass (muscle) is a good conductor, as it contains a large amount of water (approximately 73%) and electrolytes, while fat is anhydrous and a poor conductor of electric current.

- Skin Fold Method
The skinfold estimation methods are based on a skinfold test, also known as a pinch test, whereby a pinch of skin is precisely measured by calipers at several standardized points on the body to determine the thickness of the subcutaneous fat layer. These measurements are converted to an estimated body fat percentage by an equation. Some formulas require as few as three measurements, others as many as seven. The accuracy of these estimates is more dependent on a person's unique body fat distribution than on the number of sites measured.

- BMI Method
Body fat can be estimated from one's Body Mass Index (BMI): weight divided by the square of height, in kg/m²; if weight and height are measured in pounds and inches, multiply the lb/in² figure by 703.

Formulae to Calculate Your Body Fat

Child body fat % = (1.51 × BMI) − (0.70 × Age) − (3.6 × sex) + 1.4
Adult body fat % = (1.20 × BMI) + (0.23 × Age) − (10.8 × sex) − 5.4

where sex = 1 for males and 0 for females.
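The BMI-based formulas above, and the quoted ~1.1 kg/L density of "fat free mass", can be sketched in a few lines of Python. The function names and the example inputs are my own illustrations, not from the text:

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight / height**2 in kg/m^2.
    (With pounds and inches, use 703 * lb / in**2 instead.)"""
    return weight_kg / height_m ** 2

def adult_body_fat(bmi_value, age, sex):
    """Adult formula from the text; sex = 1 for males, 0 for females."""
    return 1.20 * bmi_value + 0.23 * age - 10.8 * sex - 5.4

def child_body_fat(bmi_value, age, sex):
    """Child formula from the text; sex = 1 for males, 0 for females."""
    return 1.51 * bmi_value - 0.70 * age - 3.6 * sex + 1.4

# Illustrative only: a 40-year-old male with a BMI of 25
print(round(adult_body_fat(25, 40, 1), 1))   # 23.0

# Check the quoted ~1.1 kg/L density of "fat free mass" from its stated
# composition: 72% water (0.993), 21% protein (1.340), 7% mineral (3.000).
ffm_density = 1 / (0.72 / 0.993 + 0.21 / 1.340 + 0.07 / 3.000)
print(round(ffm_density, 2))                  # 1.1
```

Note that the composite density of the stated water/protein/mineral mixture works out to roughly 1.1 kg/L, agreeing with the laboratory value quoted above.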
What Is the Water Cycle?

Have students work in groups of four. Put each group's materials on a tray and place it in a central location for Materials Managers to pick up. Place a container of sand with a scoop in a central area. Let groups measure out the quantities they need.

Materials per Student Group
- 20 ice cubes (approximately)
- 2 cups of sand
- 8-oz measuring cup, or 250 mL beaker
- Cardboard shoebox
- Clear plastic wrap (enough to completely cover top of the shoebox)
- Foil (to cover shoebox interior)
- Lamp with incandescent bulb, or sunny window
- Large rubber band (to secure the plastic wrap)
- Plastic tray to hold materials

Materials per Student
- Copy of "The Water Cycle" page
- Sheet of drawing paper

A plastic shoebox-sized container may be used. If no clear lid is available, cover the shoebox with plastic wrap and secure it with a rubber band.

Funded by the following grant(s):
The Environment as a Context for Opportunities in Schools, Grant Numbers: 5R25ES010698, R25ES06932
Foundations for the Future: Capitalizing on Technology to Promote Equity, Access and Quality in Elementary Science Education
Feb 4, 2013

Limited Digital Devices

For classrooms which may experience a digital device limitation, an alternative may be to connect your single device to your SMARTBoard or to an LCD projector. Whether you use a VGA video adapter for an iPad (shown) or a video adapter for a laptop, tablet, or other device, you can actively engage your students in a lesson as a whole class or in small groups. Getting the students to participate in your activity by manipulating the app or program on the device or directly on the SMARTBoard can be very enriching for them. Here are a few ideas of what you can do with just a single device:

- Show a photo and have the kids write a sentence or paragraph about it.
- FaceTime with another classroom in your building or another state.
- Play a trivia game with the entire class.
- Practice test questions for upcoming exams.
- Build math skills with interactive math games.
- Use the NASA, National Geographic, and various zoo apps for classroom lessons.
- Share student presentations created from another source.
- Use the Earth and Moon app for classroom lessons.
- Take a virtual field trip.
- Use the camera: take different pictures and create a writing assignment using the pictures.
RADIO “EYES” PEER INTO SPACE (Dec, 1955) RADIO “EYES” PEER INTO SPACE WITH STRANGE-LOOKING instruments that catch radio waves from the stars, Australian scientists are probing the mysteries of the universe. So far they have identified 100 “radio stars”—highly localized sources of cosmic static. Several of the instruments are used to study the radio waves that originate on the sun’s surface. A “quiet” sun emits a constant radio signal indicative of its temperature. But when great columns of fire shoot from its surface, the radio waves change and the earth experiences radio fadeouts and magnetic storms. Near Sydney, three separate rhombic aerials rotate to follow the sun from sunrise to sunset. They are connected to radio receivers that tune rapidly from 40 to 240 megacycles each second, and record when and on what wavelength any of these transitory disturbances occur. Another installation consists of 48 parabolic “dish” antennas arranged in two rows at right angles to each other. Operating in a very narrow range, the system can pinpoint any existing active radio areas on the sun. The distribution of neutral hydrogen in space is investigated with a small radio telescope that has a bowl antenna 36 feet in diameter. Hydrogen gas is audible on its receiver as a steady humming note. A larger radio telescope in use is 80 feet in diameter and is mounted in a bowl hollowed out of the ground on a cliff near the entrance to Sydney Harbor. Australia’s biggest radio telescope, however, is the “Southern Cross,” named for its shape. Each of the 1500-foot arms of the cross contains hundreds of receiving elements which pick up the minute static of far-distant radio stars. Eventually this equipment will be augmented by a giant saucer-shaped aerial 250 feet in diameter that will rotate between two towers 200 feet from the ground. This will allow the telescope to scan any section of sky. * * *
Ancient Indian Civilization

The earliest known farming cultures in south Asia emerged in the hills of Balochistan, Pakistan, and included Mehrgarh in the 7th millennium BCE. These semi-nomadic peoples domesticated wheat, barley, sheep, goats and cattle. Pottery was in use by the 6th millennium BCE. Their settlement consisted of mud buildings that housed four internal subdivisions. Burials included elaborate goods such as baskets, stone and bone tools, beads, bangles, pendants and occasionally animal sacrifices. Figurines and ornaments of sea shell, limestone, turquoise, lapis lazuli, sandstone and polished copper have been found. By the 4th millennium BCE we find much evidence of manufacturing. Technologies included stone and copper drills, updraft kilns, large pit kilns and copper melting crucibles. Button seals included geometric designs.

Indus Valley Civilization

By 4000 BCE a pre-Harappan culture emerged, with trade networks that included lapis lazuli and other raw materials. Villagers domesticated numerous other crops, including peas, sesame seed, dates, and cotton, plus a wide range of domestic animals, including the water buffalo, which still remains essential to intensive agricultural production throughout Asia today. There is also evidence of sea-going craft. Archaeologists have discovered a massive, dredged canal and docking facility at the coastal city of Lothal, India, perhaps the world's oldest sea-faring harbour. Judging from the dispersal of artifacts, the trade networks integrated portions of Afghanistan, the Persian coast, northern and central India, Mesopotamia (see Meluhha) and Ancient Egypt (see Silk Road). Archaeologists studying the remains of two men from Mehrgarh, Pakistan, discovered that these peoples of the Indus Valley Civilization had knowledge of medicine and dentistry as early as circa 3300 BCE.
The Indus Valley Civilization is credited with the earliest known use of decimal fractions in a uniform system of ancient weights and measures, as well as negative numbers (see Timeline of mathematics). Ancient Indus Valley artifacts include beautiful, glazed stone faïence beads. The Indus Valley Civilization also provides the earliest known examples of urban planning. As seen in Harappa, Mohenjo-daro and the recently discovered Rakhigarhi, this urban planning included the world's first urban sanitation systems. Evidence suggests efficient municipal governments. Streets were laid out in grid patterns comparable to those of modern New York. Houses were protected from noise, odors and thieves. The sewage and drainage systems developed and used in cities throughout the Indus Valley were far more advanced than those of contemporary urban sites in Mesopotamia.

"Ancient Civilizations/Ancient Indian Civilization" (Wikibooks) http://en.wikibooks.org/wiki/World_History/Ancient_Civilizations#Ancient_Indian_Civilization
What is Tonsillitis? A tonsil infection (tonsillitis) usually has symptoms of a sore throat, fever, painful and/or difficult swallowing, swollen neck glands (lymph nodes) and sometimes muffled voice. The tonsils are composed of tissue that is similar to the lymph nodes or glands found in the neck or other parts of the body. Nearby are the adenoids, located high in the throat behind the nose and soft palate (the roof of the mouth), and not easily visible by just looking into the mouth. Together, they are part of a ring of glandular tissue (Waldeyer's ring) encircling the back of the throat. If someone suffers from frequent tonsil infections, it’s important to see a doctor to determine the cause and to find out if your doctor deems it necessary to remove them surgically to prevent further infections. What is a Tonsillectomy? Tonsillectomy is removal of the tonsils. Tonsillectomy is sometimes recommended for frequently recurring tonsillitis. Tonsillectomy may also be recommended as part of a comprehensive plan in the treatment of sleep disordered breathing or if there is concern for a tumor of the tonsil. Tonsillectomy is performed in the operating room under general anesthesia. Often, adenoidectomy is done at the same time as tonsillectomy.
The striped kelpfish is distributed in lower intertidal zones along the eastern Pacific from British Columbia to Baja California Sur, feeding on marine invertebrates of the benthic epifauna (2). The species is a variable red or brown color with dark striations along the length of the body, and bears 34-37 dorsal spines at maturity (1). Species distribution and genetic differences among the intertidal fishes of the genus Gibbonsia are influenced by their seasonal adaptability to hot and cold water temperatures (3). The distribution patterns of these morphologically similar species, G. elegans, G. metzi, and G. montereyensis, suggest that a wider distribution reflects a greater ability to adapt to extreme temperatures (3). G. metzi has the widest distribution of the three and also the greatest cold-temperature adaptation ability (3).

- Hart, J.L., 1973. Pacific fishes of Canada. Bull. Fish. Res. Board Can. 180:740.
- Eschmeyer, W.N., E.S. Herald and H. Hammann, 1983. A field guide to Pacific coast fishes of North America. Boston (MA, USA): Houghton Mifflin Company.
- Davis, B.J., 1977. Distribution and temperature adaptation in the teleost fish genus Gibbonsia. Marine Biology, 42(4), 315-320.

(Photograph) Morgan Stickrod. https://www.inaturalist.org/observations/20483174
National accounts are designed to provide a systematic summary of national economic activity and have been developed to assist in the practical application of economic theory. The Australian system of national accounts includes national income, expenditure and product accounts, financial accounts, the national balance sheet and input-output tables. At their summary level, the national income, expenditure and product accounts reflect key economic flows - production, the distribution of incomes, consumption, saving and investment. At their more detailed level, they are designed to present a statistical picture of the structure of the economy and the detailed processes that make up domestic production and its distribution. The financial accounts show the financial assets and liabilities of the nation and of each institutional sector, the market for financial instruments and inter-sectoral financial transactions. The balance sheet is a comprehensive statement of produced and non-produced assets, liabilities to the rest of the world and net worth. Input-output tables show which goods and services are produced by each industry and how they are used. The national accounts include many detailed classifications (e.g. by industry, by purpose, by commodity, by state and territory, and by asset type) relating to major economic aggregates.
In post 19.2, we saw that the ideal gas equation, pV = nRT, provides a good model for the behaviour of most gases except at very high pressures. Here p represents the pressure of the gas, V its volume, n the number of moles and R the ideal gas constant. In post 18.11, we saw that the concept of a simple harmonic oscillator can explain the behaviour of many oscillating systems. The idea behind this concept is that the energy dissipated is negligible, so no energy need be supplied for the oscillation to continue. The behaviour of a mechanical oscillator confined to a line is then given by

d²x/dt² = –ω²x   (1)

Here x is the displacement of the oscillator at time t and ω is the angular frequency of the oscillation. More details are given in post 18.11; an equation of the form of equation 1 is derived from the simple harmonic oscillator concept in post 18.6; a simple explanation of ω is given in post 16.14.

The fundamental difference between equation 1 and the ideal gas equation is that equation 1 contains something that is differentiated (post 17.4). Equation 1 is an example of a differential equation; the equation in the picture at the beginning of this post is also an example. The properties of a gas do not change with time, but the behaviour of an oscillator changes with time and so is described by a differential equation.

From the analogy between electrical and mechanical systems (post 18.24), the charge Q in a circuit when a capacitor discharges through an inductor, when there is no additional energy source and the resistance is negligible (so no energy is supplied or dissipated), is given by

d²Q/dt² = –ω²Q   (2)

where ω = (1/LC)^(1/2). Here C is the capacitance of the capacitor (post 18.19) and L is the inductance (post 18.21) of the inductor. Equation 2 represents the behaviour of an electrical simple harmonic oscillator. When an object is bouncing on a spring, we can usually assume that the stiffness k and the mass m are constant; in an electrical circuit, C and L are often constant.
Then the values of x, in equation 1, and Q, in equation 2, depend only on t; we say that x and Q are functions of t. When k and m are constant, equation 1 is a linear differential equation and can be solved. Similarly, when C and L are constant, equation 2 is a linear differential equation. The differential equation in the picture, at the beginning of this blog, is linear when k, M and m are constant.

Equation 1 has two trigonometric solutions:

x = acos(ωt)   (3)
x = asin(ωt)   (4)

where a is the amplitude of oscillation, as shown in post 18.6. Both are valid mathematical solutions. We must choose the one that best fits the physical system we wish to model, as described in post 18.6. If equations 3 and 4 are solutions of equation 1, then so is

x = pacos(ωt) + qasin(ωt)   (5)

where p and q are any constants. You can verify this result for yourself, using the method of appendix 3 in post 18.6. When p and q are constants, equation 5 is called a linear combination of equations 3 and 4. In general, any linear combination of solutions of a linear differential equation is itself a solution.

When p = 1 and q = i, the square root of minus 1 (post 18.16),

x = acos(ωt) + iasin(ωt) = ae^(iωt)   (6)

according to Euler's relation (post 18.17), where e is defined in post 18.15. Once again, you can verify that ae^(iωt) is a solution of equation 1 using the method of appendix 3 in post 18.6 and noting that d(e^(nt))/dt = ne^(nt), where n is a constant (post 18.15).

Previously, we have seen differential equations used to describe growth and decay (post 18.15), simple harmonic oscillators (posts 18.6, 18.7, 18.8 and 18.11) and the time-dependence of the current in electrical circuits (posts 18.20 and 18.22). In future posts, I hope to show how they can be used to describe waves and diffusion.
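That a linear combination of the cosine and sine solutions satisfies equation 1 can also be checked numerically. A minimal sketch using a central-difference estimate of the second derivative; the values of ω, a, p, q and the step h are arbitrary choices of mine:

```python
import math

# Arbitrary illustrative values (my choices, not from the post)
omega, a, p, q = 2.0, 1.5, 1.0, 0.7
h = 1e-4   # finite-difference step

def x(t):
    # Equation 5: a linear combination of the cosine and sine solutions
    return p * a * math.cos(omega * t) + q * a * math.sin(omega * t)

for t in (0.0, 0.3, 1.0):
    # Central-difference estimate of the second derivative d2x/dt2
    d2x = (x(t + h) - 2 * x(t) + x(t - h)) / h ** 2
    # Equation 1: d2x/dt2 = -omega**2 * x
    assert abs(d2x + omega ** 2 * x(t)) < 1e-4

print("the linear combination satisfies equation 1 at the sampled times")
```

The same check with p = 1 and purely real q reproduces equations 3 and 4 as special cases.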
18.17 Euler’s relation, oscillations and waves
18.16 The square root of minus 1 and complex numbers
18.15 More about exponential growth: the number e
18.14 Wave shapes – Fourier series
17.37 More about torque – cross products of vectors
17.36 More about work – line integrals
17.19 Calculating distances from speeds – integration
17.4 Displacement, velocity and acceleration
17.3 Three-dimensional vectors
Coding is an essential skill for students in our ever-changing digital world. By learning to code, students are really learning problem-solving strategies, how to share and communicate their ideas, and how to design unique solutions for our modern world. Are you ready to teach your students coding? Not sure where to begin? Scratch is a project of the Lifelong Kindergarten Group at the MIT Media Lab. Scratch is provided free of charge and has easy step-by-step tutorials for parents and educators. The Roughneck Tech Club started Scratch today by animating their names. Students followed the step-by-step tutorial to create their first program. Scratch provides a series of tutorials to help students learn to navigate the coding program. As students successfully coded their names with special effects and sounds, they began to learn that coding involves precise logical steps, troubleshooting errors, and being creative. Not only did students learn beginning coding skills, but they also learned teamwork. Students who had difficulty with a coding step could rely on their neighbors to give them a helping hand.
Art As Visual Input Visual art manifests itself through media, ideas, themes and sheer creative imagination. Yet all of these rely on basic structural principles that, like the elements we’ve been studying, combine to give voice to artistic expression. Incorporating the principles into your artistic vocabulary not only allows you to objectively describe artworks you may not understand, but also contributes to the search for their meaning. The first way to think about a principle is that it is something that can be repeatedly and dependably done with elements to produce some sort of visual effect in a composition. The principles are based on sensory responses to visual input: elements APPEAR to have visual weight, movement, etc. The principles help govern what might occur when particular elements are arranged in a particular way. Using a chemistry analogy, the principles are the ways the elements “stick together” to make a “chemical” (in our case, an image). Principles can be confusing. There are at least two very different but correct ways of thinking about principles. On the one hand, a principle can be used to describe an operational cause and effect, such as “bright things come forward and dull things recede”. On the other hand, a principle can describe a high quality standard to strive for, such as “unity is better than chaos” or “variation beats boredom” in a work of art. So, the word “principle” can be used for very different purposes. Another way to think about a principle is that it is a way to express a value judgment about a composition. Any list of these effects may not be comprehensive, but there are some that are more commonly used (unity, balance, etc.). When we say a painting has unity we are making a value judgment. Too much unity without variety is boring and too much variation without unity is chaotic. The principles of design help you to carefully plan and organize the elements of art so that you will hold interest and command attention.
This is sometimes referred to as visual impact. In any work of art there is a thought process for the arrangement and use of the elements of design. The artist who works with the principles of good composition will create a more interesting piece; it will be arranged to show a pleasing rhythm and movement. The center of interest will be strong and the viewer will not look away; instead, they will be drawn into the work. A good knowledge of composition is essential in producing good artwork. Some artists today like to bend or ignore these rules and by doing so are experimenting with different forms of expression. The following pages explore important principles of composition. All works of art possess some form of visual balance – a sense of weighted clarity created in a composition. The artist arranges balance to set the dynamics of a composition. A really good example is in the work of Piet Mondrian, whose revolutionary paintings of the early twentieth century used non-objective balance instead of realistic subject matter to generate the visual power in his work. In the examples below you can see that where the white rectangle is placed makes a big difference in how the entire picture plane is activated. The example on the top left is weighted toward the top, and the diagonal orientation of the white shape gives the whole area a sense of movement. The top middle example is weighted more toward the bottom, but still maintains a sense that the white shape is floating. On the top right, the white shape is nearly off the picture plane altogether, leaving most of the remaining area visually empty. This arrangement works if you want to convey a feeling of loftiness or simply direct the viewer’s eyes to the top of the composition. The lower left example is perhaps the least dynamic: the white shape is resting at the bottom, mimicking the horizontal bottom edge of the ground. The overall sense here is restful, heavy and without any dynamic character.
The bottom middle composition is weighted decidedly toward the bottom right corner, but again, the diagonal orientation of the white shape leaves some sense of movement. Lastly, the lower right example places the white shape directly in the middle on a horizontal axis. This is visually the most stable, but lacks any sense of movement. Refer to these six diagrams when you are determining the visual weight of specific artworks. There are three basic forms of visual balance: Symmetrical balance is the most visually stable, and characterized by an exact—or nearly exact—compositional design on either (or both) sides of the horizontal or vertical axis of the picture plane. Symmetrical compositions are usually dominated by a central anchoring element. There are many examples of symmetry in the natural world that reflect an aesthetic dimension. The Moon Jellyfish fits this description; ghostly lit against a black background, but absolute symmetry in its design. But symmetry’s inherent stability can sometimes produce a static quality. View the Tibetan scroll painting to see the implied movement of the central figure Vajrakilaya. The visual busyness of the shapes and patterns surrounding the figure are balanced by their compositional symmetry, and the wall of flame behind Vajrakilaya tilts to the right as the figure itself tilts to the left. Tibetan scroll paintings use the symmetry of the figure to symbolize their power and spiritual presence. Spiritual paintings from other cultures employ this same balance for similar reasons. In Sano di Pietro’s ‘Madonna of Humility’, painted around 1440, the Madonna is centrally positioned, holding the Christ child and forming a triangular design, her head the apex and her flowing gown making a broad base at the bottom of the picture. Their halos are visually reinforced with the heads of the angels and the arc of the frame. The use of symmetry is evident in three-dimensional art, too. A famous example is the Gateway Arch in St.
Louis, Missouri (below). Commemorating the westward expansion of the United States, its stainless steel frame rises over 600 feet into the air before gently curving back to the ground. Another example is Richard Serra’s Tilted Spheres (also below). The four massive slabs of steel show a concentric symmetry and take on an organic dimension as they curve around each other, appearing to almost hover above the ground. Asymmetry uses compositional elements that are offset from each other, creating a visually unstable balance. Asymmetrical visual balance is the most dynamic because it creates a more complex design construction. A graphic poster from the 1930s shows how offset positioning and strong contrasts can increase the visual effect of the entire composition. Claude Monet’s Still Life with Apples and Grapes from 1880 (below) uses asymmetry in its design to enliven an otherwise mundane arrangement. First, he sets the whole composition on the diagonal, cutting off the lower left corner with a dark triangle. The arrangement of fruit appears haphazard, but Monet purposely sets most of it on the top half of the canvas to achieve a lighter visual weight. He balances the darker basket of fruit with the white of the tablecloth, even placing a few smaller apples at the lower right to complete the composition. Monet and other Impressionist painters were influenced by Japanese woodcut prints, whose flat spatial areas and graphic color appealed to the artists’ sense of design. One of the best-known Japanese print artists is Ando Hiroshige. You can see the design strength of asymmetry in his woodcut Shinagawa on the Tokaido (below), one of a series of works that explores the landscape around the Tokaido road. You can view many of his works through the hyperlink above. In Henry Moore’s Reclining Figure the organic form of the abstracted figure, strong lighting and precarious balance obtained through asymmetry make the sculpture a powerful example in three dimensions.
Radial balance suggests movement from the center of a composition towards the outer edge—or vice versa. Many times radial balance is another form of symmetry, offering stability and a point of focus at the center of the composition. Buddhist mandala paintings offer this kind of balance almost exclusively. Similar to the scroll painting we viewed previously, the image radiates outward from a central spirit figure. In the example below there are six of these figures forming a star shape in the middle. Here we have absolute symmetry in the composition, yet a feeling of movement is generated by the concentric circles within a rectangular format. Raphael’s painting of Galatea, a sea nymph in Greek mythology, incorporates a double set of radial designs into one composition. The first is the swirl of figures at the bottom of the painting, the second being the four cherubs circulating at the top. The entire work is a current of figures, limbs and implied motion. Notice too the stabilizing classic triangle formed with Galatea’s head at the apex and the other figures’ positions inclined towards her. The cherub outstretched horizontally along the bottom of the composition completes the second circle. Within this discussion of visual balance, there is a relationship between the natural generation of organic systems and their ultimate form. This relationship is mathematical as well as aesthetic, and is expressed as the Golden Ratio: φ = (1 + √5)/2 ≈ 1.618. Here is an example of the golden ratio in the form of a rectangle and the enclosed spiral generated by the ratios. The natural world expresses radial balance, manifest through the golden ratio, in many of its structures, from galaxies to tree rings and waves generated from dropping a stone on the water’s surface. You can see this organic radial structure in some natural systems by comparing the satellite image of hurricane Isabel and a telescopic image of spiral galaxy M51 below.
A snail shell, unbeknownst to its inhabitant, is formed by this same universal ratio, and, in this case, takes on the green tint of its surroundings. Environmental artist Robert Smithson created Spiral Jetty, an earthwork of rock and soil, in 1970. The jetty extends nearly 1500 feet into the Great Salt Lake in Utah as a symbol of the interconnectedness of our selves to the rest of the natural world. Repetition is the use of two or more like elements or forms within a composition. The systematic arrangement of repeated shapes or forms creates pattern. Patterns create rhythm, the lyric or syncopated visual effect that helps carry the viewer, and the artist’s idea, throughout the work. A simple but stunning visual pattern, created in this photograph of an orchard by Jim Wilson for the New York Times, combines color, shape and direction into a rhythmic flow from left to right. Setting the composition on a diagonal increases the feeling of movement and drama. The traditional art of Australian aboriginal culture uses repetition and pattern almost exclusively, both as decoration and to give symbolic meaning to images. The coolamon, or carrying vessel pictured below, is made of tree bark and painted with stylized patterns of colored dots indicating paths, landscapes or animals. You can see how fairly simple patterns create rhythmic undulations across the surface of the work. The design on this particular piece indicates it was probably made for ceremonial use. We’ll explore aboriginal works in more depth in the ‘Other Worlds’ module. Rhythmic cadences take complex visual form when some are subordinated to others. Elements of line and shape coalesce into a formal matrix that supports the leaping salmon in Alfredo Arreguin’s ‘Malila Diptych’. Abstract arches and spirals of water reverberate in the scales, eyes and gills of the fish.
Arreguin creates two rhythmic beats here, that of the water flowing downstream to the left and the fish gracefully jumping against it on their way upstream. The textile medium is well suited to incorporate pattern into art. The warp and weft of the yarns create natural patterns that are manipulated through position, color and size by the weaver. The Tlingit culture of coastal British Columbia produces spectacular ceremonial blankets distinguished by graphic patterns and rhythms in stylized animal forms separated by a hierarchy of geometric shapes. The symmetry and high contrast of the design is stunning in its effect. Scale and Proportion Scale and proportion show the relative size of one form in relation to another. Scalar relationships are often used to create illusions of depth on a two-dimensional surface, the larger form being in front of the smaller one. The scale of an object can provide a focal point or emphasis in an image. In Winslow Homer’s watercolor A Good Shot, Adirondacks the deer is centered in the foreground and highlighted to assure its place of importance in the composition. In comparison, there is a small puff of white smoke from a rifle in the left center background, the only indicator of the hunter’s position. Click the image for a larger view. Scale and proportion are incremental in nature. Works of art don’t always rely on big differences in scale to make a strong visual impact. A good example of this is Michelangelo’s sculptural masterpiece Pieta from 1499 (below). Here Mary cradles her dead son, the two figures forming a stable triangular composition. Michelangelo sculpts Mary to a slightly larger scale than the dead Christ to give the central figure more significance, both visually and psychologically. When scale and proportion are greatly increased the results can be impressive, giving a work commanding space or fantastic implications.
Rene Magritte’s painting Personal Values constructs a room with objects whose proportions are so out of whack that it becomes an ironic play on how we view everyday items in our lives. American sculptor Claes Oldenburg and his wife Coosje van Bruggen create works of common objects at enormous scales. Their Stake Hitch reaches a total height of more than 53 feet and links two floors of the Dallas Museum of Art. As big as it is, the work retains a comic and playful character, in part because of its gigantic size. Emphasis—the area of primary visual importance—can be attained in a number of ways. We’ve just seen how it can be a function of differences in scale. Emphasis can also be obtained by isolating an area or specific subject matter through its location or color, value and texture. Main emphasis in a composition is usually supported by areas of lesser importance, a hierarchy within an artwork that’s activated and sustained at different levels. Like other artistic principles, emphasis can be expanded to include the main idea contained in a work of art. Let’s look at the following work to explore this. We can clearly determine the figure in the white shirt as the main emphasis in Francisco de Goya’s painting The Third of May, 1808 below. Even though his location is left of center, a candle lantern in front of him acts as a spotlight, and his dramatic stance reinforces his relative isolation from the rest of the crowd. Moreover, the soldiers with their aimed rifles create an implied line between themselves and the figure. There is a rhythm created by all the figures’ heads—roughly all at the same level throughout the painting—that is continued in the soldiers’ legs and scabbards to the lower right. Goya counters the horizontal emphasis by including the distant church and its vertical towers in the background.
In terms of the idea, Goya’s narrative painting gives witness to the summary execution of Spanish resistance fighters by Napoleon’s armies on the night of May 3, 1808. He poses the figure in the white shirt to imply a crucifixion as he faces his own death, and his compatriots surrounding him either clutch their faces in disbelief or stand stoically with him, looking their executioners in the eyes. While the carnage takes place in front of us, the church stands dark and silent in the distance. The genius of Goya is his ability to direct the narrative content by the emphasis he places in his composition. A second example showing emphasis is seen in Landscape with Pheasants, a silk tapestry from nineteenth-century China. Here the main focus is obtained in a couple of different ways. First, the pair of birds are woven in colored silk, setting them apart visually from the gray landscape they inhabit. Secondly, their placement at the top of the outcrop of land allows them to stand out against the light background, their tail feathers mimicked by the nearby leaves. The convoluted treatment of the rocky outcrop keeps it in competition with the pheasants as a focal point, but in the end the pair of birds’ color wins out. A final example of emphasis, taken from The Art of Burkina Faso by Christopher D. Roy, University of Iowa, covers both design features and the idea behind the art. Many world cultures include artworks in ceremony and ritual. African Bwa Masks are large, graphically painted in black and white and usually attached to fiber costumes that cover the head. They depict mythic characters and animals or are abstract and have a stylized face with a tall, rectangular wooden plank attached to the top. In any manifestation, the mask and the dance for which they are worn are inseparable. They become part of a community outpouring of cultural expression and emotion.
Time and Motion One of the problems artists face in creating static (singular, fixed) images is how to imbue them with a sense of time and motion. Some traditional solutions to this problem employ the use of spatial relationships, especially perspective and atmospheric perspective. Scale and proportion can also be employed to show the passage of time or the illusion of depth and movement. For example, as something recedes into the background, it becomes smaller in scale and lighter in value. Also, the same figure (or other form) repeated in different places within the same image gives the effect of movement and the passage of time. An early example of this is in the carved sculpture of Kuya Shonin. The Buddhist monk leans forward, his cloak seeming to move with the breeze of his steps. The figure is remarkably realistic in style, his head lifted slightly and his mouth open. Six small figures emerge from his mouth, visual symbols of the chant he utters. Visual experiments in movement were first produced in the middle of the 19th century. Photographer Eadweard Muybridge snapped black and white sequences of figures and animals walking, running and jumping, then placed them side by side to examine the mechanics and rhythms created by each action. In the modern era, the rise of cubism (please refer back to our study of ‘space’ in module 3) and subsequent related styles in modern painting and sculpture had a major effect on how static works of art depict time and movement. These new developments in form came about, in part, through the cubists’ initial exploration of how to depict an object and the space around it by representing it from multiple viewpoints, incorporating all of them into a single image. Marcel Duchamp’s painting Nude Descending a Staircase from 1912 formally concentrates Muybridge’s idea into a single image. The figure is abstract, a result of Duchamp’s influence by cubism, but gives the viewer a definite feeling of movement from left to right.
This work was exhibited at The Armory Show in New York City in 1913. The show was the first to exhibit modern art from the United States and Europe at an American venue on such a large scale. Controversial and fantastic, the Armory Show became a symbol for the emerging modern art movement. Duchamp’s painting is representative of the new ideas brought forth in the exhibition. In three dimensions the effect of movement is achieved by imbuing the subject matter with a dynamic pose or gesture (recall that the use of diagonals in a composition helps create a sense of movement). Gian Lorenzo Bernini’s sculpture of David from 1623 is a study of coiled visual tension and movement. The artist shows us the figure of David with furrowed brow, even biting his lip in concentration as he eyes Goliath and prepares to release the rock from his sling. The temporal arts of film, video and digital projection by their definition show movement and the passage of time. In all of these mediums we watch as a narrative unfolds before our eyes. Film is essentially thousands of static images divided onto one long roll of film that is passed through a lens at a certain speed. From this apparatus comes the term movies. Video uses magnetic tape to achieve the same effect, and digital media streams millions of electronically pixelated images across the screen. An example is seen in the work of Swiss artist Pipilotti Rist. Her large-scale digital work Pour Your Body Out is fluid, colorful and absolutely absorbing as it unfolds across the walls. Unity and Variety Ultimately, a work of art is the strongest when it expresses an overall unity in composition and form, a visual sense that all the parts fit together; that the whole is greater than the sum of its parts. This same sense of unity is projected to encompass the idea and meaning of the work too. This visual and conceptual unity is supported by the variety of elements and principles used to create it.
We can think of this in terms of a musical orchestra and its conductor: directing many different instruments, sounds and feelings into a single comprehensible symphony of sound. This is where the objective functions of line, color, pattern, scale and all the other artistic elements and principles yield to a more subjective view of the entire work, and from that an appreciation of the aesthetics and meaning it resonates. We can view Eva Isaksen’s work Orange Light below to see how unity and variety work together. Isaksen makes use of nearly every element and principle, including shallow space, a range of values, colors and textures, asymmetrical balance and different areas of emphasis. The unity of her composition stays strong by keeping the various parts in check against each other and the space they inhabit. In the end the viewer is caught up in a mysterious world of organic forms that float across the surface like seeds being caught by a summer breeze.
A Quick Introduction to Unix/Files and Processes Everything in Unix is a file or a process. In Unix a file is just a destination for, or a source of, a stream of data. Thus a printer, for example, is a file, and so is the screen. A process is a program that is currently running. So a process may be associated with a file: the file stores the instructions that are executed for that process to run. Another way to look at it is that a file is a collection of data that can be referred to by name. Files are created by users either directly (using text editors, running compilers, etc.) or indirectly (by running some program - like processing a text input file to produce a formatted file for printing). Examples of files include:
- a text document;
- a program written in a programming language such as C++ or Java;
- a jpeg image;
- a directory: directories can be thought of as the analogue of Windows’ folders. Directories are files that contain links to other files.
The standard input and output and the standard error stream
There are two files that have somewhat opaque names, stdin and stdout. These names refer to default sources of and destinations for data. Consider the process initiated by the command ls. The default output of this process is a list of files in the current working directory, which is then displayed on screen. This illustrates the default output stream, stdout, which is nothing but the screen. The standard input, by contrast, is the keyboard - thus known as stdin. In shell programming, it is often useful to prevent error messages from Unix commands from being displayed on screen. Instead, they are either suppressed or sent to a file. This is done by redirecting the error messages - the stderr stream - to a filename or to /dev/null - the null device or destination. To use these streams (stdin, stdout, stderr) in the shell, we refer to them by their numerical descriptors (0, 1 and 2 respectively) rather than by name. To review: commands effectively take their input from files and direct their output to files.
By default, the output file is the screen, and the input file is the keyboard.
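To make the stream descriptors concrete, here is a short illustrative shell session (the file names are invented for the example). Descriptor 1 is stdout and descriptor 2 is stderr; `>` redirects stdout, `2>` redirects stderr, and `<` takes stdin from a file.

```shell
# Send normal output to a file (descriptor 1, stdout, is the default for >)
ls > listing.txt

# Discard error messages by sending stderr (descriptor 2) to the null device
ls nonexistent-file 2> /dev/null

# Send both stdout and stderr to the same file:
# 2>&1 means "point descriptor 2 at wherever descriptor 1 currently points"
ls . nonexistent-file > all-output.txt 2>&1

# Take stdin (descriptor 0) from a file instead of the keyboard
sort < listing.txt
```

Note that the order matters in the third example: `2>&1` must come after the `>` redirection, because it copies wherever descriptor 1 points at that moment.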
The Total column shows the total number of people in that county or town with this surname. For example, there were 535 people called MCSPIRIT in Westby With Plumpton at the time of the 1881 census. The Frequency column shows the percentage of people in this county or town with this surname. For example, a frequency of 50000.0000 in Westby With Plumpton means that 50000.0000% of the people in Westby With Plumpton on census day were called MCSPIRIT. The Index column shows the relative probability of finding someone called MCSPIRIT in this county or town, compared with the probability of finding them anywhere in Britain as a whole. An index of 1 means that if you pick someone at random from this county or town, you have exactly the same probability of picking someone called MCSPIRIT as if you picked at random from the whole of the UK. Where the index is higher than 1, then you are more likely to find someone called MCSPIRIT here than if you picked from the UK as a whole, and where it's lower then you are less likely. The actual figure shows the level of probability - for example, a figure of 2 would indicate that you are twice as likely to find someone called MCSPIRIT here than in the UK as a whole, and 10 would make it ten times as likely. The value of 0.54 in Westby With Plumpton means that you are 0.54 times as likely to find someone with the surname of MCSPIRIT in Westby With Plumpton than you would be in the whole of the UK.
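The Index column described above is simply the local frequency of the surname divided by its national frequency. A small sketch of that calculation in Python; the counts below are invented for illustration and are not census figures.

```python
def surname_index(local_count, local_population,
                  national_count, national_population):
    """Relative probability of finding the surname locally vs nationally.

    An index of 1 means the surname is exactly as common here as in the
    country as a whole; 2 means twice as likely, 0.5 half as likely.
    """
    local_freq = local_count / local_population          # fraction locally
    national_freq = national_count / national_population # fraction nationally
    return local_freq / national_freq

# Invented example: 12 bearers in a town of 4,000 people,
# 600 bearers in a national population of 26,000,000.
index = surname_index(12, 4_000, 600, 26_000_000)
print(round(index, 1))  # 130.0 - the surname is 130 times more common here
```

The same formula run the other way explains values below 1: a town where the surname is rarer than the national average produces an index like the 0.54 quoted above.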
A $12.50 (AUD) induction motor is one of the cheapest electrical machines you can buy, and there’s no reason to believe it can’t be made more efficient. The most efficient motors are built from just a handful of materials, but there are also other ways to improve efficiency, which makes the economics behind them worth researching. There’s a good chance you’ve already heard the term “induction motor” from someone who has worked with inductors. But what exactly is an inductor? An inductor is a circuit component, usually a coil of wire, that stores energy in a magnetic field when current flows through it. An induction motor puts this effect to work as a mechanical actuator: alternating current in its windings produces a changing magnetic field, which induces currents in the rotor and drags it around, turning the shaft in the desired direction. Driven the other way round – turning the shaft to produce current – the same machine acts as a generator. The Wikipedia article on inductors has a short section describing the effect at the heart of an induction motor, electromagnetic induction: a changing current through a coil produces a changing magnetic field, and a changing magnetic field induces a voltage in any nearby conductor. The induced voltage always opposes the change that created it, which is why an inductor resists sudden changes in current: close the switch, and the current rises gradually rather than all at once.
A capacitor is a device that stores electrical energy from a source. While it charges, energy flows from the source into the capacitor; when it discharges, that energy flows back into the circuit. A capacitor connected across a fluctuating supply therefore smooths the current: it absorbs energy when the source delivers more than the load needs and releases it when the source delivers less. You can see the basic idea in the diagram below, which shows how a typical capacitor works. In a typical circuit, two wires connect the supply to the capacitor, and the capacitor connects through a resistor to the rest of the circuit; the resistor limits how quickly the capacitor charges and discharges. When you want a motor to drive a load, you can also drive it through a secondary winding; this makes the circuit more complicated, but the secondary side carries much less current. Circuits built around inductors are usually more complicated than purely resistive ones, because an inductor’s opposition to current depends on how quickly the current changes, not just on its size. The inductor: the diagram above shows the basic inductor in action, with the resistor on the left limiting the current flowing into it. When a motor starts from rest it draws a large surge of current, so the capacitor alongside it needs to be large enough to absorb that surge.
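One well-established relationship between inductors and capacitors is the resonant frequency of an ideal LC circuit, f = 1/(2π√(LC)), at which the inductive and capacitive reactances are equal in magnitude. The sketch below checks this numerically; the component values (10 mH, 100 nF) are arbitrary examples, not taken from the text above.

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency in Hz of an ideal LC circuit: f = 1/(2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

def inductive_reactance(f, L):
    """Opposition an inductor presents to alternating current, in ohms."""
    return 2 * math.pi * f * L

def capacitive_reactance(f, C):
    """Opposition a capacitor presents to alternating current, in ohms."""
    return 1.0 / (2 * math.pi * f * C)

# Arbitrary example values: a 10 mH inductor and a 100 nF capacitor
L, C = 10e-3, 100e-9
f0 = resonant_frequency(L, C)
print(f"resonant frequency: {f0:.0f} Hz")

# At resonance the two reactances are equal in magnitude
print(abs(inductive_reactance(f0, L) - capacitive_reactance(f0, C)) < 1e-6)  # True
```

This is the same ω = (1/LC)^1/2 relationship that governs a capacitor discharging through an inductor; f0 is just ω/2π expressed in hertz.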
A common way to reduce losses is to use conductors with very low resistance, such as copper, and to keep the components of the inductance circuit physically close together so that the connecting wires stay short. The diagram on the right shows a typical inductor configuration. If the motor suddenly draws a large current, a capacitor placed near the primary and secondary windings absorbs part of that surge; without it, the voltage at the motor sags and the motor behaves as if its resistance had increased. A capacitor that is too small cannot absorb the surge, so a larger one is needed. A more complex configuration with secondary motor inductors: You can have a more complex circuit using
Self-regulation skills are often difficult to teach young students. We are seeing students come to school with bigger and more disruptive behaviors than ever before. Often, these children have never been taught how to stay calm in stressful situations. Young children need visuals and concrete examples to help them to understand complex skills like self-regulation. Everyone gets upset from time to time, but some students need to be taught how to cope when feeling upset in the classroom. The good news is there are some easy-to-implement self-regulation tools that are perfect for teaching kindergarten students how to regulate their emotions and behavior choices in the classroom. Use Social Stories
No students in the history of education are like today's modern learners. They are complex, tech-savvy, energetic individuals. They want to be challenged, to collaborate with their peers, and to incorporate the technology they love into their classroom experience as much as they can. Their expectations of their educators are as high as the expectations their educators have of them. They are aware of the ever-changing trends in society today. Resilient learners have more opportunities in global education. Technology is a powerful learning tool within their reach that earlier generations did not have, and a better tool for acquiring the knowledge they need to develop their unique abilities and talents. Amid the COVID-19 outbreak, resilient learners get through and overcome hardship like a scale that needs to balance both sides. Over time, individual learners create strategies for adapting to the environment around them: reducing sources of stress by following health-care protocols, adopting the new normal implemented by the authorities (wearing masks, social distancing, keeping oneself and one's surroundings clean), and using the technology provided by school administrators (laptops, smartphones, desktop computers) and internet access to connect to different online programs. Through these unique ways and means of interaction, the resiliency of learners is developed. Beyond supportive, responsive relationships, learners need life skills to maintain stable relationships with family and friends, to give care, and to manage the activities of daily life. Even though everyone is required to maintain physical distancing, it is important to communicate by phone, video chat, email, or letter with the people we care about, to engage in responsible interaction, to protect emotional well-being, and to manage the stress of living through this challenging time.
A learner's development does not stop during a crisis, and supporting that development by building resilience takes time and effort; yet the constant interactions that build it are simple things everyone can do throughout an ordinary day. Resilient learners do not let adversity define them. They define resiliency by moving forward toward a goal beyond themselves, transcending pain and grief as temporary defeats. They advocate for their own plans and career opportunities to ensure that they will not be left behind and stay in tune with whatever the future may bring. Neil Librando Abao
Here is a question that I often get asked: does a Workbook on Circles worksheet answer a student's question? The short answer is no. The long answer is that there are many ways to ensure the effectiveness of any instructional material, and so students should be guided in the selection of such material by the appropriate authority. To help students choose a program on circles, the best alternative available is to use the resource developed by Richard Lemarchand and his colleagues at the University of Arkansas. These researchers have done extensive research on this subject and have come up with three different sets of helpful segments. Most of the segments included in the circle programs for high school students are taken from the web site of the National Science Teachers Association (NSTA). The NSTA web site includes a detailed explanation of the activities that will be incorporated into your circles and helps you select which circles should be used and which should not. The segment on consequences is one of the best for circle students because it helps them understand their choices about the quality of the rewards they are going to receive. For example, if a student believes that the best way to get a reward is to answer all questions correctly, then they will be better off choosing to work in circles. The other segments offered on this site for circle students are designed to give students a taste of what to expect if they follow a specific approach to answering questions in circles. The first segment they should try out covers the behavior of a certain type of animal, referred to as a "guardian." This segment consists of eight steps that show circle students that there are times when they should be looking out for each other, and that the only way to be successful in any situation is to think on your feet. The next segment, on group roles, is set up to help students understand the relationships between them.
There are seven groups included on this worksheet, which help students to see how the members of their circle should react to different situations. The final segment is designed to help students determine the educational outcome of circle activities. They are presented with nine different techniques that can be employed for the various types of circles students can participate in. If you are thinking about implementing circle activities in your workbook, consider working with a workbook on Circles instead of starting from scratch. With a little effort, students can begin to see the advantages of using circle programs that have been tested and proven effective.
Introduction to Tkinter module in Python In this article, we will learn about the Tkinter module in Python. This article will help you understand the Tkinter library in Python and gives you a brief idea of building Graphical User Interface (GUI) applications in Python. Tkinter module in Python First, we import the tkinter module and, at the same time, create the main window. Basically, all the operations are performed on this window; that is, we use all the functions of the Tkinter module on it. The main window is created using the Tk() function of the Tkinter module, and we run (and eventually close) that window with the command window.mainloop(), as: import tkinter window=tkinter.Tk() window.mainloop() The output is simply the Tkinter window that we have created. Functions of Tkinter module - tkinter.Label(window, text=" ").pack(): this method is used to put a label or name on our window. It takes two arguments: the window to place the label on, and the text to display. - tkinter.Frame(window, width, height): this is used as a container in the Tkinter module. It takes three arguments: the window, and the width and height of the frame the user wants. - tkinter.Entry(window): this method or widget is used to create input fields in the GUI, or entry boxes in our created frame. - tkinter.Checkbutton(window, text=" "): this method is used to create check buttons in your application. It takes two arguments: the window on which you want to put the button, and the text you want to show on it. - mainloop(): this method is called when you are ready to run your application. - tkinter.Canvas(window): this widget is used to draw complex pictures on the frame.
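Putting the widgets listed above together, a minimal sketch might look like this. The widget names are the real Tkinter ones; the layout is an invented example, and the try/except guard is only there so the script degrades gracefully on a machine with no display:

```python
import tkinter

def build_ui(window):
    # Label: a line of text on the window
    tkinter.Label(window, text="Hello, Tkinter").pack()
    # Frame: a container that groups other widgets
    frame = tkinter.Frame(window, width=200, height=100)
    # Entry: a single-line text input field
    tkinter.Entry(frame).pack()
    # Checkbutton: a toggleable check box
    tkinter.Checkbutton(frame, text="Remember me").pack()
    frame.pack()

try:
    window = tkinter.Tk()   # create the main window
except tkinter.TclError:
    window = None           # no display available (e.g. a headless server)

if window is not None:
    build_ui(window)
    window.mainloop()       # hand control to the Tkinter event loop
```

Note that every widget takes its parent (the window or a frame) as its first argument, and nothing appears on screen until the widget is placed with a geometry manager such as pack().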
You may take a look at this: Tkinter pack(), grid() Method In Python The code with all functions that we have used above is:

from tkinter import *

m = Tk()
m.title("NUMBER GUESSING GAME")

lable = Label(m, text="CodeSpeedy")
lable.pack()

frame = Frame(m, width=300, height=300)
button1 = Button(frame, text="enter")
button2 = Button(frame, text="number 1")
button3 = Button(frame, text="number 2")
button4 = Button(frame, text="number 3")
button4.pack(side=LEFT)
button3.pack(side=LEFT)
button2.pack(side=LEFT)
button1.pack(side=LEFT)
frame.pack()

bottomframe = Frame(m, width=300, height=300)
lable2 = Label(bottomframe, text="JITENDRA KUMAR")
button5 = Button(bottomframe, text="Exit")
button5.pack(side=RIGHT)
bottomframe.pack(side=BOTTOM)
mainloop()

I am not giving any output here, as I want you to go try and run it on your machine. You can also see:
Overview: Encourage students to be a little more organized by using recycled boxes or cans in this art project. A little creativity is in order in guiding students in assembling materials into a storage unit for jewelry, flash drives, CDs, pencils, money, small toys, car keys or iPods. The variables are the types of materials you can get your hands on, and the age group you are working with. But nearly any student will benefit from this fun process of creating, then organizing. Resources: Teacher: small boxes, frozen juice cans, cardboard, tape, colored paper, glue. Student: markers, crayons. Teacher Preparation: locate materials: these may be items you already have on hand, or throwaway items from a business. Brainstorm ideas for the age group you work with: young children may need fairly large compartments to keep toys in, while high school students might need a place to toss keys and loose change. Set out materials on table. 1. Using tape or glue, students assemble materials into a storage unit of their, or your, choosing. 2. Have students decorate the finished pieces as they choose, or give them a style to work in. Teenagers, for example, could finish their projects using whatever they like, or they could be instructed to imitate a particular art style or recreate the look of a natural material, such as sand or rock. Variations/Options: Students could design their pieces on paper first; can explain to class what the purpose of their piece is; use as a prototype for a business idea; solve a specific storage problem at home with their project.
The landscape of Queensland varies, and includes tropical islands, sandy beaches, flat river plains, elevated plateaus, dry deserts, and agricultural belts. Perhaps the most notable geographical feature of Queensland is the Great Barrier Reef, the largest coral reef system in the world. The Great Barrier Reef is located in the Coral Sea, and stretches over 1,250 miles (2,000 km), with an area of 133,000 sq. miles (344,400 sq. km). Over 2,800 individual reefs make up the Great Barrier Reef, and it is the biggest single structure made by living organisms in the world. The largest sand island in the world, Fraser Island, lies just off the coast of Queensland, and contains half of the world's dune lakes. These rare lakes have no natural inflow or outflow, and are formed in depressions between sand dunes. There are a total of 80 worldwide. On the mainland, there are no large, natural lakes within the state, but there are hundreds of rivers. The major coastal rivers include the Flinders, Mitchell, Fitzroy, Mary, and Brisbane Rivers. At 520 miles (840 km), the Flinders River is the longest of these rivers. In the eastern part of Queensland, the Great Dividing Range dominates the land. The highest point of the state is Mount Bartle Frere at 5,321 ft (1,622 m). The lowest point of Queensland is the Pacific Ocean (0 m).
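As a sanity check on the mixed imperial and metric figures quoted above, the conversions can be verified directly (the conversion constants are the standard mile-to-kilometre factors; the rounding is mine):

```python
# Standard conversion factors.
MI_TO_KM = 1.609344
SQMI_TO_SQKM = 2.589988

reef_length_km = 1250 * MI_TO_KM       # reef length quoted as ~2,000 km
reef_area_sqkm = 133000 * SQMI_TO_SQKM  # reef area quoted as ~344,400 sq. km
flinders_km = 520 * MI_TO_KM            # Flinders River quoted as ~840 km

print(round(reef_length_km))  # 2012 -> roughly the 2,000 km quoted
print(round(reef_area_sqkm))  # 344468 -> close to the 344,400 sq. km quoted
print(round(flinders_km))     # 837 -> close to the 840 km quoted
```

The small discrepancies are consistent with the source rounding its metric figures.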
More than Words: The Cornerstone of Reading Comprehension Learning to read is one of the most fundamental, and yet most complex, tasks for young students. Despite many national initiatives to boost reading instruction, an alarming number of children still struggle: on a test sometimes called “the Nation’s Report Card” (the National Assessment of Educational Progress, or NAEP), almost half of fourth and eighth graders were rated as below proficient in reading in 2015. Part of the reason it’s so challenging to become proficient is that reading requires mastering and combining many different skills, from identifying and sounding out words to connecting those words with their meanings and then understanding the content of a text. Reading comprehension is often one of the missing pieces. Panayiota Kendeou, Kristen McMaster, and Theodore Christ have conducted numerous studies on how students come to understand what they read, and they have summarized their and others’ research in a recent article for Policy Insights from the Behavioral and Brain Sciences (SAGE Publishing). They find that the most important skill for reading comprehension can and should be built very early – before children begin reading, and maybe even before they begin school. Kendeou and colleagues write that “the cornerstone of reading comprehension” is the ability to make inferences. When we make inferences, we generate information that isn’t explicit in the text we read (or the story we hear or picture we see). Inferences help us understand what we read by allowing us to construct mental representations of the text that connect the meaning of the sentences to each other and to our background knowledge. Without inferences, those pieces would be disjointed and the text wouldn’t make sense.
It was once thought that children should first be taught the “code-based” skills of reading, like sounding out words with phonics and recognizing high-frequency or “sight” words, before focusing on “language-based” skills of reading for understanding. This sentiment is reflected in the often-repeated phrase that third grade marks a transition from “learning to read to reading to learn.” But studies now show that children should be doing both from the beginning. The authors are working to build knowledge about how to help them do that. “We have made a lot of progress in understanding what are the key components for teaching code-based skills, but we have less work on language comprehension skills at different ages,” Kendeou says. Recent work that she and others have conducted on inferences is changing that, however. Studies suggest that efforts to teach reading need to encourage students to connect pieces of information and fill in missing information. That means not just drilling them on words and spelling, but posing questions about how pieces of the text are connected, directing readers’ attention to certain parts of the text, highlighting clues, and providing “scaffolded feedback,” which prompts students to use information from the text to answer thought-provoking questions. It also means building students’ background knowledge, not only as part of what they are reading, but in its own right. “The factor that carries the largest variability in reading comprehension is the reader’s knowledge,” Kendeou and colleagues write, including knowledge about the text’s subject and the world, as well as knowledge about syntax, grammar, and spelling. That should be a caution against instructional approaches focused exclusively on drilling code-based skills. Kendeou recommends that children be encouraged to make inferences from a very early age. Children as young as two can and do make inferences in all kinds of contexts (including but not limited to events they experience). 
As they get older and begin to read, their ability to make inferences and connections assists them in everything from identifying words to extracting meaning from written text. That means that parents and educators can begin building reading skills in very young children by encouraging critical thinking, for example by describing how things work, pointing out how actions are linked to reactions and consequences, and asking children questions about what they think will happen next when they put a toy at the top of a ramp. In other words, building pre-reading skills doesn’t happen only through reading. Learning to read may be one of the most complex tasks of the school years, but some of the promising strategies for schools and families to support reading can be surprisingly simple. Find more from Panayiota Kendeou, Kristen L. McMaster, and Theodore J. Christ, “Reading Comprehension” in Policy Insights from the Behavioral and Brain Sciences.
For a very long time, oceans have been polluted by land-based pollutants such as dirt and plastic. While all of these contaminants affect a huge number of marine animals, nothing is more hazardous than nuclear pollutants. The radiation from Fukushima, for instance, bled into the ocean when the Fukushima Daiichi nuclear power plant was damaged during the 2011 earthquake in Japan. Dozens of radioactive elements were released in large quantities. Although concentrations of many of the released isotopes have since fallen to barely detectable levels in the environment, they still pose significant health concerns. Caesium-137, for example, is a direct threat to marine life and has a half-life of about 30 years. The radioactive materials released will also affect humans, although the level of risk will depend on various factors, including the type of isotopes and the dose a person is exposed to. People who are highly sensitive to these materials will experience significant long-term health impacts. Nuclear pollution, however, is not due to damage to nuclear plants alone. Much nuclear waste originates from the radioactive materials used for medical, industrial, and scientific processes. Where exactly does nuclear waste come from? - Radioactive by-products of the operations of nuclear power stations. Many nuclear-fuel processing plants produce man-made nuclear wastes that make their way into the ocean. - If anyone were to trace radioactive materials from such plants in the ocean, they would find traces of them as far away as Greenland. - Certain processes in the fields of medicine, science, and other industries that use radioactive materials also produce waste during the nuclear fuel cycle. - Nuclear waste is also a result of mining and processing thorium and uranium. The real score behind radioactive waste Disposal of nuclear waste into the oceans is nothing new.
In fact, the Arctic Ocean, the English Channel, and the Irish Sea have served as dumpsites for low-level radioactive waste since 1952. Then there are the military, nuclear power stations, and reprocessing plants, which are some of the major producers of nuclear waste. Due to the rapid growth of the nuclear energy industry, the waste produced also increases every year. In 2006, the UK alone disposed of 12,900 cubic metres of nuclear waste, the equivalent of about five Olympic-sized swimming pools. While many recognise that radioactive material must be encased and isolated to prevent it from leaking onto the ocean floor, some is dumped directly into the world's waters. How nuclear pollution affects the marine environment in the long term It enters the food chain. The nuclear waste that goes into the ocean can be eaten by plankton and kelp, which will in turn contaminate the fish that feed on them. Humans who eat contaminated seafood will likely experience health problems. This is why fishing is still prohibited within 10 km of the Fukushima Daiichi power plant. The good news is that it takes a lot of ingested radioactive material for fish to become contaminated. The bad news is that fish are mobile, and some of the contaminated ones may have swum to waters that are considered clean and safe.
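The roughly 30-year half-life mentioned above means contamination declines geometrically rather than disappearing. A quick sketch of the standard half-life arithmetic (the time spans are chosen for illustration, not taken from the article):

```python
# Fraction of caesium-137 remaining after t years, given its ~30-year half-life.
HALF_LIFE = 30.0  # years

def fraction_remaining(t_years):
    # Each half-life halves the remaining amount: N/N0 = 0.5 ** (t / T_half)
    return 0.5 ** (t_years / HALF_LIFE)

print(round(fraction_remaining(30), 3))  # 0.5 -> half remains after one half-life
print(round(fraction_remaining(90), 3))  # 0.125 -> one eighth after three half-lives
```

So even a century after a release, on the order of a tenth of the original caesium-137 is still present.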
The diagram below graphs the number of Confederate statues erected between 1870 and 1980. Since the Southern Poverty Law Center (SPLC) compiled the data, they suggest the memorials were most frequently put in place during periods of flagrant anti-black sentiment in the South. In short, they imply that racism was the prime motive for Confederate monument-building. In truth, however, more compelling reasons are as obvious as cow patties on a snow bank to the thinking person. The two most notable peaks were 1900-to-1915 and 1957-to-1965. The SPLC implies that the first wave was due to “lynchings, ‘Lost Cause Mythology,’ and a resurgent KKK.” Facts, however, don’t support their conclusion. First, the KKK’s resurgence was in the 1920s, which was at least five-to-ten years after the first peak had already passed. Moreover, the state with the most KKK members during the 1920s was Indiana, a Northern state. Second, the number of lynchings was steadily dropping during the 1900-to-1915 period. Third, “Lost Cause Mythology” was a strong influence until at least 1950 and by no means concentrated in the 1900-to-1915 period. [Learn more about Civil War and Reconstruction at My Amazon Author Page] Contrary to the SPLC’s imaginings, three factors were the chief cause of the first surge from 1900-to-1915. First, the old soldiers were dying and survivors wanted to honor their memories. A twenty-one year old who joined the Rebel army at the start of the war was sixty years old in 1900 and seventy-five in 1915, when life expectancies were shorter than today. Second, post-war impoverished Southerners generally did not have enough money to even begin erecting memorials to fallen Confederates until the turn of the century. The region did not even recover to its level of pre-war economic activity until 1900, which was thirty-five years after the war had ended.* Third, until at least 1890 the Grand Army of the Republic (GAR) was hostile to any display of Confederate iconography.
The GAR was a Union veterans organization that held considerable political power until at least 1900. By 1893, for example, they so successfully lobbied for retirement benefits that their pensions totaled nearly 40% of the federal budget.** Annual disbursements for Union veterans pensions did not top out until 1921. As for the second surge between 1957 and 1965, the SPLC predictably attributes it to Southern resentment over public school integration and the 1960s civil rights movement. Nonetheless, it was more likely due to initiatives that celebrated the Civil War Centennial. *Ludwell Johnson, Division and Reunion, 190 **Jill Quadagno, The Transformation of Old Age Security, 45
P.E. Central Lesson Plan: Buckets of Zoo Prerequisites: Underhand and overhand throwing skills Purpose of Activity: To practice spelling while developing a variety of throwing skills. Suggested Grade Level: 2-3 Materials Needed: 8-10 Bean Bags; 8-10 Cones & 1 bucket; Plastic or Paper Plates; Selected Zoo music and tape player; Physical activity: Throwing and various locomotor skills Lesson Plan: Description of Idea Divide the class into groups of four or less with one bean bag per team. Groups will sit behind the team cones, which are scattered around the periphery of the gym or playground area. The paper plates will be scattered around the interior of the gym and should have all types of different animals, insects, numbers, etc. drawn on the front side of the plate and the spelling for the animal on the back side of the plate. The plates will be spread out in no particular pattern inside the gym periphery in a big circle (the zoo). As the music begins to play, the first student in each group will run, skip, hop, jump, etc., as directed, to a place about four feet from the animal of their choice and will try to throw underhanded or overhanded, as directed, to the animal. When the student successfully throws the beanbag onto the plate, they will then pick up the animal plate and bring it back to the group along with the bean bag. The next student then proceeds to move and throw to an animal of their choice. The students continue throwing to the plates until all plates have been collected. After all of the plates have been collected, the students choose one plate at a time, turn the plates over, and read the name and spelling of the animal. As a group, the students are to spell out the name of the animal using their bodies to make each individual letter. The students should also discuss the characteristics of the animal's movements and sounds that distinguish each animal.
Each group can choose one animal that they would like to introduce to the class, and have the class spell the name and act and move as the animal moves. Vary the type of throwing and the distance from which the child throws. Throw with both the right and left hand and at different levels and positions (high, medium, low, forward, backward, sideways). Use numbers instead of animals and have the students add up the numbers. After adding the numbers up, they can then perform all types of different exercises that equal their team's total (i.e. 55 total points = 10 half jacks, 10 jump ropes, 20 basketball shots, 10 push ups, 5 ski jumps). Change the skill area by having students strike, volley or kick to the plates located in various places throughout the room. Have the students spell the words in class and use them in a sentence that includes characteristics of how the animal moves. Adaptations for Students with Disabilities: If students are unable to move to the designated area, have a mobile student go and get an animal plate for the student. Author: Russell Westbrook, who teaches at Cameron Park Elem. in Hillsborough, NC. Posted on PEC: 8/29/2000. This lesson plan was provided courtesy of P.E. Central (www.pecentral.org).
T-carrier A digital transmission service from a common carrier. Although developed in the 1960s and used internally, AT&T introduced it as a communications product to the public in 1983. Initially used for voice, its use for data grew steadily, and T1 and T3 lines were and still are widely used to create point-to-point private data networks. T-carrier lines use four-wire cables: one pair is used to transmit, the other to receive. The cost of the lines is generally based on the length of the circuit, so it is the customer's responsibility to utilize the lines efficiently. Multiple lower-speed channels can be multiplexed onto a T-carrier line and demultiplexed (split back out) at the other end. Some multiplexors can analyze the traffic load and vary channel speeds for optimum transmission. See T1, T2, T3, DS, DSU/CSU and inverse multiplexor.
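As a concrete illustration of the multiplexing described above, the classic T1 line rate falls out of simple arithmetic. These are the standard DS0/T1 figures, not numbers taken from this entry:

```python
# A T1 multiplexes 24 digitized voice channels (DS0s) plus framing overhead.
CHANNELS = 24          # DS0 channels on one T1
CHANNEL_BPS = 64_000   # each DS0 carries 64 kbit/s
FRAMING_BPS = 8_000    # framing bits added by the multiplexer

t1_bps = CHANNELS * CHANNEL_BPS + FRAMING_BPS
print(t1_bps)  # 1544000 -> the familiar 1.544 Mbit/s T1 line rate
```

The same accounting scales up: a T3 carries 28 T1s (672 DS0s) plus additional overhead.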
There are many sampling techniques, from which the researcher has to choose the one that gives the lowest sampling error given the sample size and budgetary constraints. Whatever the case, an ideal sampling frame is one that covers the entire population and lists each of its elements only once. This situation often arises when we seek knowledge about the cause system of which the observed population is an outcome. In the above example, not everybody has the same probability of selection; what makes it a probability sample is the fact that each person's probability is known. Because there is very rarely enough time or money to gather information from everyone or everything in a population, the goal becomes finding a representative sample, or subset, of that population. What is the recontact procedure for respondents who were unavailable? The sample will be representative of the population if the researcher uses a random selection procedure to choose participants. Aims and objectives of study The aims and objectives of the research need to be known thoroughly and should be specified before the start of the study, based on a thorough literature search and inputs from professional experience. Study-specific base sampling weights are derived using a combination of the final panel weight and the probability of selection associated with the sampled panel member. A retail store would like to assess customer feedback from at-the-counter purchases. Hypothesis generation depends on the type of study as well. In non-probability sampling procedures, the allocation of budget, rules of thumb, the number of subgroups to be analyzed, the importance of the decision, the number of variables, the nature of the analysis, incidence rates, and completion rates play a major role in sample size determination. Respondents must be provided with, and maintain, a high level of comprehension of the research subject. Explain the methods typically used in qualitative data collection.
For example, consider a street where the odd-numbered houses are all on the north (expensive) side of the road, and the even-numbered houses are all on the south (cheap) side. Such designs are also referred to as 'self-weighting' because all sampled units are given the same weight. For instance, an investigation of supermarket staffing could examine checkout line length at various times, or a study on endangered penguins might aim to understand their usage of various hunting grounds over time. Types of Purposeful Sampling. In quantitative research, the goal would be to conduct a random sampling that ensures the sample group is representative of the entire population, so that the results can be generalized to the entire population. In this method, the participants refer the researcher to others who may be able to potentially contribute or participate in the study. In sampling, this includes defining the population from which our sample is drawn. As the interviewers and their co-workers will be on field duty most of the time, a proper specification of the sampling plans would make their work easy, and they would not have to revert to their seniors when faced with operational problems. The different types of non-probability sampling are as follows: This step involves implementing the sampling plan to select the sample required for the survey. How large should the sample be? We visit every household in a given street, and interview the first person to answer the door. Randomization occurs when all members of the sampling frame have an equal opportunity of being selected for the study. What is an appropriate available sampling frame? Information about the relationship between sample and population is limited, making it difficult to extrapolate from the sample to the population.
It is not 'simple random sampling' because different subsets of the same size have different selection probabilities: under systematic sampling with an interval of ten, for example, a subset whose elements all lie ten apart can be selected, while most other subsets of the same size have no chance of selection at all. Systematic sampling A visual representation of selecting a random sample using the systematic sampling technique Systematic sampling (also known as interval sampling) relies on arranging the study population according to some ordering scheme and then selecting elements at regular intervals through that ordered list. Samples are then identified by selecting at even intervals among these counts within the size variable. Here the superpopulation is 'everybody in the country, given access to this treatment', a group which does not yet exist, since the program isn't yet available to all. In this case, the 'population' Jagger wanted to investigate was the overall behaviour of the wheel, i.e. its long-run distribution of results. In order to collect these types of data for a study, a target population, community, or study area must be identified first. A well-defined population reduces the probability of including respondents who do not fit the research objective of the company. A probability sample is a sample in which every unit in the population has a chance greater than zero of being selected, and this probability can be accurately determined. We will assess your document or any other prepared research to determine whether appropriate inductive and deductive reasoning were employed in the interpretation of the data and information presented. There are various ways of classifying the techniques used in determining the sample size. The husband may purchase a significant share of the packaged goods, and have significant direct and indirect influence over what is bought.
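Systematic sampling as described above can be sketched in a few lines. The population and sample size here are invented for illustration:

```python
import random

def systematic_sample(population, k):
    """Select k elements at a fixed interval after a random start."""
    n = len(population)
    interval = n // k                    # sampling interval (assumes n divisible-ish by k)
    start = random.randrange(interval)   # random start inside the first interval
    return [population[start + i * interval] for i in range(k)]

random.seed(1)  # fixed seed so the example is reproducible
houses = list(range(1, 101))   # an ordered frame of 100 house numbers
sample = systematic_sample(houses, 10)
print(len(sample))  # 10
```

Only the starting point is random; every later pick is determined by it, which is exactly why subsets not matching the interval pattern have zero selection probability.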
For instance, a simple random sample of ten people from a given country will on average produce five men and five women, but any given trial is likely to overrepresent one sex and underrepresent the other. Simple random sampling (also referred to as random sampling) is the purest and most straightforward probability sampling strategy. It is also the most popular method for choosing a sample from a population for a wide range of purposes. Sampling is the process whereby a researcher chooses his or her sample. The five steps to sampling are: Identify the population. Snowball sampling: members are sampled and then asked to help identify other members to sample, and this process continues until enough samples are collected. The following Slideshare presentation, Sampling in Quantitative and Qualitative Research - A practical how to, offers an overview of sampling methods for quantitative and qualitative research. About Pew Research Center Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Research is the foundation of effective decision making and knowledge creation. The research process has been refined over the years to a level of sophistication that, while yielding actionable results, may appear daunting to those not immersed in its practice. There are many methods of sampling when doing research. This guide can help you choose which method to use. Simple random sampling is the ideal, but researchers seldom have the luxury of time or money to access the whole population, so many compromises often have to be made.
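Simple random sampling as characterized above is a single call to the standard library; the frame and sample size below are invented for illustration:

```python
import random

random.seed(42)                    # fixed seed so the example is reproducible
population = list(range(1, 1001))  # a hypothetical sampling frame of 1,000 units

# random.sample draws without replacement: every unit has equal probability 50/1000.
sample = random.sample(population, 50)

print(len(sample))       # 50
print(len(set(sample)))  # 50 -> no duplicates, since sampling is without replacement
```

Because every subset of 50 units is equally likely, any imbalance in the sample (such as the over-represented sex in the example above) is pure chance, not a property of the method.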
<p style='text-align: justify;'>The <i>Sidereus nuncius</i>, a title commonly translated as the <i>Starry messenger</i>, is a book printed and published within a few months of the date of the observations it reported. The stimulus to Galileo’s new observations, and the haste to publish an illustrated book, was the discovery made in about 1608 by opticians in the Low Countries that two lenses placed at the correct distance, with their optical axes in line, gave a magnified image of distant objects: the founding principle of the telescope. A tube of card or wood between the lenses held them in line and cut out extraneous light. Some months after the discovery, in early 1609, Galileo was in Venice, where he heard of the new instrument and so constructed one himself. Whereas the first use of the telescope seems to have been for terrestrial observations, it occurred to Galileo that the telescope could be raised to the heavens to bring celestial objects apparently closer to the observer. Galileo’s first telescope gave a magnification of about 3 times with a somewhat distorted image, but this he soon increased 10-fold – it was a huge leap in our observational capabilities.</p><p style='text-align: justify;'>The <i>Messenger</i> brought news of little succour to Church dogma. Galileo looked at the Sun (something never to be tried at home), at the stars, Moon and planets. The Moon, situated above the imperfect Earth, was far from the perfect white sphere assumed, but apparently pock-marked by impacts and with mountainous regions like Earth’s. There were spots even upon the Sun, the very source of light and warmth. Though the stars themselves seemed brighter, there was no change in their aspect; but the fine dusting of background light he found to be made up of tiny stars forming the Milky Way, our own galaxy we now realise, and there was in patches irregular nebulosity, for instance around Orion’s belt. 
The planets showed tiny discs while the stars remained points, Saturn had bulges around it and Venus had a crescent phase much as the Moon did. From the viewpoint of dogma perhaps most damaging of all was his discovery of the four moons orbiting Jupiter. Galileo named these the four Medicean stars after the family of his patrons. Galileo had shown empirically that there were bodies that orbited other planets and that this was not exclusively the Earth’s prerogative.</p><p style='text-align: justify;'>What Galileo could not show empirically was that the Earth moved around the Sun; three centuries of technical development would be needed before that was demonstrated. Nonetheless Galileo took this as self-evident, and this caused him to cross swords with one of Bruno’s Inquisitors, Cardinal Bellarmine, who had in 1616 explicitly stated the Church’s rejection of Copernicanism. Bellarmine died in 1621, but the Church held the power of life and death, and Galileo’s assertion brought him before the Inquisition in Rome in 1633. This was the same body that had sealed the grim fate of Giordano Bruno little more than three decades before, in the light of which it seems small wonder that Galileo submitted to the Inquisitors and wrote his recantation of his resolute conviction.</p><p style='text-align: justify;'>Adam Perkins, Curator of Scientific Manuscripts</p><p style='text-align: justify;'>This item was included in the Library’s 600th anniversary exhibition <a target='_blank' class='externalLink' href='https://exhibitions.lib.cam.ac.uk/linesofthought/artifacts/seeing-the-stars/'><i>Lines of Thought: Discoveries that changed the world</i></a>.</p>
Inquiry Based Instructional Model To intertwine scientific knowledge and practices and to empower students to learn through exploration, it is essential for scientific inquiry to be embedded in science education. While there are many types of inquiry-based models, one model that I've grown to appreciate and use is called the FERA Learning Cycle, developed by the National Science Resources Center (NSRC): A framework for implementation can be found here. I absolutely love how the Center for Inquiry Science at the Institute for Systems Biology explains that this is "not a locked-step method" but "rather a cyclical process," meaning that some lessons may start off at the focus phase while others may begin at the explore phase. Finally, an amazing article found at Edudemic.com, How Inquiry-Based Learning Works with STEM, very clearly outlines how inquiry-based learning "paves the way for effective learning in science" and supports College and Career Readiness, particularly in the area of STEM career choices. In this unit, students will develop an understanding of gravity while focusing heavily on the 5th Grade Engineering and Design standards. In the first few lessons, students will explore the relationships between gravity, weight, and mass. Then, students will apply their understanding of gravity to engineer and design parachutes and roller coasters. Summary of Lesson Today, I will open the lesson by showing students a variety of components that they could include in their roller coaster models. Next, I ask students to complete a labeled design on paper before continuing with their projects. Students then work on finishing their roller coaster support systems. Next Generation Science Standards This lesson will address the following NGSS Standard(s): 5-PS2-1. Support an argument that the gravitational force exerted by Earth on objects is directed down. 3-5-ETS1-1. 
Define a simple design problem reflecting a need or a want that includes specified criteria for success and constraints on materials, time, or cost. 3-5-ETS1-2. Generate and compare multiple possible solutions to a problem based on how well each is likely to meet the criteria and constraints of the problem. 3-5-ETS1-3. Plan and carry out fair tests in which variables are controlled and failure points are considered to identify aspects of a model or prototype that can be improved. Science & Engineering Practices For this lesson, students are engaged in Science & Engineering Practice 2: Developing and Using Models. The goal is for students to begin making a physical replica of a roller coaster system and to use the model to test cause and effect relationships. To relate content across disciplinary content, during this lesson I focus on Crosscutting Concept 2: Systems and System Models. In particular, students will be evaluating cause and effect relationships as they begin constructing and testing their roller coaster designs. Disciplinary Core Ideas In addition, this lesson also aligns with the Disciplinary Core Ideas: ETS1.A: Defining and Delimiting Engineering Problems ETS1.B: Developing Possible Solutions ETS1.C: Optimizing the Design Solution PS2.B. Types of Interactions Choosing Science Teams With science, it is often difficult to find a balance between providing students with as many hands-on experiences as possible, having plenty of science materials, and offering students a collaborative setting to solve problems. Any time groups have four or more students, the opportunities for individual students to speak and take part in the exploration process decreases. With groups of two, I often struggle to find enough science materials to go around. So this year, I chose to place students in teams of three! Picking science teams is always easy as I already have students placed in desk groups based upon behavior, abilities, and communication skills. 
Each desk group has about six kids, so I simply divide this larger group in half. Gathering Supplies & Assigning Roles To encourage a smooth-running classroom, I ask students to decide who is a 1, 2, or 3 in their groups of three students (without talking). In no time, each student has a number in the air. I'll then ask the "threes" to get certain supplies, "ones" to grab their computers, and "twos" to hand out papers (or whatever is needed for the lesson). This management strategy has proven to be effective when cleaning up and returning supplies as well! Lesson Introduction & Goal I review the learning goal: I can use the Engineering Method to design a paper roller coaster. The Engineering Method I take a moment to review the Engineering Method posters. What if I told you that you have to follow the engineering method in order and that you can never go back to earlier steps? Students were appalled! They explained that you need to continually return to steps in order to test, improve, and retest your design throughout the process of building a roller coaster model. One student explains, "You wouldn't want to wait until the end to find the failure points. You want to know right now so that you can fix them as you go." I take this opportunity to review the Testing Components Poster from yesterday's lesson. I ask: Can you tell me how you tested your support system yesterday? How did it help you? Students explain how they shook their structure to ensure stability and how they made improvements by adding tape or by adding more supports. Roller Coaster Poster While I don't take the time to specifically review the Roller Coaster System Poster today, students and I refer to it often to make sense of a roller coaster as a system of parts working together and to review how the force of gravity plays a role in the system. 
Also, by just having this poster up during every roller coaster lesson, I'm supporting my ELL student with developing content related vocabulary and other students who need to have repeat exposures to content in order to comprehend new information. Reviewing the Engineering Challenge Criteria for Success In each box, I have a tape dispenser (I found them for $1.50 each.), two rolls of tape, three marbles, Design Review cards in an envelope (for a later lesson), and I'll add a roll of masking tape later on: Team Box & Supplies. At the beginning of this Roller Coaster Engineering Challenge, each student was given 30 sheets of card stock. To keep track of unused paper, each student has a folder that they keep in their desks to hold their card stock paper. Students also grab their roller coaster prototypes at this time. To help with the management of gathering and putting back roller coasters, I gave each team the same color poster board as their team name. For example, the black team all had black poster board bases for their roller coasters. I was hoping this would help students locate their roller coasters quickly during future lessons. In addition, I designated a spot for each team's roller coasters in the room. This proved to be helpful as students won't have to go looking for a spot each time we clean up our materials. After placing their roller coaster models on their desks, I invite students to the front carpet. I want to build up a renewed excitement toward building paper roller coasters, so I show students this incredibly engaging video! Students can't believe that someone was able to construct a 16-foot tall paper roller coaster! Before pushing play, I ask students to begin thinking about the components (or parts) that they would like to include in their own models! 
Roller Coaster Design & Components Now that students have part of their support structure in place, I introduce the main roller coaster components (without explaining how to make each part), and then students will draw their designs on paper. This way, they have a tentative plan and an idea about where to head next! As a side note: up until this point, I have purposefully held off on asking students to create a roller coaster design on paper. I know it might seem odd to be asking students to create a design now, on day 4 of this project, however, completing research (day 1), introducing the challenge (day 2), and allowing students the opportunity to begin constructing their support system (day 3) will help scaffold the complex task of drawing a paper roller coaster model on paper. Not only are there a lot of parts to take into consideration, but my students don't have any experiences with building paper roller coasters! Even at this point, it is challenging for my students to visualize and draw their roller coaster designs! Before the lesson, I constructed the main roller coaster components, including a track, jump, rounded turn, elbow turn, loop, half-pipe, funnel, corkscrew, zig-zag track, and a curvy track. At this time, I hang up and introduce each of these components on the white board with magnets: Components 1 and Components 2. After today's lesson, I create a much more organized display poster: Roller Coaster Parts using the following labels: Roller Coaster Parts Poster Labels. Designs on Paper Next, I pass out a sheet of paper to each student and ask them to draw and label their roller coaster design. I ask students to complete their designs on paper and to raise their hands and check with me before completing their support systems. Monitoring Student Understanding Once students begin working on their roller coaster designs and prototypes, I conference with every group. My goal is to support students by asking guiding questions (listed below). 
I also want to encourage students to engage in Science & Engineering Practice 7: Engaging in Argument from Evidence. Here, Student Explaining Design, I encourage a student to think about the amount of energy it will take for her marble to successfully make it through her roller coaster design. This student, Taking the Criteria into Consideration, explains how she has included "time wasters" such as funnels and half-pipes to meet the criteria for success, making sure the marble travels through the roller coaster as slowly as possible. As students finish their designs, they move on to completing their support structures. Instead of giving advice myself, I often ask teams of students to provide suggestions: Encouraging Suggestions. Here are a few examples of student designs during this time. From now on, students will have their designs out during every roller coaster lesson. Student Roller Coasters Most students were able to finish their support systems during this time. If they are not finished, they will continue working on them during future lessons by installing more supports as they build their tracks from the bottom up. Instead of completing a Design Review (like yesterday), I decide to provide students with more time to complete their support structures. To clean up, students place any extra parts on the base of their roller coaster. They put unused paper in their individual folders and place their folders in their desks. One member from each team returns their team box. Then, each team of students places their roller coasters in their designated spot in the classroom.
What is bipolar? Bipolar disorders are brain disorders that cause changes in a person’s mood, energy and ability to function. People with bipolar experience high and low moods—known as mania and depression—which differ from the typical ups-and-downs most people experience. People with bipolar disorders have extreme and intense emotional states that occur at distinct times, called mood episodes. These mood episodes are categorized as manic, hypomanic or depressive. People with bipolar disorders generally have periods of normal mood as well. Different kinds of disorders Bipolar disorder is a category that includes four different conditions — bipolar I, bipolar II, cyclothymic disorder and bipolar disorder not otherwise specified (NOS). The main difference between bipolar I disorder and bipolar II disorder has to do with the intensity of the manic period and the presence of psychosis. Can you live a “normal” life? If left untreated, bipolar disorders usually worsen. However, they can be treated (though not cured), and people with these illnesses can lead full and productive lives. FOUR TYPES OF BIPOLAR DISORDER Bipolar I Disorder is an illness in which people have experienced one or more episodes of mania. Most people diagnosed with bipolar I will have episodes of both mania and depression, though an episode of depression is not necessary for a diagnosis. To be diagnosed with bipolar I, a person’s manic episodes must last at least seven days or be so severe that hospitalization is required. Bipolar II Disorder is a subset of bipolar disorder in which people experience depressive episodes shifting back and forth with hypomanic episodes, but never a “full” manic episode. Cyclothymic Disorder or Cyclothymia is a chronically unstable mood state in which people experience hypomania and mild depression for at least two years. People with cyclothymia may have brief periods of normal mood, but these periods last less than eight weeks. 
Bipolar Disorder, “other specified” and “unspecified” is when a person does not meet the criteria for bipolar I, II or cyclothymia but has still experienced periods of clinically significant abnormal mood elevation. Read more here. What is it like to be bipolar? Symptoms and their severity can vary. A person with bipolar disorder may have distinct manic or depressed states but may also have extended periods—sometimes years—without symptoms. A person can also experience both extremes simultaneously or in rapid sequence. Severe bipolar episodes of mania or depression may include psychotic symptoms such as hallucinations or delusions. Usually, these psychotic symptoms mirror a person’s extreme mood. Symptoms of both mania and depression can occur simultaneously. A manic episode is a period of at least one week when a person is very high-spirited or irritable in an extreme way most of the day for most days, has more energy than usual and experiences at least three of the following, showing a change in behavior: - Exaggerated self-esteem or grandiosity - Less need for sleep - Talking more than usual, talking loudly and quickly - Easily distracted - Doing many activities at once, scheduling more events in a day than can be accomplished - Increased risky behavior (e.g., reckless driving, spending sprees) - Uncontrollable racing thoughts or quickly changing ideas or topics The changes are significant and clear to friends and family. Symptoms are severe enough to cause dysfunction and problems with work, family or social activities and responsibilities. Symptoms of a manic episode may require a person to get hospital care to stay safe. The average age for a first manic episode is 18, but it can start anytime from early childhood to later adulthood. A hypomanic episode is similar to a manic episode (above) but the symptoms are less strong and need only last four days in a row. Unlike mania, hypomania is not associated with psychosis. 
Commonly, depressive episodes are more frequent and more intense than hypomanic episodes. Additionally, when compared to bipolar I disorder, type II presents more frequent depressive episodes and shorter intervals of well-being. The course of bipolar II disorder is more chronic and consists of more frequent cycling than the course of bipolar I disorder. Finally, bipolar II is associated with a greater risk of suicidal thoughts and behaviors than bipolar I or unipolar depression. Although bipolar II is commonly perceived to be a milder form of Type I, this is not the case. Types I and II present equally severe burdens. What is it like to be depressed? The lows of bipolar depression are often so debilitating that people may be unable to get out of bed. Typically, people experiencing a depressive episode have difficulty falling and staying asleep, while others sleep far more than usual. When people are depressed, even minor decisions such as what to eat for dinner can be overwhelming. They may become obsessed with feelings of loss, personal failure, guilt or helplessness; this negative thinking can lead to thoughts of suicide. What causes bipolar disorders? - Genetics. The chances of developing bipolar disorder are increased if a child’s parents or siblings have the disorder. But the role of genetics is not absolute: A child from a family with a history of bipolar disorder may never develop the disorder. Studies of identical twins have found that, even if one twin develops the disorder, the other may not. - Stress. A stressful event such as a death in the family, an illness, a difficult relationship, divorce or financial problems can trigger a manic or depressive episode. Thus, a person’s handling of stress may also play a role in the development of the illness. - Brain structure and function. 
Brain scans cannot diagnose bipolar disorder, yet researchers have identified subtle differences in the average size or activation of some brain structures in people with bipolar disorder. How do you treat bipolar disorders? Bipolar disorder is treated and managed in several ways: Psychotherapy, such as cognitive behavioral therapy and family-focused therapy. Medications, such as mood stabilizers, antipsychotic medications and, to a lesser extent, antidepressants. Self-management strategies, like education and recognition of an episode’s early symptoms. Complementary health approaches, such as aerobic exercise, meditation, faith and prayer, can support, but not replace, treatment. People with bipolar disorder can also experience: - Attention-deficit hyperactivity disorder (ADHD) - Posttraumatic stress disorder (PTSD) - Substance use disorders/dual diagnosis People with bipolar disorder and psychotic symptoms can be wrongly diagnosed with schizophrenia. Bipolar disorder can also be misdiagnosed as Borderline Personality Disorder (BPD). These other illnesses and misdiagnoses can make it hard to treat bipolar disorder. For example, the antidepressants used to treat OCD and the stimulants used to treat ADHD may worsen symptoms of bipolar disorder and may even trigger a manic episode. If you have more than one condition (called co-occurring disorders), be sure to get a treatment plan that works for you.
We tend to assume that we see our surroundings as they really are, and that our perception of reality is accurate. In fact, what we perceive is merely a neural representation of the world, the brain’s best guess of its environment, based on a very limited amount of available information. This is perhaps best demonstrated by visual illusions, in which there is a mismatch between our perception of the stimulus and objective reality. Even when looking at everyday objects, our perceptions can be deceiving. According to the New Look approach, first propounded in the 1940s by the influential cognitive psychologist Jerome Bruner, perception is largely a constructive process influenced by our needs and values. Recent research has provided some evidence for this: in 2006, psychologists Emily Balcetis and David Dunning, then at Cornell University, reported that an ambiguous figure tended to be interpreted according to the self-interest of the perceiver. They now show that the desirability of an object influences its perceived distance. In the new study, 90 undergraduates were made to sit at a table across from a full bottle of water. Half of the participants were randomly assigned to the “thirsty” condition, and given a serving of pretzels to eat. The rest were placed in the “quenched” condition, and told that they could drink as much of the water as they wanted. Both groups were asked to indicate how long it had been since they last had a drink, how thirsty they were and how appealing the bottle of water was. Finally, they were shown a 1-inch line as a reference, and asked to estimate the distance between their own position and the water bottle. The participants who had been given pretzels to eat during the experiment reported feeling thirstier than those who drank the water, as would be expected. They also rated the bottle of water as being more desirable, and estimated the distance between themselves and the bottle to be smaller than did the quenched participants. 
Their state of thirst had influenced their perception of distance, such that the water bottle was perceived to be closer than it actually was. That the thirsty participants found the bottle of water to be more desirable is not at all surprising – water will quench their thirst, and therefore has immediate physiological benefits. But how about objects that are desirable because of their social value? To investigate this, Balcetis and Dunning asked another set of students to estimate their distance from a $100 bill. One group was told that they could win the money in a simple card game; the other was told that the bill belonged to the experimenter. In this case, the first group found the money more desirable than the second. Again, both groups were asked to estimate their distance from the object in question and again, those who had been told they could win the $100 bill reported it as being closer than those who were told it belonged to the experimenter. The researchers then asked a third set of participants to complete a survey, and told them that it had been designed to assess their sense of humour. Each then watched as their response was graded; half of them were told that their sense of humour was “above average”, and the other half were told that theirs was “below average”. The surveys were then clipped to a stand, and each participant was asked to estimate how far away it was. Those given positive feedback estimated the stand to be closer than those given negative feedback. A perceptual test which did not require a numerical response was then performed. Participants were asked to throw a small rubber bean bag towards a gift voucher placed on the floor in front of them, and told that the person whose toss landed closest to the voucher would win it. One group was told that the voucher had a value of $25, thus making it desirable to them, while the other was led to believe that it was worthless. 
This experiment confirmed the earlier ones – those participants who believed the voucher was worth something perceived it to be nearer, and consequently underthrew the bean bag so that it fell short of the target. The researchers designed one final experiment to rule out the possibility that desirable objects are perceived to be closer because they evoke a strong emotional response. Participants stood opposite a wall onto which two pieces of tape had been stuck. An object was placed onto a table standing beneath the tape. One group saw a brightly packaged box of chocolates, and the other saw a plastic bag which they were told contained a freshly collected sample of dog faeces. (Both chocolate and faeces evoke strong emotional responses.) The participants were then asked to move toward or away from the wall until their distance from it matched that between the two pieces of tape. This time, those shown the chocolate moved further away from the wall than those shown the plastic bag. This seems paradoxical, but is easily explained – the chocolate was perceived to be closer than the faeces because it is the more desirable of the two objects, and so the participants compensated for this by moving further back from it. These findings demonstrate that higher-order psychological states can have a significant effect on visual perception. Specifically, they show that our desires have a direct influence on the perception of distance, such that desirable objects are perceived to be closer than they really are. This mechanism would serve to guide behaviour in the optimum way, by encouraging the perceiver to reach out and acquire the desired object. Further research into this effect is needed, however, as there are other situations in which the opposite could plausibly occur. Undesirable objects which might pose a threat – such as a venomous snake, for example – might also be perceived as being closer than they are so that one can escape quickly. 
- Interpreting hybrid images - How we feel affects what we see - Kicking performance affects perception of goal size Balcetis, E., & Dunning, D. (2009). Wishful Seeing: More Desired Objects Are Seen as Closer. Psychological Science. DOI: 10.1177/0956797609356283. Balcetis, E., & Dunning, D. (2006). See What You Want to See: Motivational Influences on Visual Perception. J. Pers. Soc. Psych. 91: 612-625. [PDF]
A new bacterial strain that can cause the flu has been found in soil samples collected by researchers from across the globe. The researchers at the University of Alberta say they discovered the strain by sequencing soil samples from across Canada. The strain was initially identified in a soil sample taken in 2013 and has since been identified in soil from more than 40 countries, including the United States, Australia, and France. The new strain, named B. bovid, was isolated from soil samples that were collected at a farm in Quebec and the University at Buffalo, New York. Researchers found that the bacterium was able to enter soil and convert to a new type of endocytic strain that could be resistant to antibiotics. The research is the first to show the effectiveness of the new strain and is published online today in Nature Communications. B. bovid is a novel strain of Endotoxin B (EB), which is responsible for the symptoms of the flu. The EB strain is resistant to the antibiotics ciprofloxacin and ciprolizumab, and can survive for several days in the absence of oxygen, making it a viable host for bacteria. However, researchers say they were not able to identify the specific strain of EB that was responsible for producing the new flu strain. B. bovid has also been found to cause other serious illnesses, including respiratory distress, severe liver failure, and brain damage. The University of Toronto, University College London, and the University of California, Berkeley were among the universities involved in the research. The scientists say that B. bovid was found in the soil at the site of the farm where the research was conducted, but they could not identify the exact location because of a lack of a microscope. The team says the research is important because it shows that the new variant of EB is not as dangerous as previously believed, and is more effective than previously thought at fighting the flu in humans. 
The bacteria was isolated in soil at a Saskatchewan farm that has produced B.a.b. since 2012, the researchers say. The university’s Dr. Robert Pappas, who conducted the research, says the results have been very encouraging and that the research shows that we have to continue to monitor these new strains of EB. He says the bacteria is more likely to make the strain that was found to be resistant. Pappas says the B. bovid strain is very resistant to ciproprazole, a drug that has been used to treat EB. Pappas says this is a good sign because if we can get B. bovid to make this new variant, it could help treat people with a very severe form of EB, such as people with kidney stones. He adds that this could mean the new B. bovid strain will have a larger potential to spread and cause a pandemic, if it can be isolated from a larger number of people. “There is hope that this new strain could be used in combination with ciprotrexa, which has already been used in many cases to treat the disease,” he says. “The ciprotrexa drug is very powerful, but it only works if the patient is already infected, so the patient would need to be infected with EB before the drug would work.” The researchers say that B. bovid’s ability to survive and thrive in the environment has implications for the environment as a whole. “It’s important to note that the strain of bacteria we found here in Saskatchewan is not a new strain,” Pappas says. Pappas says the new strains B.ebb and B.abb are likely to be more common in the future. “I think we’re just at the beginning of what’s to come, and we need to take this time to see if we’re going to be able to find other strains that can survive in our environment, and how we can keep them out of the environment.” The research was supported by the Canadian Institutes of Health Research (CIHR), the Canada Science and Engineering Research Council, the Canada Research Chairs Program, and the Canada Excellence Research Channels program.
Chapter 5. Classes and Objects Chapter 3 discusses the primitive types built into the VB.NET language, such as Integer, Long, and Single. The true power of VB.NET, however, lies in its capacity to let the programmer define new types to suit particular problems. It is this ability to create new types that characterizes an object-oriented language. You specify new types in VB.NET by declaring and defining classes. Particular instances of a class are called objects. The difference between a class and an object is the same as the difference between the concept of a Dog and the particular dog who is shedding on your carpet as you read this. You can’t play fetch with the definition of a Dog, only with an instance. A Dog class describes what dogs are like: they have weight, height, eye color, hair color, disposition, and so forth. They also have actions they can perform, such as eat, walk, bark, and sleep. A particular dog (such as my dog Milo) will have a specific weight (62 pounds), height (22 inches), eye color (black), hair color (yellow), disposition (angelic), and so forth. He is capable of all the actions—methods, in programming parlance—of any dog (though if you knew him you might imagine that eating is the only method he implements). The huge advantage of classes in object-oriented programming is that classes encapsulate the characteristics and capabilities of a type in a single, self-contained unit. Suppose, for instance, you want to sort the contents of a Windows listbox ...
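The Dog discussion above can be sketched as a minimal VB.NET class. This is an illustrative sketch, not code from the chapter; the property set, the constructor, and the Demo module are assumptions chosen to mirror the description of Milo.

```vbnet
' A minimal sketch of the Dog class described above (illustrative, not from the book).
Public Class Dog
    ' Characteristics of a Dog (a few of those listed in the text).
    Public Property Name As String
    Public Property Weight As Integer   ' pounds
    Public Property Height As Integer   ' inches
    Public Property EyeColor As String

    Public Sub New(ByVal name As String, ByVal weight As Integer)
        Me.Name = name
        Me.Weight = weight
    End Sub

    ' A capability (method) every Dog instance can perform.
    Public Sub Bark()
        Console.WriteLine(Name & " says: Woof!")
    End Sub
End Class

Module Demo
    Sub Main()
        ' Milo is an object -- a particular instance of the Dog class.
        Dim milo As New Dog("Milo", 62)
        milo.Bark()
    End Sub
End Module
```

Note how the class encapsulates both data (the properties) and behavior (the Bark method) in one self-contained unit, which is the advantage the chapter goes on to develop.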
This lesson aligns with the Common Core State Standards: "STANDARD 5 - Matter and Energy The competent science teacher understands the nature and properties of energy in its various forms, and the processes by which energy is exchanged and/or transformed. 5A. understands the atomic and nuclear structure of matter and the relationship to chemical and physical properties 5D. understands the characteristics and relationships among thermal, acoustical, radiant, electrical, chemical, mechanical, and nuclear energies 5I. demonstrates the ability to use instruments or to explain functions of the technologies used to study matter and energy." Student note guide to PowerPoint Lesson III in the "Electricity and Magnetism" Unit VI - introductory physics. This lesson identifies and describes series and parallel circuits, demonstrates how to calculate electric power and energy use, and discusses electrical safety devices and why they are important. PowerPoint lessons contain many graphics and links to short videos to further student understanding. PowerPoint slides also contain notes for the instructor based on students' most frequently asked questions (FAQs) to help clarify the subject. A suggested class activity is included at the end of each lesson. *All PowerPoint lessons, Activote review programs, student worksheets/activities and assessments for this unit can be found under the "Introductory Physics Complete Unit VI Electricity and Magnetism" zip file folder. Terms/concepts covered in this lesson include: P = I x V E = P x t Lesson III Student PowerPoint Note Guide "Electric Circuits" is licensed under a Creative Commons Attribution 3.0 Unported License
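The two formulas listed above (P = I x V and E = P x t) can be sketched in a few lines of Python. This is an illustrative aid, not part of the lesson materials; the function names and the sample values are assumptions.

```python
# Illustrative sketch of the lesson's two formulas; names and values are assumptions.

def power_watts(current_amps, voltage_volts):
    """Electric power: P = I x V."""
    return current_amps * voltage_volts

def energy_joules(power, time_seconds):
    """Energy use: E = P x t (power in watts, time in seconds)."""
    return power * time_seconds

# A 20 V source driving 2 A dissipates 40 W; running for one minute uses 2400 J.
p = power_watts(2.0, 20.0)
e = energy_joules(p, 60.0)
print(p, e)  # 40.0 2400.0
```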
Determination of Chromatic Number The chromatic number χ of a graph is the smallest number of colors that can be assigned to the vertices of the graph such that no two adjacent vertices are assigned the same color. Thus far, no efficient algorithm has been found that calculates χ for all graphs. Algorithms (that compute χ exactly) and heuristics (that have been developed to obtain a good estimate of χ in a more realistic timeframe) have been investigated and presented. Several of these heuristics, as well as a new variation, have been programmed on a microcomputer to color randomly generated unlabeled graphs for comparison purposes. These heuristics can be applied to find workable solutions to such problems as scheduling timetables and the storage/transportation of products.
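As one concrete illustration of the kind of heuristic described above, a greedy largest-degree-first coloring can be sketched in Python. This is a generic textbook heuristic, not the specific variation presented in the paper; the function name and the sample graph are assumptions.

```python
# Greedy graph-coloring heuristic (largest-degree-first ordering).
# An illustration of the class of heuristics discussed, not the paper's own variation.

def greedy_coloring(adjacency):
    """Color vertices so no two adjacent vertices share a color.

    adjacency: dict mapping each vertex to a set of its neighbors.
    Returns a dict vertex -> color index. The number of distinct colors
    used is an upper bound on the chromatic number (chi), not chi itself.
    """
    # Visit high-degree vertices first; this often (not always) uses fewer colors.
    order = sorted(adjacency, key=lambda v: len(adjacency[v]), reverse=True)
    colors = {}
    for v in order:
        taken = {colors[u] for u in adjacency[v] if u in colors}
        c = 0
        while c in taken:   # smallest color not used by any colored neighbor
            c += 1
        colors[v] = c
    return colors

# A 4-cycle has chromatic number 2; the heuristic finds a valid 2-coloring here.
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
coloring = greedy_coloring(square)
print(max(coloring.values()) + 1)  # 2
```

Because the result is only an upper bound, comparing such heuristics on randomly generated graphs, as the paper does, is how their quality is judged in practice.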
The Gilded Age, generally defined as the period following the Civil War, although more specifically between the election of Rutherford B. Hayes and the end of Reconstruction in 1877 and the Panic of 1893, was a time of immense development in American history. The economic scars of the Civil War had started to heal, even though the social scars were still visible. The economy grew dramatically due to the effects of industrialization and new forms of economic organization, and immigration increased from Eastern Europe, creating a strong trend toward urbanization and diversification. The Gilded Age was also a period of immense graft and corruption, a theme that would be a mainstay of journalistic reporting throughout the era. The federal bureaucracy became ever more clogged with political appointees in sinecures, expanding the spoils system that was the hallmark of the earlier Andrew Jackson administration in the 1830s. Furthermore, political machines drove the politics of major metropolitan cities, and used a system of corruption to ensure the election of desired candidates. Newspapers would play a crucial role in exposing scandals and investigating the wrongdoing of public officials. However, the ethos of journalism of the time was very different from journalism in the modern era. Newspapers commonly took strong political stances, which was reflected in their reporting. Major newspapers were associated with one of the political parties or a particular social movement. Scandals, therefore, were often unearthed by journalists opposed to the policies of a particular politician. The presidential administration of Ulysses S. Grant is widely considered one of the most corrupt in history, although Grant himself is often considered a more minor player in the ongoing scandals that plagued his time in office. There were more than a dozen notable scandals during his administration, but this article will focus on a representative case: the Whiskey Ring.
The Whiskey Ring (1875) was a tax evasion scheme developed by the newly empowered liberal Republican political machine in Missouri. The schemers bribed and cajoled administrators in every phase of the production of whiskey to underreport their numbers to avoid paying the whiskey tax – thereby significantly increasing their profits. The money was then diverted to the local political machine, to increase its power over potential rivals. The scandal was revealed in a series of investigations by Myron Colony, the commercial editor of the St. Louis Democrat, a paper opposed (perhaps obviously) to the Republican machine. Given his position, he was able to request figures from the different stages of the whiskey production process, eventually discovering the fraud when he reconciled the numbers. The story received immediate attention from across the country, and helped to usher in the end of Reconstruction following the 1876 election. The Whiskey Ring was not the most notable scandal in the immediate post-Civil War period, a title that is generally given to the enormous Crédit Mobilier scandal. Here, the U.S. Congress approved funds designated to the Union Pacific Railroad to build the first transcontinental railroad, but the funds were partially diverted to the Crédit Mobilier company and also used to bribe congressmen. Almost $50 million was skimmed in profits, an amount that equals around $700 million today after adjusting for inflation. Newspapers played a crucial role in exposing the scandal. The Sun, a conservative newspaper in New York, received information about the diverted funds and the central role that congressman Oakes Ames took in the scam’s design. The newspaper was opposed to the reelection of Grant as president in 1872, and used the scandal to target him and his administration in reports throughout the election (Grant was uninvolved in the scandal himself).
The level of corruption in national politics was perhaps moderate compared to the political machines that controlled the major urban areas of the country. No machine was more notorious than Tammany Hall, which controlled New York City politics for more than a century, and particularly under William Tweed in the post-Civil War period. Newspapers reported the lurid details of the corruption and graft, but it was the political cartoons drawn by Thomas Nast that were permanently etched in the minds of citizens. Nast, a Republican and a cartoonist for Harper’s Weekly, was deeply disturbed at the level of corruption in the Tweed administration and began a campaign to discredit him. His caricatures of Tweed are very famous in American history, and the attacks discredited Tweed, who eventually lost an election and was later indicted. Newspapers throughout this era, while staunchly partisan, were important fora to communicate prominent issues to the public. The enormous corruption provided instant fodder for the press, but it also ensured that newspapers and journalists upheld the ideals of the fourth estate – to question government and to keep it accountable.
This is a learning center for children to count the cherries on each cupcake and match them to the correct number. Printable numbered cupcakes are provided, or children can use number tiles as well. This is a learning game for children who are learning basic counting and sequencing skills. Children can order the ice cream cones from smallest to largest in size, or they can count the scoops and find the appropriate cherry to put on top. This game is to provide children with extra practice placing and counting. Children draw a number from the pile and place it on the cupcake. Then they must place the correct amount of “cherries” on top of the cupcake. This is an extension of the above counting game. Children choose a sentence strip with either an addition or subtraction problem. Using the cupcake as a board, for the addition problem 3 + 2, children place three cherries, then two cherries on top, then count them all up to solve the problem. I have my children verbally read the sentence with the correct answer. If you don’t have time to sit and listen, have children record their answers on a piece of paper as they go. This game is perfect for Tots and Preschoolers. Children match the tops of the popsicle to the correct bottom based on color. These printables encourage children to build early graphing and tally skills. Using Skittles, M&M’s and Sweethearts children can take their handful and build bar graphs and visual representations of each treat. This multiplication center is designed to show children the concept of multiplying groups. Children draw two number tiles (or roll two dice) and place them in the center. Using Smarties, children build the groups on their board, then use skip counting to add up their total. Space is provided for children to record their answers.
Walk in the park or look out of your bedroom window and chances are you’ll see a garden growing. These special patches of paradise are an ecosystem on our doorsteps. They are home to bugs, birds, and mammals and the plants they depend on. You might even be lucky enough to find a slimy amphibian or a scaly reptile there. But just like you, plants need water to survive. If the climate is too hot, water evaporates, it turns from liquid into gas – water vapour. If the climate is too cold, water freezes, it turns from liquid into solid – ice. So how will our gardens look as the climate changes? As plant species have evolved they have adapted to the different climate conditions that they grow in. Next: Adapting to change
Social Constructivist Learning Theories As Research Framework Education Essay The key principles of constructivism propose that learners build a personal interpretation of the world based on experiences and interactions with knowledge that is embedded in the learning context in which it is used. Learning, viewed from social constructivism or the social learning theories of situated cognition, focuses on learners' prior knowledge and how they construct their understanding based on their contexts or learning culture (Vygotsky, 1978). Social learning theories advocate that students master new learning approaches through interacting with others (Doise, 1990), as knowledge and understanding develop in relationship with the social context (Fickel, 2002). The theories support learning as a social and cultural activity mediated by the social and environmental factors around the learners that stimulate their learning so that growth occurs in the cognitive, psychomotor and affective domains. The instructional or learning strategies that are proposed under constructivism include constructivist teaching, collaborative learning and problem-based learning. In research reported by Helgeson (1994), most, but not all, cases using inquiry-oriented curricula resulted in significant gains in problem-solving skills, and gains in achievement or attitudes towards science. Constructivism informs educators that "learning is constructed, not only in an individual's head, but in the interactions among individuals or between individuals and materials as these occur over time" (Marshall, 1994). Nariansamy (cited in Ijeh, 2003) observes that for many years mathematics was taught in what is referred to as the traditional way, with the teacher transmitting all the knowledge and the child passively accepting it without question.
In the traditional mathematics classroom, where the teacher only shows how and what is to be done, there is little discussion; pupils are seldom given the chance to ask questions if they do not understand something. Often children, who have already built up a fear of mathematics, feel afraid of the teacher and the reaction of peers if they do not understand. On the other hand, a mathematics classroom where meaningful teaching and learning takes place provides a powerful means of communication between the teacher and the students or among the students themselves. In contrast, the traditional mathematics classroom is ironically a place where the children's opinions are never heard. Since 1980, however, the theory of constructivism has been advocated as an effective way of learning and teaching mathematics. According to this theory, learners actively construct their knowledge with the focus on a problem-centered approach based on constructivist perspectives. Constructivists believe that learning is the discovery and transformation of complex information and that traditional teacher-centered instruction of predetermined plans, skills and content is inappropriate (Nicaise & Barnes, 1996). Furthermore, they suggest that situations and social activities shape understanding. They are critical of traditional teachers when they do not provide students with essential contextual features of learning, thus forcing students to rely on superficial, surface-level features of problems without the abilities to apply or use knowledge. Nicaise & Barnes (1996) suggest that learning occurs within the world students experience and that when they deal with problems and situations simulating and representing authenticity, they learn more. The following section discusses constructivist learning theories and problem-solving and problem-centered approaches to teaching and learning mathematics. 1.
Constructivist Perspective on Teaching and Learning Constructivism is an epistemology that views knowledge as being constructed by learners from their prior experience. The learner interacts with his/her environment and thus gains an understanding of its features and characteristics. The learner constructs his/her own conceptualizations and finds his/her own solutions to problems, mastering autonomy and independence. According to constructivism, learning is the result of individual mental construction, whereby the learner learns by dint of matching new against given information and establishing meaningful connections, rather than by internalizing mere factoids to be regurgitated later on. Thanasoulas (2002) notes that in constructivist thinking, learning is inescapably affected by the context and the beliefs and attitudes of the learner. Here, learners are given more latitude in becoming effective problem solvers, identifying and evaluating problems, as well as deciphering ways in which to transfer their learning to these problems. Constructivist learning is based on students' active participation in problem-solving and critical thinking regarding a learning activity that they find relevant and engaging. They are "constructing" their own knowledge by testing ideas and approaches based on their prior knowledge and experience, applying these to a new situation and integrating the new knowledge gained with pre-existing intellectual constructs. In this view, knowledge is gained by an active process of construction rather than by passive assimilation of information or rote memorization. This view of learning sharply contrasts with one in which learning is the passive transmission of information from one individual (teacher) to another (student), a view in which reception, not construction, is the key. According to constructivist learning theory, mathematical knowledge cannot be transferred ready-made from one person (teacher) to another (student).
It ought to be constructed by every individual learner. This theory maintains that students are active meaning-makers who continually construct their own meanings of ideas communicated to them. This is done in terms of their own existing knowledge base. This suggests that a student finds a new mathematical idea meaningful to the extent that he/she is able to form a new concept (Bezuidenhout, 1998). Kamii (1994) states that: "Children have to go through a constructive process similar to our ancestors', at least in part, if they are to understand today's mathematics". Kamii goes on to say that because today's mathematics is the result of centuries of construction by adults, presenting it to children ready-made deprives them of opportunities to do their own thinking. Students today invent the same kinds of procedures our ancestors did and need to go through a similar process of construction to become able to understand adults' mathematics. Students' first methods (algorithms) are admittedly inefficient. However, if they are free to do their own thinking, they invent increasingly efficient procedures just as our ancestors did. By trying to bypass the constructive process, we prevent them from making sense of mathematics. Reys, Suydam, Lindquist & Smith (1998) mention three basic tenets on which constructivism rests. These are: Knowledge is not passively received; rather, knowledge is actively created or invented (constructed) by students. Students create (construct) new mathematical knowledge by reflecting on their physical and mental activities. Learning reflects a social process in which children engage in dialogue and discussion with themselves as well as others (including teachers) as they develop intellectually. There are three types of constructivism that are applicable to mathematics education.
These are known as: Radical constructivism: According to this theory, knowledge cannot simply be transferred ready-made from parent to child or from teacher to student but has to be actively built by each learner in his/her own mind (Glasersfeld, 1992). This implies that students usually deal with meanings, and when instructional programs fail to develop appropriate meanings, students create their own meanings. Ernest (1991) observes that this type of constructivism lacks a social dimension, leaving students to learn on their own. Cobb, Yackel & Wood (1992) also contend that "the suggestion that students can be left to their own devices to construct the mathematical ways of knowing compatible with those of wider society is a contradiction of terms". Social-constructivism: Ernest (1991) comes up with a new type of constructivism called social-constructivism, which views mathematics as a social construction, meaning that students can better construct their knowledge when it is embedded in a social process. Through the use of language and social interchange (i.e. negotiation between the teacher and the students and among the students), individual knowledge (understanding) can be expressed, developed and contested. Socio-constructivism: This type of constructivism was developed only in mathematics education. According to this theory, mathematics is a creative human activity and mathematical learning occurs as students develop effective ways to solve problems. In connection with this, Jones (1997) notes: "Knowledge is the dynamic product of work of individuals operating in the communities, not a solid body of immutable facts and procedures independent of mathematicians." In this view, learning is considered more as a matter of meaning-making and of constructing one's own knowledge than of memorizing mathematical results and absorbing facts from the teacher's mind or the textbook; teaching is the facilitation of knowledge construction and not delivery of information.
Supporters of socio-constructivism theory claim that when individuals (learners as well as the teacher) interact with one another in the classroom, they share their views and experiences and along the way knowledge is constructed. Knowledge is acquired through the sharing of their experiences. Therefore, it is socially constructed (Ernest, 1991; Stein, Silver & Smith, 1998). Vygotsky holds the anti-realist position that the process of knowing is rather a disjunctive one involving the agency of other people and mediated by community and culture. He sees collaborative action to be shaped in childhood when the convergence of speech and practical activity occurs and entails the instrumental use of social speech. Although in adulthood social speech is internalized (it becomes thought), Vygotsky contends, it still preserves its intrinsic collaborative character (Kanselaar, 2002). Vygotsky (in Nicaise & Barnes, 1996) articulated the importance of social discourse when he suggested that cognitive development depends on the child's social interaction with others, where language plays a central role in cognition. Vygotsky believes that social interaction guides students' thinking and concept formation (schema). Conceptual growth occurs when students and teachers share different viewpoints and experiences and understanding changes in response to new perspectives and experiences (Nicaise & Barnes, 1996). The characteristics of socio-constructivism are: mathematics should be taught through problem-solving; students should interact with teachers and other students as well; and students are stimulated to solve problems based on their own strategies (Cobb et al., 1992). 2. Problem-solving and Problem-centered Approaches to teaching and learning mathematics a) Problem-solving approach A problem-solving approach is an approach to teaching mathematics.
With this approach the focus is on teaching mathematical topics through problem-solving contexts and enquiry-oriented environments which are characterized by the teacher helping students construct a deep understanding of mathematical ideas and processes by engaging them in doing mathematics: creating, conjecturing, exploring, testing and verifying (Lester, Masingila, Mau, Lambdin, dos Santon & Raymond in Taplin, 2007). According to Taplin's (2007) review of research reports, specific characteristics of a problem-solving approach include: - interactions among students as well as between teachers and students; - mathematical dialogue and consensus between students; - teachers providing just enough information to establish the background/intent of the problem and students clarifying, interpreting and attempting to construct one or more solution processes; - teachers accepting right/wrong answers in a non-evaluative way; - teachers guiding, coaching, asking insightful questions and sharing in the process of solving problems; - teachers knowing when it is appropriate to intervene and when to step back and let the pupils make their own way; and - the possibility of using such an approach to encourage students to make generalizations about rules and concepts, a process which is central to mathematics. b) Problem-centered approach A problem-centered approach is also an approach to mathematics education that is based on problem-solving. We could just as easily have called this a learner-centered approach or, to use the more formal term, constructivist; it follows the theory that learning occurs when students construct their own knowledge. In problem-centered mathematics instruction, students construct their own understanding of mathematics through solving reality-based problems, presenting their solutions and learning from one another's methods.
The learner interprets the problem conditions in the light of his/her repertoire of experiences (knowledge and strategies previously assimilated). The teacher provides the necessary scaffolding during this process. Problem-centered approach theory opposes the view that mathematics is a ready-made system of rules and procedures to be learned; a static body of knowledge. According to this theory, mathematics is a human activity and students must engage in a way similar to the genetic development of the object. Supporters of this theory hold that students should not be considered as passive recipients of ready-made mathematics, but rather that education should guide the students towards using opportunities to invent (re-invent) mathematics by doing it themselves (Ndlovu, 2004). Students should be given the opportunity to experience their mathematical knowledge as the product of their own mathematical activity. In a problem-centered approach, instruction begins with reality-based problems, dilemmas and open-ended questions. The learners acquire knowledge from the solution of problems. They engage in a variety of problem situations and along the process learn mathematical content (Hiebert, Carpenter, Fennema, Fuson, Human, Murray, Olivier & Wearne, 1996). They also use mathematical knowledge to solve real life problems. c) The role of social interaction The problem-centered classroom is a place where problem posing and problem-solving takes place. These processes are characterized by invention, explanation, negotiation, sharing and evaluation (Nakin, 2003). As Murray, Olivier & Human (1993) point out in this regard, social interaction creates the opportunity for children to talk about thinking and encourages reflection; students learn not only from their own constructions but also from one another and through interaction with the teacher. 
The opportunity to exchange, discuss and evaluate one's own ideas and the ideas of others encourages decentration (the diminution of egocentricity), thereby leading to a more critical and realistic view of the self and others (Piaget in Post, 1980). d) The role of the teacher In a problem-centered classroom, the role of the teacher is no longer that of transmitter of knowledge to students, but rather a facilitator of their learning. He/she has "the role of selecting and posing appropriate sequences of problems as opportunities for learning, of sharing information when it is necessary for tackling problems, and of facilitating the establishment of a classroom culture in which pupils work on novel problems individually and interactively, and discuss and reflect on their own answers and methods" (Hiebert, Carpenter, Fennema, Fuson, Human, Murray, Olivier & Wearne, 1997). Casey (1997) compares traditional views with current views on the roles of teachers and learners in learning as follows: "The old teaching paradigm implies that learning only happens when the teacher puts information into children's heads. The new paradigm does not imply that the teacher is unnecessary... a knowledgeable teacher who acts as a guide, facilitator, or fellow learner is essential". Teachers have to constantly assist and support individual learners to develop their cognition at their own level and pace. The teacher has to plan, set up, manage and evaluate the teaching and learning activities to benefit the total development of every individual in the classroom. He/she must be thoroughly organized in planning appropriate activities, providing opportunities and creating a classroom atmosphere with his/her learning objectives in mind.
He/she should create learning environments containing multiple sources of information and multiple viewpoints where students think, explore and construct meaning (Nicaise & Barnes, 1996), as well as situations that develop creative thinking, and should develop a wide range of problem-centered activities and materials that aid problem-solving development in learners. He/she should also encourage learners to think critically, adapt ideas that make sense to them, invent many different ways to solve problems and expand and enhance the development of mathematical concepts through problem-solving activities. Teachers guide learners to discover and develop mathematics skills, such as active inquiry and reflection, in order to analyze and synthesize information, solve problems and successfully construct new knowledge through creative participation and understanding. Progressive teachers facilitate learning by selecting and implementing suitable learning matter and by motivating learners to improve their personal skills and abilities through the use of different materials and tools, such as computers. Teachers observe and evaluate learners' progress and provide them with relevant feedback in this regard. They thus monitor and guide rather than dominate and direct learning activities (Bonk & King, 1998; Newby, Stepich & Russel, 2000). e) The role of the learners In the problem-centered approach, learners choose and share their methods (Hiebert et al., 1997). Learners should also be free to express themselves without fear of reprisal. Mistakes are often as constructive as the correct strategies in helping learners to understand the mathematics involved (Erickson, 1999; Hiebert et al., 1997). According to Hiebert et al. (1997), mistakes provide opportunities for examining errors in reasoning, and thereby raise learners' level of analysis. Learners should realize that learning means learning from others and must take advantage of others' ideas and the results of their investigations.
Communication and Collaboration Communication and collaboration are two of the essential processes in understanding and learning mathematics because they allow students to reflect, share and discuss their understandings of mathematical concepts and procedures (Ontario Ministry of Education, 2007). Gadanidis, Graham, McDougal and Roulet (2002) underline the importance of collaboration and communication by arguing that "mathematical learning is a social activity that helps students learn from listening and sharing and also from watching the actions, movements, and manipulations of others". The paper-and-pencil method is the most preferred method for students to demonstrate their solution processes in face-to-face environments because students can easily share their solution procedures and describe what they did by drawing tables or sketching diagrams. As a consequence of this communication, they can improve their understandings and cognitive abilities. However, this approach is rather challenging in online environments, and the existence of "obstacles associated with text-based communication interfaces, where it is difficult to express ideas with mathematical language and graphical representation" may prevent effective collaboration (Gadanidis et al., 2002). This challenge will be the main focus of our study. Collaborative learning has its roots in the theory of distributed cognition introduced by Lev Vygotsky in the 1930s, as meta-analyzed by Morgan, Brickell, and Harper (2008). Distributed cognition describes how people interact with their environment, including each other, in order to advance their cognitive abilities, but not capacities. Sweller (1999) explained the limits of the cognitive capacity of the individual by introducing cognitive load theory and possible solutions for the effective use of this limited capacity.
Collaborative learning enhanced by peer interaction, not only in face-to-face settings but also in online environments, has been shown to provide a two-way benefit for learners (Tseng and Tsai, 2007). They analyzed online peer assessment and the role of peer feedback and demonstrated that students learn better by getting feedback as well as by providing it. This chapter describes the methodological rationale for the study. There is a range of methodologies and methods available to researchers. In mathematics education, researchers should make explicit the theories that influence their work, since these theories influence both the ways in which they work in the classroom and the ways they analyze their data. The following research design is structured according to Crotty's (1998) suggested research processes. Crotty (1998) argues that in developing a research design the researcher should answer two basic questions: firstly, what methodologies and methods will be employed in the research, and secondly, how this choice and use of methods and methodologies is justified. The second question deals not only with the purpose of the research but also with the researcher's understanding of reality (theoretical perspective) and of what human knowledge is and what it entails (epistemology) (Crotty, 1998). Thus the two initial questions expand into four: what epistemology is embedded in the theoretical perspective, what theoretical perspective lies behind the methodology, what methodology governs our choice and use of methods, and what methods are proposed to be used. These four elements are presented separately because each element is substantially different from the others (Crotty, 1998). Epistemology refers to the theory of knowledge embedded in the theoretical perspective (Crotty, 1998).
This study is based on a social constructivist view of learning: pupils learn mathematics through active construction of their own knowledge, and this can be facilitated in a computer environment through the interactive process of conjecture, feedback, critical thinking, discovery and collaboration (Howard et al., 1990). I consider constructivism to be the socially collective generation and construction of meanings rather than a meaning-making activity of the individual mind, as Crotty (1998) claims. I do not take constructivism to highlight the unique experience of an individual that tends to resist the critical spirit (Crotty, 1998); in contrast, I ground my research on a social constructivist view of knowledge in which: The meanings are negotiated socially and historically. In other words, they are not simply imprinted on individuals but are formed through interaction with others (hence social constructivism) and through historical and cultural norms that operate in individuals' lives (Creswell, 2003). Social constructivism claims that rather than being transmitted, knowledge is created or constructed by each learner (Leidner and Jarvenpaa, 1995); there is no knowledge independent of the meaning attributed to experience constructed by the learner (Hein, 1991). According to certain cognitive theories, learning does not involve a passive reception of information; instead, the learning process can be regarded as an active construction of knowledge in learner-centered instruction (Kapa, 1999). Constructivism claims that students cannot be given knowledge; students learn best when they discover things, build their own theories and try them out, rather than when they are simply told or instructed. Vygotsky argues that: Direct teaching of concepts is impossible and fruitless.
A teacher who tries to do this accomplishes nothing but empty verbalism, a parrot-like repetition of words by the child, simulating a knowledge of the corresponding concepts but actually covering up a vacuum (Vygotsky, 1962). By participating in social constructivist activity, however, students have the opportunity not only to learn mathematical skills and procedures, but also to explain and justify their own thinking and discuss their observations (Silver, 1996). From a social constructivist perspective, ICT offers teachers a powerful pedagogical tool-kit (O'Neill, 1998). Hoyles (1991) argues that in mathematics lessons involving computers, learning is achieved through social interaction for three reasons: the social nature of mathematics; the collaboration that computer-based activities invite; and the basis for viewing the computer as one of the partners in the discourse. The theoretical perspective refers to the philosophical stance informing the methodology, providing a context for the process followed and justifying its logic (Crotty, 1998). Recognizing the fact that there are multiple socially constructed realities, this study adopts the interpretive paradigm, and more specifically symbolic interactionism, as the primary theoretical perspective. The interpretive paradigm emerged in the social sciences to break away from the constraints imposed by positivism (Boghossian, 2006). The main aspiration of the interpretive paradigm is to understand "the subjective world of human experience" (Cohen et al., 2007). In order to maintain the integrity of the investigated phenomena, researchers make efforts to enter into the culture and find out the insider's perspective. Interpretive researchers see reality as a social construct and try to understand individuals' interpretations of the world around them (Bassey, 1992). Researchers work directly with experience and understanding in order to see the observer's view point and thus build the theory (Cohen et al., 2007).
As seen above, Creswell (2007) argues that the meanings constructed are negotiated socially and historically. The interpretivist approach looks for 'culturally derived and historically situated interpretations of the social life', while symbolic interactionism 'explores the understanding and meanings in culture as the meaningful matrix that guides our lives' (Crotty, 1998). This is directly linked to the purpose of the research: to get inside the classroom and see how the computer software could be used as an aid in pupils' understanding of mathematics. Only through significant symbols, for example language and other symbolic tools which humans within a culture share and use to communicate, can researchers become aware of the insiders' perceptions and attitudes and interpret their meanings and intentions; hence symbolic interactionism (Cohen et al., 2007; Crotty, 1998).
World History: Journey Across Time Web Activity Lesson Plans During the Middle Ages, villagers and townspeople turned to nobles to protect them. The shift in power from kings to nobles led to the creation of feudalism. As students learned in this chapter, under this system, knights were the most powerful soldiers in Europe. They commanded respect and were rewarded for their bravery. In this activity, students will analyze the military, social, and political significance of the medieval knight. They will learn about the armor and weapons used by knights, and the code by which they lived. Students will visit Knights and Armor to discover what it was like to be a knight in Medieval Europe. Destination Title: Knights and Armor Students will learn about the knights of Medieval Europe. They will read about the armor, weapons, training, and code of the knights. They will analyze the influence the knights had in medieval times and how this influence grew over time. Students will answer four questions about what they have learned. They will then prepare a Venn diagram, comparing and contrasting the "real" and "romantic" knights. This activity will help students apply what they've learned. - Students will describe medieval knights, including their armor, weapons, and responsibilities. - Students will analyze the military, social, and political significance of the medieval knight. They will complete a Venn diagram comparing and contrasting the "real" and "romantic" knights. Student Web Activity Answers - Knights are compared to tanks. The invention of the stirrup helped knights keep their balance while charging their enemies. - Armor was expensive, so in order to be a knight one had to have the money to purchase the necessary equipment. Thus, the possession of armor symbolized wealth. - Knights were bound to offer military service up to 40 days a year in peace time and more during war time. 
Their military duties included castle guard, serving in the lord's "bodyguard," and participating in battle. In addition to these duties, the knight could administer justice, manage his estates, and continue to perfect his combat skills in tournament. - Knights had two reasons for participating in the Crusades. First, they hoped to reclaim the holy land. In addition, they wanted to carve out for themselves fiefs and kingdoms in this land of "milk and honey." - Answers will vary.
Graphic design is the art of communication, stylizing, and problem-solving through the use of type, space, and image. The field is also often erroneously referred to as Visual Communication or Communication Design due to overlapping skills involved. Graphic designers use various methods to create and combine words, symbols, and images to create a visual representation of ideas and messages. A graphic designer may use a combination of typography, visual arts and page layout techniques to produce a final result. Graphic design often refers to both the process (designing) by which the communication is created and the products (designs) which are generated. Common uses of graphic design include identity (logos and branding), publications (magazines, newspapers and books), print advertisements, posters, billboards, website graphics and elements, signs and product packaging. For example, a product package might include a logo or other artwork, organized text and pure design elements such as images, shapes and color which unify the piece. Composition is one of the most important features of graphic design, especially when using pre-existing materials or diverse elements. Microsoft Word is a word processor developed by Microsoft. It was first released in 1983 under the name Multi-Tool Word for Xenix systems. Subsequent versions were later written for several other platforms including IBM PCs running DOS (1983), the Apple Macintosh (1985), the AT&T Unix PC (1985), Atari ST (1988), SCO UNIX (1994), OS/2 (1989), and Windows (1989). Commercial versions of Word are licensed as a standalone product or as a component of Microsoft Office, Windows RT or the discontinued Microsoft Works Suite. Freeware editions of Word are Microsoft Word Viewer and Word Web App on SkyDrive, both of which have limited feature sets. Sentence spacing is the horizontal space between sentences in typeset text. It is a matter of typographical convention. 
Since the introduction of movable-type printing in Europe, various sentence spacing conventions have been used in languages with a Latin-derived alphabet. These include a normal word space (as between the words in a sentence), a single enlarged space, two full spaces, and, most recently in digital media, no space. Although modern digital fonts can automatically adjust a single word space to create visually pleasing and consistent spacing following terminal punctuation, most debate is about whether to strike a keyboard's spacebar once or twice between sentences. Until the 20th century, publishing houses and printers in many countries used additional space between sentences. There were exceptions to this traditional spacing method—some printers used spacing between sentences that was no wider than word spacing. This was French spacing—a term synonymous with single-space sentence spacing until the late 20th century. With the introduction of the typewriter in the late 19th century, typists used two spaces between sentences to mimic the style used by traditional typesetters. While wide sentence spacing was phased out in the printing industry in the mid-twentieth century, the practice continued on typewriters and later on computers. Perhaps because of this, many modern sources now incorrectly claim that wide spacing was created for the typewriter.
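The single-space convention described above can also be enforced mechanically in digital text. Below is a minimal, hypothetical sketch in Python (the function name and the regular expression are illustrative assumptions, not part of any standard tool); it assumes sentences end with a period, question mark, or exclamation point followed by extra spaces.

```python
import re

def single_space_sentences(text: str) -> str:
    """Collapse two or more spaces after terminal punctuation to one.

    A hypothetical helper illustrating the 'one space, not two'
    convention; it only handles ., ?, and ! followed by runs of spaces.
    """
    return re.sub(r'([.?!])  +', r'\1 ', text)

print(single_space_sentences("First sentence.  Second sentence.  Third."))
# First sentence. Second sentence. Third.
```

Note that this deliberately leaves single spaces untouched, so already-normalized text passes through unchanged.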
The j sound, spelled j, dge, ge, and g(i) The j sound is an affricate and is a voiced ch sound. Don't let the various spellings for this sound confuse you! Hi everyone, and welcome back to Seattle Learning Academy's American English Pronunciation Podcast. My name is Mandy, and this is our 90th podcast. Two weeks ago I talked about the ch sound (ch sound). Today I'm going to explain a related sound, the j sound (j sound) as in the words jump, strange and giant. The ch sound and j sound are both affricates. An affricate is a type of sound created when we stop all the air from leaving the vocal tract, and then, when we release the air, we do it with friction, or a little extra sound. English has only two affricate sounds, the ch sound and j sound. The only difference between the ch sound and j sound is voicing. The ch sound is unvoiced, and the j sound is voiced. You can feel the difference between voiced and unvoiced sounds by placing a finger or two against the front of your neck. You will feel the vibration of a voiced sound, but not the unvoiced sounds. That vibration is created by your vocal cords. Feel both of these sounds: (ch sound, j sound, ch sound, j sound) Be careful. If you add a vowel sound to the ch sound, and say it like cha, you will be adding a voiced sound to the ch sound, and you'll feel the vibration of that sound, which may confuse you. The ch sound is pronounced as (ch sound) and the j sound as (j sound). The sounds aren't cha and ja, but simply (ch sound) and (j sound). I hope you remember from two weeks ago that the ch sound began with the tongue in the same position as a t sound. Since the j sound is a voiced ch sound, it should not be a surprise that the j sound begins by stopping the air with the tongue in the same position as the d sound (j sound). With the j sound, just like with the ch sound, the stop is released with friction. That friction, if I were to hold it, would sound like (zh sound).
If I combine (d sound) and (zh sound), I get (j sound), the j sound. There is also a spelling concept that is the same between the ch sound and j sound. The ch sound can be spelled tch, as in the words watch and catch. There is no additional t sound in those words. It is only a ch sound. The j sound has a similar concept in the dge spelling, as in the words judge and bridge. Although we see the letter d there, we do not add an extra d sound to the word. The dge spelling is pronounced as just the j sound. There is obviously no letter j in the words strange or giant, yet they are pronounced with a j sound. The letter g, when followed by the letters e or i, is generally pronounced as the j sound. So the words strange and giant are both pronounced with the j sound. We're going to do two sets of practice today: one is just the j sound with its various spellings, then we'll practice a few minimal sets between the j sound and ch sound. Repeat the following words after me. j sound spelled j: j sound spelled dge: j sound spelled ge or gi: Here is the minimal set practice between the ch sound and j sound. I'll say the word with the ch sound first: As a quick review, here are the key points to remember about the j sound: - a j sound is a voiced ch sound - there is no additional d sound when the j sound is spelled dge - ge and gi are also common j sound spellings That's all for today everyone. Don't forget you can find transcripts for this, and all of our shows, at www.pronunicna.com/podcast, and you can follow us on Twitter, username pronuncian, to get all the updates on new Pronuncian content as well as other interesting English bits. This has been a Seattle Learning Academy digital publication. Seattle Learning Academy is where the world comes to learn. About the ESL/ELL Teacher Mandy has been teaching ESL, pronunciation and accent reduction since 2005 at Seattle Learning Academy, an English language school in Seattle, Washington, USA.
She uses her experience with intermediate to advanced students to create the topics that most affect students living and working in the United States and can help them communicate better and more clearly.
Our History curriculum includes half termly topics informed by the national curriculum and Development Matters for all children from Nursery to Year 6. We aim to offer a broad and ambitious history education that will help pupils gain a coherent knowledge and understanding of Britain’s past and that of the wider world. Our History curriculum is accessible to all, maximising the outcomes for every child so that they know more, remember more and understand more. It will inspire pupils’ curiosity to know more about the past and is structured to ensure that learning is sequential and cumulative. The history curriculum links to the golden threads through: The teaching of history will equip pupils to ask perceptive questions, think critically, weigh evidence, sift arguments, and develop perspective and judgement. History helps pupils to understand the complexity of people’s lives, the process of change, the diversity of societies and relationships between different groups. It also helps children gain a sense of their own identity within a social, political, cultural and economic background. Because of this, we feel it is important for the subject to be taught discretely as well as incorporated within other curriculum subjects such as English and Art. Our History curriculum aims to excite the children and allow them to develop their own skills as historians. We use the Cornerstones Curriculum to help us to achieve this aim. This is a creative and thematic approach to learning that is mapped to the Primary National Curriculum to ensure comprehensive coverage of national expectations. Teachers have identified the key knowledge and skills of each topic and consideration has been given to ensure progression across topics throughout each year group across the school. By the end of year 6, children will have a chronological understanding of British history from the Stone Age to the present day. 
Interlinked with this are studies of world history, such as the ancient civilisations of Greece and the Mayans. They will be able to draw comparisons and make connections between different time periods and their own lives. To ensure connections and comparisons can be made, each KS2 history topic must focus on five aspects: food, clothing, houses, beliefs and achievements. By doing so, children can build a bank of knowledge of how these different aspects have developed throughout history. Each history topic is supported by a topic box of resources, which includes costumes, primary and secondary sources, and knowledge organisers. Educational visits are another opportunity for the teachers to plan for additional history learning outside the classroom. At Thornaby Church of England Primary School, the children have many opportunities to experience history on educational visits. The children will explore local museums and engage with visitors in school to share history learning and participate in ‘hands on’ experiences. The Early Years Foundation Stage (EYFS) follows the Development Matters non-statutory guidance for the Early Years Foundation Stage, which aims for all children in Reception to reach the ELG Past and Present by the end of their reception year. This includes understanding the past through stories and identifying similarities and differences between then and now, which lays the foundations of children’s understanding of the past and how it influences life today. The history curriculum at Thornaby is high quality, well thought out and is planned to demonstrate progression. We focus on progression of knowledge and skills together with discrete vocabulary progression within the units of work. Outcomes in topic and literacy books evidence a broad and balanced history curriculum and demonstrate the children’s acquisition of identified key knowledge.
We measure the impact of our curriculum through the following methods: In May 2022, the whole school took part in celebrations for Queen Elizabeth’s Platinum Jubilee. Each child designed and painted a piece of bunting that was displayed within the school. The children learnt about the different decades of the Queen’s reign and produced a Time Tunnel through the ages. The whole school enjoyed a picnic one afternoon where we enjoyed sandwiches and biscuits and listened to each year group singing a song from the decades. There were even prizes for the best crown made in each class.
Tempered glass is about four times stronger than annealed glass. The greater contraction of the inner layer during manufacturing induces compressive stresses in the surface of the glass, balanced by tensile stresses in the body of the glass. Fully tempered 6-mm thick glass must have either a minimum surface compression of 69 MPa (10 000 psi) or an edge compression of not less than 67 MPa (9 700 psi). For it to be considered safety glass, the surface compressive stress should exceed 100 megapascals (15,000 psi). As a result of the increased surface stress, when broken the glass breaks into small rounded chunks as opposed to sharp jagged shards. Compressive surface stresses give tempered glass increased strength. Annealed glass has almost no internal stress and usually forms microscopic cracks on its surface. Tension applied to the glass can drive crack propagation which, once begun, concentrates tension at the tip of the crack, driving crack propagation at the speed of sound through the glass. Consequently, annealed glass is fragile and breaks into irregular and sharp pieces. The compressive stresses on the surface of tempered glass hold such surface flaws closed, preventing their propagation or expansion. Any cutting or grinding must be done prior to tempering. Cutting, grinding, and sharp impacts after tempering will cause the glass to fracture. The strain pattern resulting from tempering can be observed by viewing the glass through an optical polarizer, such as a pair of polarizing sunglasses. Tempered glass is used when strength, thermal resistance, and safety are important considerations. Passenger vehicles, for example, have all three requirements. Since they are stored outdoors, they are subject to constant heating and cooling as well as dramatic temperature changes throughout the year. Moreover, they must withstand small impacts from road debris such as stones as well as road accidents.
Because large, sharp glass shards would present additional and unacceptable danger to passengers, tempered glass is used so that if broken, the pieces are blunt and mostly harmless. The windscreen is instead made of laminated glass, which will not shatter into pieces when broken, while side windows and the rear window have historically been made of tempered glass. Some newer luxury vehicles have laminated side windows to meet occupant retention regulations, for anti-theft purposes, or for sound-deadening purposes. Other typical applications of tempered glass include: - Balcony doors - Athletic facilities - Swimming pools - Shower doors and bathroom areas - Exhibition areas and displays - Computer towers or cases Buildings and structures Tempered glass is also used in buildings for unframed assemblies (such as frameless glass doors), structurally loaded applications, and any other application that would become dangerous in the event of human impact. Building codes in the United States require tempered or laminated glass in several situations including some skylights, glass installed near doorways and stairways, large windows, windows which extend close to floor level, sliding doors, elevators, fire department access panels, and glass installed near swimming pools.
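The compression thresholds quoted earlier (at least 69 MPa surface or 67 MPa edge compression for fully tempered 6-mm glass, and over 100 MPa surface compression for safety glass) can be expressed as a small classification helper. This is a hypothetical sketch using only the figures quoted here; the function name is made up, and real standards impose additional conditions beyond these numbers.

```python
def classify_tempered(surface_mpa: float, edge_mpa: float) -> str:
    """Classify 6-mm glass against the compression figures quoted above.

    Hypothetical helper: thresholds (69, 67, 100 MPa) come from the text;
    actual standards have further requirements.
    """
    if surface_mpa > 100:          # safety-glass criterion
        return "safety glass"
    if surface_mpa >= 69 or edge_mpa >= 67:  # fully tempered criterion
        return "fully tempered"
    return "not fully tempered"

print(classify_tempered(surface_mpa=110, edge_mpa=0))   # safety glass
print(classify_tempered(surface_mpa=70, edge_mpa=0))    # fully tempered
```

Note that glass meeting the safety-glass criterion also meets the fully tempered one; the helper simply reports the highest class reached.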
The familiar name “Neanderthal” came from the place where scientists found the first skulls in 1856, the Neander Valley in Germany. Writers have published numerous articles about Neanderthals. Most of the articles have been very misleading about who the Neanderthals were, what they looked like, how they lived, and what connection they have to modern humans. Neanderthal research presents a changing picture. The popular perception of Neanderthals has been connected to the term “ape-man” often used to describe them. At the Max Planck Institute early in the 20th century, a French paleontologist depicted Neanderthals as “apelike and backward.” In 1953, a movie titled The Neanderthal Man popularized them as primitive humans with passions and desires common to apes. The view for years was that the Neanderthals were brutes who huddled in cold caves gnawing on slabs of slain mammoths. The truth is that Neanderthals walked upright and had larger brains and larger lung capacities than modern humans. They made complex tools, built shelters, created and traded jewelry, wore clothes, created art, buried their dead, had language and a form of worship. What has convinced scientists to change their understanding has been Neanderthal research and the sequencing of the Neanderthal genome. Comparisons of the Neanderthal genome and the modern European genome show that up to 4% of modern human genes came from Neanderthals. They were not brutes or ape-men. They were totally human. Probably much of the reason for the negative stereotyping is the “out of Africa” scenario promoted by many as the origin of human history. Some scientists have not wanted to admit that human origins seem to have come from a more northern source. Dr.
Joao Zilhao, a Portuguese paleoanthropologist and an expert on Neanderthals, says: “The mainstream narrative of our origins has been fairly straightforward: the exodus of modern humans from Africa was depicted like it was a biblical event: Chosen ones replacing debased Europeans, the Neanderthals. Nonsense, all of it.” Neanderthals were not apes or brutes of a different species of humans. They were a race of humans that had specific physiological characteristics that are somewhat different from the appearance of humans today. The Neanderthal Museum near Dusseldorf, Germany, displays a recreation of a Neanderthal by renowned paleo-artists Adrie and Alfons Kennis. He is groomed, wearing a business suit, and looking like the politician he could have been. For that matter, his name might have been Adam. As Neanderthal research continues, we will see what develops. — John N. Clayton Reference: Smithsonian Magazine, May 2019.
Websites to support Home Learning It is important that children are safe online. It will be especially important for parents and carers to be aware of what their children are being asked to do online, including the sites they will be asked to access. Parents and carers should be aware of the importance of understanding what sites are safe and can be trusted. Grange Park Junior School will be providing weekly e-safety lessons for children to complete. The below poster from London Grid for Learning provides six top tips to keep children safe online during school closures. The following links will also support parents and carers in keeping their children safe online: - Six top tips to keep children safe online - LGfL Poster - Internet matters - for support for parents and carers to keep their children safe online - London Grid for Learning - for support for parents and carers to keep their children safe online - Net-aware - for support for parents and carers from the NSPCC - Parent info - for support for parents and carers to keep their children safe online - Thinkuknow - for advice from the National Crime Agency to stay safe online - UK Safe Internet Centre - advice for parents and carers In addition to the weekly homework that is assigned every Monday, Year 6 is given home learning from their set teacher. Home learning could include the use of: CGP books (Study books or 10-Minute test books), worksheets specific to the lesson taught that day, or online work using one of their school logins. Children may not be given home learning every day; however, we encourage our children to read for a minimum of twenty minutes a day and record their reading (along with their summary of what they just read) in their Reading Record. Please refer to the Tool Box document for more resources to support your child at home.
School Closure Home Learning Each child has been given a home learning pack, which includes the following: - Home Learning Booklet - Exercise book - Project book (brown) - Exercise book - Plain book (red) - CGP 10 minute test books - CGP Practice Tests Home Learning will be assigned weekly for children. Please follow the schedule set out for each week.
Uranium-236 (236U) is present on Earth due to natural and anthropogenic production. However, the estimated inventory of anthropogenic 236U (10⁶ kg) largely exceeds the natural one (30 kg). Releasing even a tiny fraction of this artificial isotope would drastically change the environmental 236U/238U ratio and therefore make this ratio a useful isotopic marker to trace such anthropogenic releases and to study seawater transport and mixing processes in the ocean. Casacuberta and her colleagues took this premise to propose the first transect of 236U in the North Atlantic Ocean during the first two legs of the GEOTRACES GA02 cruise. Global fallout and the releases of the European nuclear reprocessing plants at La Hague and Sellafield are the main sources of 236U to the ocean. Results of this transect showed that there is a north-to-south and surface-to-deep decreasing gradient of the 236U/238U atomic ratio when moving from 64ºN to 2ºN, mirroring the distribution of water masses in this region. In particular, the highest 236U contents are found in the Labrador Sea Water and Denmark Strait Overflow Water, tracing the penetration of waters into the North Atlantic Ocean that carry the 236U signal from the two European reprocessing plants. This pioneering work shows that 236U can be an efficient transient water mass tracer. Figure: Distribution of 236U in the North Atlantic Ocean. Casacuberta, N., Christl, M., Lachner, J., Rutgers van der Loeff, M., Masqué, P., & Synal, H.-A. (2014). A first transect of 236U in the North Atlantic Ocean. Geochimica et Cosmochimica Acta, 133, 34–46. doi:10.1016/j.gca.2014.02.012.
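The claim that releasing even a tiny fraction of the anthropogenic stock would drastically change the environmental ratio can be checked with the inventory figures quoted above, assuming the anthropogenic inventory is 10⁶ kg (the "106 kg" in the text reads like a flattened superscript) against 30 kg natural, and assuming uniform mixing into the same 238U pool. A rough sketch, with made-up function and constant names:

```python
ANTHROPOGENIC_KG = 1e6   # estimated anthropogenic 236U inventory (from text)
NATURAL_KG = 30.0        # estimated natural 236U inventory (from text)

def ratio_enhancement(released_fraction: float) -> float:
    """Factor by which the environmental 236U inventory (and hence the
    236U/238U ratio, for a fixed 238U pool) would exceed the natural
    background if the given fraction of the anthropogenic stock were
    released. Simplified: ignores geography, mixing times, and decay.
    """
    return 1.0 + released_fraction * ANTHROPOGENIC_KG / NATURAL_KG

# Releasing just 0.1% of the anthropogenic inventory:
print(round(ratio_enhancement(0.001), 1))  # 34.3
```

Even a 0.1% release would raise the environmental 236U/238U ratio more than thirtyfold over the natural background, which is why the ratio works as a sensitive tracer of anthropogenic releases.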
I turn, again, to sharing notes from Jeff Wynn and Louise Wynn, Everyone is a Believer: The Growing Convergence of Science and Religion (2019): But what caused the “vapor of darkness” described in 3 Nephi [chapter 8, verses] 19 and 20? This was almost certainly a smothering blanket of volcanic ash. As attention-garnering as it was, Mount St. Helens 1980 was a relatively small eruption (as we said earlier, a “mere” VEI level 5).* Yet this event still lofted about 3 cubic kilometers of material and left nearly a meter-thick blanket of ash on Yakima, Washington, 240 kilometers to the east, within a few hours of its eruption. Can ash put out fires? Yes — ask any forest fire fighter (one of us worked his way through college fighting forest fires each summer) or ask anyone who learned how to shovel dirt and ashes onto a campfire to smother it. (38) Are there any candidates for the actual volcano that caused the destruction described in 3 Nephi? There are at least two, according to the Wynns. But first a few words of explanation: Ash can travel all around the world, borne by the wind. But tephra, the material blown out from an erupting volcano in small fragments (ranging from the size of peas to that of a cantaloupe), doesn’t travel so far. Thus by measuring the distance that tephra has travelled, researchers can gain some idea of the magnitude of an eruption. The Wynns mention two candidate volcanoes: - Masaya, in Nicaragua, which erupted about 2100 years ago (plus or minus a century). It deposited tephra as far as 170 kilometers away. - Chiletepe, also in Nicaragua, which erupted about 1900 years ago (plus or minus a century). It dropped tephra as far away as 570 kilometers. The Masaya eruption lofted approximately 8 cubic kilometers of ash and tephra, nearly three times more than Mount St. Helens in 1980. Both Chiletepe and Masaya lie east of the subduction zone where the Cocos Plate is being over-ridden by the Caribbean Plate at a rate of nearly 7 cm/year.
This rate of crustal movement is important, because it is nearly three times faster than the Cascadia subduction rate in the Pacific Northwest of the US. This faster subduction rate means that there are proportionally larger and more frequent volcanic eruptions in Nicaragua than in the Washington and Oregon Cascades. Central America is basically a gargantuan pile of volcanic lava, tephra, and ash covered with recent soils and vegetation. In that sense, 3 Nephi 8 doesn’t record a particularly remarkable event for Central America — except for the timing of it. In other words, the Book of Mormon is fully conformable with the geologic record of Central America. (38, emphasis in the original)
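The comparisons quoted in the passage can be verified with simple arithmetic. A sketch using only the figures given above; the Cascadia rate itself is not stated in the text and is back-calculated from the "nearly three times faster" claim, so treat it as an assumption:

```python
# Quick arithmetic check of the comparisons quoted in the passage.
st_helens_km3 = 3.0    # material lofted by Mount St. Helens, 1980 (from the text)
masaya_km3 = 8.0       # ash and tephra from the Masaya eruption (from the text)

volume_ratio = masaya_km3 / st_helens_km3
print(f"Masaya / St. Helens volume: {volume_ratio:.1f}x")  # ~2.7, i.e. "nearly three times"

cocos_rate_cm_yr = 7.0          # Cocos-Caribbean convergence rate (from the text)
cascadia_rate_cm_yr = cocos_rate_cm_yr / 3   # implied by "nearly 3x faster" (assumption)
print(f"Implied Cascadia rate: ~{cascadia_rate_cm_yr:.1f} cm/year")
```

Both quoted ratios hold up: 8/3 is indeed "nearly three times," and a ~7 cm/year convergence rate divided by three implies a Cascadia rate somewhere between 2 and 3 cm/year.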
Hazelnuts... a nut that has been planted around the world Eastern Filbert Blight (EFB) is a disease of Hazel shrubs and trees. As I explained in my previous article (What's in a name? Hazelnut vs Filbert), Hazels are also known as Hazelnuts or Filberts, and even Cobnuts. Eastern Filbert Blight is caused by Anisogramma anomala, a fungus originating in northeastern North America that infects Hazels (Corylus species). When growing on plants that are native to the area in which this fungus naturally occurs, the fungus is not a problem. The small cankers of a mild Eastern Filbert Blight infection. However, with the advent of international travel and the spread of cultivated plants, we soon realized that this fungus can be deadly to non-native Hazels (i.e. Hazels that are not native to the eastern United States and Canada). When these plants are infected, very large cankers grow around the trunk and slowly suffocate or “girdle” the plant. Most plants die within 5-10 years after infection, and plants that do not die suffer a significant decline in productivity. A Corkscrew Hazel (a named variety) infected with, and slowly dying of, EFB The problem is that most commercial orchards are composed of non-North American Hazels, typically varieties of the European Filbert (Corylus avellana). These plants are devastated by Eastern Filbert Blight. Many attempts to grow the tastier European Filbert in the eastern U.S. failed due to poor understanding of this disease; however, large and successful orchards of European Filberts have grown very well in the Pacific Northwest for over a hundred years. Unfortunately, in 1973, Eastern Filbert Blight was identified in the Pacific Northwest. How it got there, no one is certain, but we do know that it has been slowly spreading through commercial orchards since. There are very large campaigns funded by the U.S.
and Canada to identify and halt the spread of this disease, and there are fungicides available that can kill A. anomala. With all this information, what is the home grower to do? Fortunately, there are many new cultivars and hybrids that have been developed with partial to almost complete tolerance of, or resistance to, Eastern Filbert Blight. Treating trees with fungicides has a lot of unintended consequences; my recommendation is simply to plant trees resistant to the blight. Most of us are not going to be commercial Hazelnut growers. We don’t have acre upon acre of European Filberts, and we don’t have our sole source of income depending on the size of nut or quantity of harvest. Granted, some of the hybrids produce nuts that are smaller than the larger commercial hazelnuts, but when it comes to the long-term health of our bodies and our land, as well as the time spent trying to identify, prevent, and treat this disease, a smaller nut is not that big of a deal.
Education today is not what it was even five years ago, and not only because of technology’s impact. The introduction of e-books, online learning, and diverse electronic devices is rapidly changing the world around us, yet there are both old and new principles that still shape how we learn. As the changes embrace not only the students but the teachers and the education system as a whole, its future must be approached as a mixture of prognosis and recently implemented changes. Such a method makes it easier to analyze the different forms that education may take while remaining familiar, without drowning in controversies like the total elimination of the human factor. Education will always remain, in one form or another. A primary task is the implementation of existing changes to augment teachers’ roles as learning methods evolve. 10 Things That Will Shape The Future of Education During The Next 20 Years - Personalized Learning. The necessity to consider students’ personal peculiarities and reveal their potential has always been a primary challenge for educators. A classroom usually comprises students who have different levels of knowledge, which is why an individual approach may be more suitable: it helps students be more confident with their acquired knowledge and ensures that the learning process matches students’ abilities. - Free Choice. This method allows students to learn in a way that appears to be the most suitable for them. Essentially, it does not mean that learners can skip particular subjects or ignore specific knowledge. On the contrary, free choice means that students will be able to modify their learning process and focus on obtaining the skills and knowledge required for their future career. Moreover, students will be able to enrich their general studies with particularly useful additional materials or approaches.
Waldorf and Montessori education systems have already implemented this methodology, and it will gain even more popularity in the future. - Grading Changes. One can anticipate significant changes to the procedure of grading. Currently, the adopted rubrics often evaluate students’ achievements in a rather limited way, which is why the development of a more detailed and student-oriented grading system is expected. For example, Artificial Intelligence software will weigh the past achievements of each student to produce a meaningful criterion of success or failure. Of course, teachers will always be the final authority in making the decision, yet they should still have more evaluation tools. According to online education surveys, progressive American universities should choose flexible grading, where students discuss decisions with a teacher and reach a compromise, as they will have multiple opportunities for feedback. - Different Approach to Exams. Another important concern among educators is how exams will change in the near future. While it would be incorrect to assume that the current examination style will disappear completely, it is safe to predict that the usual quiz or question-answer model will transform into a more project-based environment. Since most course materials are currently stored in a digital domain, exam practices will shift from assessing students’ ability to memorize definitions to a focus on teamwork, projects, and field practice, where students can demonstrate real knowledge and how they could apply it to work tasks. As a result, this creates a healthy balance between actual employment simulation and knowledge of theoretical principles. - Even More Technology. Despite already holding an honored place, the use of digital technology and supplementary devices will only increase. A major reason is the growing number of online classrooms, which allow remote learning and grading.
While the practical part is explained during face-to-face interaction, the use of personal learning devices allows educators to monitor progress and track any changes a student makes. Considering that the young generation grows up with smartphones and social media as part of everyday life, education will make a serious effort to enter this field at even greater depth. - Personal Involvement. Both students and their parents will become more involved in course curriculum development, including the allocation of credit hours and even assessment methods. Letting students explain how they see diverse matters and choose what they would like to change will become prevalent. Even though conservative educators are not happy with such an idea, it is the only significant force that can make new initiatives work in practice. It is especially relevant in online education, where students have more options for shaping their learning around their future career plans. - Mentoring Roles. Modern schools already implement changes related to the widespread use of electronic devices and greater student independence. The role of mentors will only increase under these circumstances. An average student may have online access to a myriad of learning materials, but asking experienced educators for good advice will always be relevant. Technology is not a solution that can instantly solve all problems. Continuing support via real face-to-face interaction, where all of a student’s concerns and struggles can be discussed, will remain relevant in future education. - Cloud Learning. This form of education may be familiar to the majority of students because of smartphones and online activities where cloud computing is involved, whether it is sharing photos with the family, listening to music, or working on a college project that involves several participants.
By providing immediate access to information wherever it is needed, along with differentiated access privileges, the cloud learning method ensures collaboration in practice and helps reveal personal and professional skills among students. - Focus on Projects. There will be a tendency towards project-oriented tasks that help students put theoretical knowledge into practice and interact with others as they seek solutions. Whether it is business administration, nursing, or any other class, adopting projects as part of the educational process will help create situations where participants can easily sense the real-life challenges of each assignment. This way educators can help students see how they might manage their time, work on obtaining diverse skills relevant to real work, and develop strong leadership qualities. - Going Beyond Classrooms. One of the most important aspects that will change the future of education is the shift away from traditional methods of teaching and learning. The effectiveness of numerous online courses and language lessons via Skype has proved that the physical presence of students in a classroom is not obligatory. Additionally, not being limited to a particular location makes it possible to learn at one’s own pace while managing spare time, work, and family commitments. While it may raise concerns among old-school educators, such an approach is also crucial for disabled students who apply for online courses because visiting a university in person can be difficult. Guiding Principles for Future Education While students benefit from advancements in technology and learning on their own using their favorite devices, teachers are concerned with how to embrace all these changes, or are even reluctant to accept them. There is talk of AI replacing the importance of real teachers, and even concerns that machine learning and cloud computing might make educators in the classroom completely redundant.
Nevertheless, this is more of a philosophical issue, as the real task is to capture the benefits of any innovative method. It is crucial to reach an accommodation, because that is exactly what can help focus on every learner’s needs while implementing technology at the same time. Without a doubt, even ten years ago most school children started with the same curriculum, while modern schools take a different approach based on detailed evaluation. The benefits are obvious, yet it is not easy for an average teacher to make an immediate switch. Since technology provides students with immediate access to information, a true challenge for educators is to let the students learn on their own. Students must not only avoid getting lost in an endless data stream but also process acquired knowledge correctly. The integrity of students’ work may also become a concern for teachers in the future. With every aspect of life going online, how can they be sure their students didn’t simply pay to have a paper written? It is easy to do online even now, and it is only going to get easier. For this reason, governments should make sure that only legitimate services provide reference papers to be used as educational material. Only this way can students asking for help with their homework be safe from academic fraud. Another important concern is the anticipated change in the teacher’s position and authority if there is no classroom or face-to-face interaction with students. However, surveys among participants in the most popular online courses in various disciplines revealed that the instructor’s personality was one of the most important defining factors. The teacher’s role will never be undermined by the pace of technology. On the contrary, the availability of more supplementary tools will allow teachers to track the progress of each student and address an individual’s needs in time.
Since most schools and colleges in the United States are switching to smaller classes, the chance to focus on the individual qualities of each attendee will help teachers seek innovative ways to improve education by linking it with the practice and skills that must be used in professional environments. Hand In Hand With Changes While no one can say for sure that technology improves education, it cannot be denied that it has made education more accessible and varied than ever before. Teachers should not be afraid of the upcoming changes; instead, they should be ready to accept new perspectives. Moreover, technology will help educators gain useful experience and trust among students, and eventually become more qualified. Therefore, it is the task of teachers to work hard and teach students how to learn in a new environment while remaining the best role model for them.
Informed consent is the act of agreeing to allow something to happen, or to do something, with a full understanding of all the relevant facts, including risks, and available alternatives. That full knowledge and understanding is the necessary factor in whether an individual can give informed consent. This type of consent applies to many situations in life, including making decisions about medical care and legal issues, as well as entering into contracts. To explore this concept, consider the following informed consent definition. Definition of Informed Consent - Consent given only after having been informed of the facts, benefits, risks, and alternatives. Origin 1965-1970 U.S. medical-legal concept What is Informed Consent The law recognizes that a person can only legally consent to something, whether that is to allow something to occur, or to perform some act, if that person has been informed of, and understands, the facts of the situation. It is only with a full comprehension of the risks and benefits of the decision, as well as an understanding of the possible alternatives, that any individual can consider whether an action would be in his best interests. Obtaining informed consent is especially important in the medical field, as failing to receive such approval leaves medical professionals liable for injuries that may occur. Informed consent is also vital when entering a contract: if one party is not fully informed, or if all information has not been disclosed, the uninformed party may be able to back out of the contract. Informed Medical Consent In a doctor’s office, hospital, or other medical setting, healthcare providers are required to obtain informed medical consent before treating a patient. In general, informed medical consent means advising the patient of the reasons the treatment is needed, the benefits of having it done, the risks of harm that may occur, and any alternative treatments that may be considered.
There are legal requirements for obtaining informed medical consent, as well as its documentation, though they vary from state to state. Typically, the information presented to the patient or legal guardian must be fully understood. The patient and medical professional share this responsibility, since the doctor does not automatically know what the patient does and does not understand. Informed medical consent must be given willingly, as it is not valid if obtained under pressure or duress. In most states, it is the responsibility of the physician treating a patient to confirm that informed consent has been obtained. When discussing the course of treatment, the physician should disclose: - The patient’s diagnosis - The nature and reason for the treatment or procedure - The benefits of the procedure or treatment - The risks of the procedure or treatment - The alternative treatments available - The risks and benefits of forgoing the proposed treatment or procedure The patient, or legal guardian, must sign and date the informed consent documents, and be given a copy. There are, of course, certain situations in which it is not required of healthcare professionals to obtain consent before acting. These are considered emergency situations, in which the patient’s health or safety may be at risk if treatment is delayed because consent cannot be obtained. Such a situation occurs if the patient is unconscious or otherwise unable to understand or acknowledge consent. It also applies to seriously ill or injured minor patients whose parents or legal guardian cannot be reached beforehand. Even in emergency situations, medical personnel are generally allowed only to provide the level of treatment that is necessary to alleviate the worst of the problem, until proper consent can be obtained. Risks of Treatment that Must Be Disclosed When obtaining informed consent, doctors do not have to inform the patient of each and every possible risk, but they must advise the patient of the important risks.
In most states, one of two tests is used to determine just what risks are required to be disclosed. - Would other doctors disclose that risk? If a medical case becomes the subject of a civil lawsuit, this issue would boil down to whether the undisclosed risk was statistically important enough to be disclosed, and whether it is commonly disclosed by other doctors in similar circumstances. This question would be answered by expert witnesses for each party, who are asked in court whether they would have personally informed a patient of the risk in question. - Would another reasonable patient have made a different decision if informed of that particular risk? This question can only be answered by comparison with a patient having the same medical condition and medical history as the plaintiff, asking whether he would have made a different choice regarding the treatment had he been advised of that particular risk. Informed HIPAA Consent The Health Insurance Portability and Accountability Act of 1996, widely known as “HIPAA,” establishes certain standards in the healthcare industry. HIPAA protects workers’ health insurance benefits when they lose or change their jobs, and places restrictions on how information can be shared with researchers conducting studies. Among laypeople, HIPAA is best known for its privacy restrictions on patients’ protected health information (“PHI”). All healthcare providers, and other entities that use the personal information of patients, are required to obtain a signed HIPAA consent form before they are allowed to release or share any patient’s information. The actual type of information protected includes the status of the patient’s health, any information related to the provision of healthcare, and even payment information that can be linked to an individual. The HIPAA privacy rule bars providers from sharing any information regarding individual patients to research studies without first obtaining a signed consent form from the patient.
To properly obtain informed HIPAA consent, the form must advise the patient how their health and other personal information will be used, and how it will be kept private. Individuals who believe their personal information has been improperly handled can file a complaint with the U.S. Department of Health and Human Services. HIPAA privacy regulations also define conditions when health plan providers are permitted to disclose protected health information of a patient. For example, if certain information is vital to public safety, such as certain communicable diseases, the provider must disclose a patient’s status to the U.S. Department of Health and Human Services. According to HIPAA privacy regulations, a valid consent form must contain the following specific elements: - It must be written in plain language that any reasonable patient can understand - It must inform the patient that his information may be used for treatment, payment, or future care - It must inform the patient of his right to review the form before signing - It must indicate the patient’s right to request restrictions on release of his health information - It must inform the patient that he has a right to revoke the consent in writing, but that actions taken prior to the revocation are not subject to it - It must include the patient’s signature and date - The original signed form must be kept by the provider for a minimum of six years Informed Financial Consent While the U.S. does not have specific laws requiring informed financial consent, as many other nations have, every person has the right and responsibility to ask questions about how transactions, services, and even healthcare will affect their bank account. In the case of making a purchase, or entering into a contract, the person has the right to ask questions about the total amount he will be required to pay, including any interest or other fees. It is highly recommended that a written receipt be obtained for every transaction.
As it applies to medical care, informed financial consent involves asking questions about the costs of services beforehand, when possible. In the U.S., hospitals and other medical providers are required to send detailed bills after the care has been provided. As a patient, however, questions about the potential costs of tests, treatments, medications, supplies, and other expenses can and should be asked. Because it is not required that healthcare providers obtain “informed financial consent” before ordering expensive tests, medications, and other treatment options, the patient himself is the last line of defense, so to speak, when it comes to keeping his costs down. This is especially important if the patient is uninsured. Another issue to be aware of is whether more than one doctor will be involved in treatment. If so, the patient will receive bills from several sources, such as each of the doctors’ billing services, imaging (x-rays, CT scans, etc.), laboratory, and other departments. Informed Consent Form An informed consent form is used to protect doctors and other professionals from being held liable in the event something goes wrong. Each facility or entity may design its own forms, though there are certain elements that should be included to ensure their effectiveness should they be referred to later. Consent forms are used by a variety of industries, though they are most commonly and widely used in the medical field. An informed consent form for medical use is generally more detailed and specific than forms for other purposes. A well-made informed consent form clearly outlines the service or treatment to be performed, as well as the risks and benefits involved. The language should be clear and easy to understand, and the form should be printed on plain paper in a font that is comfortable to read.
The provider of the service or treatment should be clearly identified on the form, including contact information, and the client or patient’s name should be clearly printed. It is important to provide enough room for the patient to fill in any information required, such as address and phone number, and the signature and date blocks should be clearly labeled. While some providers use the same informed consent form for adults and children alike, it is a good idea to provide a separate consent form for children, in which the parent or guardian is informed of the required information, and the signature and date block allows the person signing the form to clearly state his or her relation to the child. Examples of Informed Consent Problems Informed consent is not just another form a patient or client needs to sign so the provider can get on with his job. Unfortunately, in a world that relies on pages and pages of information, many of which require a signature or acknowledgement, this has become a serious issue. The fault lies not only with the providers, but also with the patients and clients who are unwilling to wade through a complicated, and lengthy, document that they are unlikely to understand, before signing and moving forward. The following are examples of informed consent problems that commonly occur. When it comes to providers’ responsibilities in obtaining informed consent, there are certain things to keep in mind. - Method of Disclosing Risks – the most likely risks, including the most severe risks, such as death or brain damage, should be specifically disclosed, though a lengthy description of how these might occur is not necessary. When providers skip through the risks, without encouraging a discussion about the procedure and the risks, the patient or client cannot truly give informed consent. - Skipping the Details – a surprisingly large number of doctors and other professionals are too casual about the details of properly obtaining informed consent.
Many fail to explain the risks with a healthcare professional present as a witness, or to have the patient and/or witness sign and date the necessary consent forms. Doctors are also cavalier about signing the consent documents as the treating physician, or fail to fully complete the patient’s information. - Asking a Sedated Patient to Give Consent – it is not uncommon for patients to be given a mild sedative, or a narcotic pain reliever, to make them more comfortable while arrangements are made for further treatment, such as surgery or admission to the hospital. This makes the patient legally incapable of giving informed consent, so consent forms should be signed prior to such sedation, or they should be signed by the patient’s nearest relative, such as his spouse or parent. In this case, the name and relationship should be clearly documented. - Obtaining Consent for Only One Procedure, When a Second Procedure May be Needed – if informed consent is obtained for a procedure or surgery which may bring about additional information that makes it necessary to do an additional procedure, the second procedure cannot be performed without consent. Emily has been having severe abdominal pain and bleeding, and her OB/GYN has determined that she has fibroid tumors that must be removed. As she is being prepared for the surgery, the staff has her sign a consent form for the fibroid removal, but there is no mention on the form of a hysterectomy. If, while the surgeon is operating, he determines that a hysterectomy is necessary, he has no consent for the procedure. Doing the procedure without explicit consent exposes the doctor, and the hospital, to serious liability if something goes wrong, or if Emily is upset because she may have chosen an alternative treatment. If he decides not to proceed with the surgery, with the thought of rescheduling a hysterectomy, Emily is exposed to the additional risks of another major surgery.
This example of informed consent could have been made legal had the doctor discussed the possibility of needing to do a hysterectomy with Emily, as well as the potential risks and alternatives, beforehand. Both procedures could have been included on the consent form for Emily’s signature. Malpractice Lawsuit Over Failure to Obtain Informed Consent In 1964, Ralph Cobbs was treated by his family doctor, Jerome Sands, for an ulcer. Even with treatment, his symptoms worsened to the point that Dr. Sands deemed it necessary to perform surgery, a decision which was confirmed by another doctor. Dr. Sands advised Cobbs of the general risks of undergoing general anesthesia for surgery, and what would be done in the surgery, though he failed to mention risks of the surgery itself. A two-hour surgery repaired the ulcer, though after Cobbs had been discharged eight days later, he again began having severe abdominal pain. Cobbs returned to the hospital and went into shock, requiring emergency surgery. He was experiencing bleeding caused by an artery near his spleen that had been severed, and the spleen had to be removed. There were further complications, including the too-rapid absorption of an internal suture, and other issues. About a month later, Cobbs experienced another gastric ulcer, so doctors operated and removed half of his stomach. Cobbs filed a medical malpractice lawsuit against the surgeon as well as the hospital, claiming that the surgeon failed to inform him of the serious risks involved in the surgery on his stomach. Cobbs claimed that, had he known of those risks and potential alternative treatments, he might have made a different choice. At trial, the jury ruled in favor of Cobbs, awarding him over $68,000 in damages between the hospital and doctor.
The defendants appealed the trial court’s decision on the issues of (1) the jury’s disregard of expert testimony in determining the doctor’s actions to be negligent, and (2) whether the jury had been properly instructed on the doctor’s duty to properly obtain informed consent. The appellate court reversed the judgment of the trial court, remanding the case to the trial court for a new trial. The appellate court specifically suggested principles for creating guidelines in determining the issue of informed consent. Related Legal Terms and Issues - Civil Lawsuit – A lawsuit brought about in court when one person claims to have suffered a loss due to the actions of another person. - Damages – A monetary award in compensation for a financial loss, loss of or damage to personal or real property, or an injury. - Defendant – A party against whom a lawsuit has been filed in civil court, or who has been accused of, or charged with, a crime or offense. - Liable – Responsible by law; to be held legally answerable for an act or omission. - Medical Malpractice – Any unskilled, improper, or negligent treatment of a patient, or failure to provide appropriate treatment, by a healthcare professional, which results in injury to the patient. - Plaintiff – A person who brings a legal action against another person or entity, such as in a civil lawsuit, or criminal proceedings.
Computer Technology for Disabled People
In human-computer interaction, computer accessibility (also known as accessible computing) refers to the accessibility of a computer system to all people, regardless of disability or severity of impairment. It is largely a software concern; when software, hardware, or a combination of hardware and software is used to enable use of a computer by a person with a disability or impairment, this is known as assistive technology. There are numerous types of impairment that impact computer use. These include: - Cognitive impairments and learning disabilities, such as dyslexia, ADHD or autism. - Visual impairment such as low vision, complete or partial blindness, and color blindness. - Hearing impairment including deafness or hard of hearing. - Motor or dexterity impairment such as paralysis, cerebral palsy, or carpal tunnel syndrome and repetitive strain injury. These impairments can present themselves with variable severity; they may be acquired from disease or trauma, or may be congenital or degenerative in nature. Accessibility is often abbreviated to the numeronym a11y, where the number 11 refers to the number of letters omitted. This parallels the abbreviations of internationalisation and localisation as i18n and l10n respectively.
Special needs assessment
People wishing to overcome impairment in order to be able to use a computer comfortably and productively may need a “special needs assessment” by an assistive technology consultant to help them identify and configure appropriate assistive hardware and software. Where a disabled person is unable to leave their own home, it is possible to assess them remotely using remote desktop software and a webcam. The assessor logs on to the client’s computer via a broadband Internet connection. The assessor then remotely makes accessibility adjustments to the client’s computer where necessary and is also able to observe how they use their computer.
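The numeronym convention mentioned above (a11y, i18n, l10n) is easy to illustrate in code. A minimal sketch; the `numeronym` helper is purely illustrative and not part of any standard library:

```python
def numeronym(word: str) -> str:
    """Abbreviate a word as: first letter + number of omitted
    middle letters + last letter (e.g. accessibility -> a11y)."""
    if len(word) <= 3:
        return word  # too short to be worth abbreviating
    return f"{word[0]}{len(word) - 2}{word[-1]}"

print(numeronym("accessibility"))         # a11y
print(numeronym("internationalisation"))  # i18n
print(numeronym("localisation"))          # l10n
```

The count in the middle is always the word's length minus the two letters that are kept, which is why "accessibility" (13 letters) becomes a11y.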
Considerations for specific impairments
Cognitive impairments and illiteracy
The biggest challenge in computer accessibility is to make resources accessible to people with cognitive disabilities – particularly those with poor communication skills – and those without reading skills.
Visual impairment
Another significant challenge in computer accessibility is to make software usable by people with visual impairment, since computer interfaces often solicit input visually and provide visual feedback in response. For individuals with mild to medium vision impairment, it is helpful to use large fonts, high DPI displays, and high-contrast themes and icons, supplemented with auditory feedback and screen magnifying software. In the case of severe vision impairment such as blindness, screen reader software that provides feedback via text to speech or a refreshable Braille display is a necessary accommodation for interaction with a computer. About 8% of people, mostly males, suffer from some form of colour-blindness. In a well-designed user interface, colour should not be the only way of distinguishing between different pieces of information. However, the only colour combinations that matter are those that people with a deficiency might confuse, which generally means red with green, and blue with green. An example in Web accessibility is a set of guidelines and two accessible web portals designed for people developing reading skills: peepo.com (try typing a letter with your keyboard for more) and peepo.co.uk, with enhanced graphics, unique style controls and improved interactivity (requires an SVG-capable browser).
Motor and dexterity impairments
Some people may not be able to use a conventional input device, such as the mouse or the keyboard. Therefore it is important for software functions to be accessible using both devices; ideally, software uses a generic input API that permits the use even of highly specialized devices unheard of at the time of software development. 
Keyboard shortcuts and mouse gestures are ways to achieve this. More specialized solutions like on-screen software keyboards and alternate input devices like switches, joysticks and trackballs are also available. Speech recognition technology is also a compelling and suitable alternative to conventional keyboard and mouse input, as it simply requires a commonly available audio headset. The astrophysicist Stephen Hawking is a famous example of a person with a motor disability. He uses a switch, combined with special software, that allows him to control his wheelchair-mounted computer using his remaining small movement ability. This performs as a normal computer, allowing him to research and produce his written work, and as a Voice Output Communication Aid (VOCA) and environmental control unit.
Hearing impairment
While sound user interfaces have a secondary role in common desktop computing, usually limited to system sounds as feedback, software producers take into account people who can’t hear, whether because of a disability, noisy environments, silence requirements or lack of sound hardware. System sounds such as beeps can be substituted or supplemented with visual notifications and captioned text (akin to closed captions).
Inclusive Education In India – Need, Concept, And Challenges
The aim of inclusive education (IE) is to educate children with disabilities and learning difficulties in the same classroom as other children. It aims to maximize all students’ potential regardless of their strengths and weaknesses and bring them together in one classroom and community. Inclusion is among the most effective ways to promote a tolerant, inclusive society. It is estimated that 73 million children of primary school age were out of school in 2010, down from a high of 110 million during the mid-1990s, according to new figures from the UNESCO Institute for Statistics (UIS). About 80% of the Indian population lives in rural areas without facilities for special schools. An estimated 8 million children are not attending school in India (MHRD 2009 statistics), the majority of whom are marginalized by various factors including poverty, gender, disability, or caste. Today, what do we need to accomplish the goal of inclusive education? In what ways will an inclusive environment benefit children with disabilities? What is the most effective and efficient way to deliver quality education for all children? In order to achieve universal access to education, inclusive schools must serve all the children in every community, and the government must manage the classrooms in which all children are included. Keeping these questions in perspective, this article discusses the importance of inclusive education, its challenges, and measures to ensure that inclusive education is implemented effectively in India.
What Is Inclusive Education?
In many countries, school children with special needs are not accepted in mainstream schools. These children either have learning disabilities or, more frequently, belong to disadvantaged sections of society like the scheduled castes and tribes or other backward classes (OBCs). 
They have limited access to education. In such cases, any education that is provided to these children is called ‘Inclusive Education’. These children are first identified by a screening process and then admitted to regular schools in a special class or unit. The special teacher assesses their problems and provides the lessons necessary to overcome them. The schools will do whatever is necessary to ensure that the child with a disability completes her education alongside other students.
Need For Inclusive Education
By educating children with special needs in mainstream schools, we can ensure that these children get the opportunity to live normal lives like other students. They are also provided opportunities to develop autonomy and self-esteem, as well as skills necessary for daily living. This provides a talent pool of skilled human resources that can be used to meet the future needs of society. For instance, a person with a hearing impairment can work as an operator in a bank handling deposits and withdrawals of money. Similarly, a blind child can be provided opportunities in the field of computers. Inclusive education also helps to create awareness about human rights and contributes to social justice. In countries like India, where the literacy rate has reached 74%, more and more persons from unprivileged sections of society are getting educated. In the same way, children with special needs are also becoming aware that they have the same right to education as other children. They also demand their right to be educated along with other students in regular schools. It is not an easy job to provide inclusive education in India, for various reasons.
Challenges of Inclusive Education
There are some challenges in providing inclusive education to children with special needs, which may be due to financial problems, lack of training facilities, and infrastructure. 
But I think that the most important challenge is the mindset of teachers and parents towards these children. They have to understand that these children also have the right to a proper education. Teachers and school authorities should provide sufficient support facilities so that these children feel comfortable in regular classrooms. In India, we have 650 million young people whose future depends on how inclusive education is provided to them and how their talents can be utilized for nation-building activities.
How To Implement An Inclusive Education System?
There are three major components of this inclusive education system. The first is to identify children who need special attention. We should have a proper screening process through which we can recognize them as children with special needs. Only then can efforts be made to provide them opportunities for learning in schools along with other students of their age. Secondly, the teachers in schools should be trained to provide special education. Where schools lack trained teachers, the inclusion of shadow teachers is much needed. The last, but not least, component is continuous support, so that these children do not feel different from other students.
Successful Cases Of Inclusion In Indian Schools And Colleges
There are many successful stories of inclusion in India which prove that it is possible to educate a person with special needs along with other students. Some Indian colleges also have full-fledged training facilities for disabled students, like the National Institute of Open Schooling. This institute runs a course called ‘Life Enrichment Programme’ which offers courses on personality development, communication development, and mathematics. It also conducts vocational training in fields like typing, bookbinding, cooking, and flower making for disabled students. Similarly, there are other colleges that provide education to persons with disabilities. 
Benefits Of Implementing A Successful Inclusion Plan
A successful inclusion plan can benefit students with disabilities, teachers, and the community at large, resulting in a more inclusive society.
Benefits To Students With Disabilities:
- Increased student/teacher interactions
- Opportunities to be involved in community activities such as field trips and other social events like dances, which would not normally be accessible when school personnel keep the traditional role of “instructor”
- Opportunities to socialize with other students outside of school hours
- Increased opportunities for service-learning and community engagement, which could increase community awareness about diverse abilities and demystify disabilities.
Benefits To Teachers:
- Supports student success
- Increased support time for instruction
- Concrete goals for students with disabilities
- Increased opportunities for mentoring and coaching of general education teachers
- Opportunities to connect with students’ families, which can offer a great support system in addition to the school environment.
Benefits To The Overall Community:
- Equal access to quality education programs at all levels of schooling
- Employment opportunities for persons with disabilities, which will offer increased social interaction
- Increased exposure to persons with disabilities, allowing for more positive attitudes about disability
- The ability to respond effectively to the needs of children and adults with disabilities in their communities. 
Suggestions For Policy Changes That Can Promote Inclusion At All Levels Of Indian Education
- Increase government funding for inclusive schools and programs
- Implement diversity awareness training for general teaching staff
- Provide increased support through enrichment activities and one-to-one assistance in the classroom and during after-school hours
- Allow children to make choices about physical education and electives outside of the standard curriculum
- Make school policies more flexible to allow for a variety of learning styles, facilitating individualized learning and allowing students to be evaluated on individual performance
- Provide increased access to assistive technology, such as computer software for spelling and math practice, as well as classroom technologies like digital projectors and document cameras
- Create inclusion committees that can represent and advocate for the needs of students with disabilities in public schools, private schools, and colleges.
Summary: Indian educators are faced with a challenge to reverse years of exclusivity and adopt a philosophy of inclusion so as to ensure an appropriate education for all students, including those who have disabilities.
A Call To Action – What You Can Do To Help Make This Change Happen!
- If you are an individual with a disability, share your story and how it has been challenging or has helped you reach success
- Promote education within your own community through means such as media campaigns, volunteering at schools, or joining a parent/teacher association
- Advocate for increased government funding to support more inclusive programming at schools and colleges; participate in parent/teacher associations and attend school meetings
- Provide authentic opportunities for students with disabilities so that they are not limited to a “special education” role but can take part in activities on an equal basis with their peers. 
- If you are a professional who works in Indian education, consider adding an element of inclusion to your programs, reflect on how you can shift the way you view disabilities (from a connotation that one is “not as good” as others to a shared experience), and share these experiences with others
- Hold public forums, or include this initiative in guided discussions at professional development sessions, so that it spreads to more schools
- If you are an administrator or member of a school board, be aware of the needs and views of your staff, get involved in local educational initiatives, and promote public awareness about inclusive opportunities that will benefit all students
- Be aware of and stand against bullying or derogatory language about anyone, for any reason, to help create a climate of acceptance
- Promote awareness of the importance of including everyone in educational opportunities, both for students and staff.
Summary: As stated by one parent whose child was included successfully at a school that was not previously inclusive, “Children should be able to go to school and play with other children. If we give them a different status, they will feel bad about themselves, and it will affect their whole attitude toward life.” Thus it is best for everyone if Indian education can take the steps necessary to widen the scope of opportunities available to its students. Inclusive education is a pressing need in today’s world, where one out of every ten children is born with special needs. According to Census 2011, there are 19.6 million children with special needs in India, which accounts for 7.1% of the total population of the country. 
Inclusion is a practice where students who cannot study in regular classrooms due to their different abilities and disabilities get equal opportunities for learning along with other students at schools and colleges. This helps them integrate with mainstream society and also get trained to become active citizens of the country. So far, this article has discussed how India is dealing with children with special needs and, in particular, ‘Inclusive Education’. Inclusion is a much-talked-about topic these days across India, as many young people are affected. Now, it’s time for your opinion: How can India provide inclusive education to her children with special needs? What are its benefits? Write in the comments below!
Text: A. Hajnal, P. Hamburger: Set Theory + handouts
The goal of the course is twofold. On the one hand, we get an insight into how set theory can serve as the foundation of mathematics, and on the other hand, we learn how to use set theory as a powerful tool in algebra, analysis, geometry and even number theory.
Notation, empty set, union, intersection, complement, subset, power set, equality of sets, N, Z, Q, R, countable and uncountable sets.
Elementary properties of cardinal numbers: Equivalence of sets, cardinals, the Cantor-Bernstein 'Sandwich' Theorem and its consequences, |A| < |P(A)|.
'Oops': Russell's Paradox. The axiomatic approach: Zermelo-Fraenkel Axioms.
More on cardinal numbers: Calculations with cardinals, 2^ω = c (the cardinality of the continuum), there are c many continuous functions, 1 · 2 · 3 ··· = c, the cardinal numbers ω, 2^ω, 2^(2^ω), etc., König's Inequality.
Ordered sets: Definition, isomorphism, initial segment, initial segment determined by an element, order type, well ordered set.
The crucial notion, ordinal numbers: Definition, properties, calculations with ordinals.
The heart of the matter: The Well Ordering Theorem*: we can enumerate everything, the Theorem of Transfinite Induction and Recursion, the Fundamental Theorem of Cardinal Arithmetic: κ² = κ for every infinite cardinal κ, Zorn's lemma.
Applications (as many as time permits): Every vector space has a basis, Hamel basis*, Cauchy's Equation, Dehn's Theorem about decompositions of geometric bodies, the Long Line, f(x) = x is the sum of two periodic functions, the 2-point Sierpinski Theorem* and the Continuum Hypothesis, throwing darts at the plane, decomposition of R³ into circles, Goodstein's Theorem*, the Problem of 13 Numbers, nonstandard analysis (infinitesimal numbers and how to make dy/dx precise), how to define the limit of nonconvergent sequences (Banach limits and ultrafilters). 
* The theorems marked by asterisk are, in the instructor's opinion, among the ten most beautiful results of mathematics.
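For reference, the key cardinal-arithmetic facts from the syllabus can be set in standard notation (my transcription of the identities above, not part of the original handout):

```latex
\[
|A| < |\mathcal{P}(A)|, \qquad
2^{\aleph_0} = \mathfrak{c}, \qquad
\kappa \cdot \kappa = \kappa \quad \text{for every infinite cardinal } \kappa .
\]
```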
Researchers at the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) are developing a morphing wing trailing edge that can be smoothly transformed into any shape and will make conventional flaps redundant. The flaps on the wings of today’s commercial airliners are actuated via a complicated mechanism. Their arrangement and the resulting gap when they are extended compromise the aerodynamics, increase fuel consumption and contribute to inflight noise. The new technology, on the other hand, is flexible, its movement being based on that of carnivorous plants. This enables the gap between the wing and the flap to be eliminated.
Efficiency through pressure
When looking for a suitable technical solution for deforming the trailing edge of a wing during flight, the Venus Flytrap proved to be a good source of inspiration. This is something of a surprise, but only at first glance. “The carnivorous Dionaea muscipula needs to be able to close its trapping leaves very quickly to catch its flying insect prey,” says Benjamin Gramüller from the DLR Institute of Composite Structures and Adaptive Systems. “It does this by changing the pressure in the leaf cells and using a leaf-shape geometry optimised through evolution.” Research has shown that the Venus Flytrap builds up tension through water pressure. When triggered – when a fly enters the trap – this can be quickly discharged. The trap then snaps shut. “We are now using the principle behind the plant’s movement for aeronautics applications,” adds Gramüller.
Movement in two cell layers
The DLR researcher and his colleagues have translated the cell system’s idea, using pressure to assume a desired shape, to the trailing edge of a wing. To do so, they have developed the world’s first flap demonstrator, which is operated with compressed air and can flexibly assume aerodynamic shapes for cruising or landing. 
The plastic cells in the demonstrator have different sizes to form the appropriate shape for the trailing edge of the wing. Two layers of cells lie one on top of the other. “To raise the edge, we pressurise the lower cell layer, and to lower it, we pressurise the upper one,” explains Gramüller. “The compressed air can be easily supplied from the existing compressed air system in an aircraft.” The DLR researchers have already been able to use the new flight technology to demonstrate that the desired flap shapes for take-off and landing can be achieved depending on how the compressed air is applied. The aircraft is able to maintain itself in the air at low speed – for example, during landing – thanks to the increased lift coefficient from the extended flaps. The flaps increase the curvature of the wings during slow flight and hence compensate for the loss of speed.
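The closing point, that extended flaps let the aircraft fly slower by raising the lift coefficient, follows from the standard lift equation L = ½ρv²S·C_L. A quick numerical sketch (all figures are illustrative assumptions, not DLR data):

```python
def required_C_L(weight_N, rho, v, S):
    """Lift coefficient needed so that lift equals weight at airspeed v."""
    return weight_N / (0.5 * rho * v**2 * S)

rho = 1.225       # air density at sea level, kg/m^3
S = 122.6         # wing area of a typical narrow-body airliner, m^2 (assumed)
W = 65000 * 9.81  # weight of a 65-tonne aircraft, N (assumed)

cl_fast = required_C_L(W, rho, 230.0, S)  # ~230 m/s, cruise-like speed
cl_slow = required_C_L(W, rho, 70.0, S)   # ~70 m/s, landing approach

# Lift scales with v^2, so flying about 3.3x slower requires roughly
# 10.8x the lift coefficient - hence high-lift flaps at landing.
print(round(cl_slow / cl_fast, 1))
```

The v² dependence is the whole story: the slower the approach, the more curvature (and hence C_L) the wing must provide.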
The Earth is plunging through the debris trail of Comet Swift-Tuttle right now. It happens every August. The debris consists mostly of rocks about half an inch across, tumbling through space along the comet’s trajectory. The rocks are small, but orbital velocities impart tremendous kinetic energy. When those pebbles strike Earth’s tenuous upper atmosphere at over 12 miles per second, the resulting fiery display of shooting stars is something we call the annual Perseid meteor shower. This year the Perseids peak on the night of Aug. 12, though you can see them streaking across the sky for many days before and after the peak. Comets are the primordial remnants of the early solar system. They’re giant dirty snowballs, made of ice mixed with dust and rock, averaging six miles in diameter. Billions of comets orbit the distant reaches of the solar system; of these, we know of around 3,700 with highly elongated orbits that periodically bring them into our neighborhood, the inner solar system. Halley’s Comet is the one most people have heard of. It has a 75-year orbit — look for it next in 2061. Swift-Tuttle, with an orbit of 133 years, passed by in 1992. It won’t return until 2126. Comets like Halley and Swift-Tuttle spend most of their orbits frozen solid. It’s very cold way out there. But as they make their relatively short dash through the inner solar system, the Sun’s heat grows more intense. At around three times Earth’s distance from the Sun, the exposed ice on the surface of the comet begins to sublimate — that is, it transitions directly from solid to gas. This water vapor trails behind the comet, along with little rocks and countless specks of dust released by the sublimating ice. Sunlight illuminates this debris trail. If the trail is large and reflective enough, the comet becomes a naked-eye object during the weeks before and after the perihelion (the point when it’s closest to the Sun). 
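The "tremendous kinetic energy" of those pebbles is easy to estimate from KE = ½mv², using the article's figures of a half-inch rock at 12 miles per second (the rock's density and spherical shape are my assumptions):

```python
import math

diameter_m = 0.5 * 0.0254                    # half an inch, in metres
radius_m = diameter_m / 2
volume_m3 = (4 / 3) * math.pi * radius_m**3  # treat the pebble as a sphere
density = 3000                               # kg/m^3, typical rocky material (assumed)
mass_kg = density * volume_m3                # comes out to about 3 grams

v = 12 * 1609.34                             # 12 miles per second, in m/s
kinetic_energy_J = 0.5 * mass_kg * v**2

# A few grams of rock at this speed carries roughly 600 kJ,
# comparable to the energy released by about 0.14 kg of TNT.
print(f"mass = {mass_kg * 1000:.1f} g, KE = {kinetic_energy_J / 1000:.0f} kJ")
```

Squaring the velocity is what makes even sand-grain meteors visible: speed matters far more than mass.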
After the comet rounds the Sun and starts back towards the outer solar system, the temperature plummets, sublimation ceases, and the comet goes dark again, having permanently lost some of its mass. If a comet’s trajectory intersects Earth’s orbit, then we pass through its debris trail once every year. These annual smash-ups are called meteor showers. The comet debris strikes our atmosphere at extremely high speed. Friction heats the dust grains and little rocks to incandescence and leaves a glowing trail of ionized air molecules. We call this flash of light streaking across the sky at an altitude of about 60 miles a meteor or shooting star. The comet debris doesn’t survive atmospheric entry. It’s too small and vaporizes in the intense heat. Millions of meteors strike our atmosphere every day. Most are tiny grains of sand, and their fiery demise is faint and hard to see. But there are larger rocks out there, and on any night of the year you can see a shooting star every few minutes or so. What makes a meteor shower different is the frequency, brightness, and orientation of the meteors. Under favorable conditions with a moonless sky, you can expect to see 90 Perseids per hour. They will be brighter and more dramatic than your average shooting star. Many will leave lingering trails; some will explode with a dramatic flash. If you trace the Perseids’ path back with your eye, they will all appear to originate from the same part of the sky. The constellation that happens to be in that part of the sky gives this meteor shower its name: the Perseids originate in the constellation Perseus. (You know him. He’s the one who slew Medusa with the old reflection-in-the-shield trick.) The Leonids, another well-known meteor shower and the debris of Comet Tempel-Tuttle, occur every November. This year, unfortunately, a nearly full moon will obscure the fainter Perseids. Even so, you can expect to see about 45 per hour if the sky is clear. 
Perseus is low in the northeast, but you don’t need to find him to see his meteors. Perseids can appear anywhere; just look up and be patient. The best time is after midnight and right before dawn, when Earth’s rotation points those of us on the East Coast directly towards the debris stream. But get out there whenever you can, even if it’s not the peak night. You never know when we might pass through an unusually dense filament of debris, turning a light shower into a heavy rain – or even a downpour – of meteors. Clear skies! Readers can contact the writer with astronomy questions at [email protected].
1. Set up your apparatus as shown in the diagram using a rectangular block.
2. Shine the light ray through the glass block.
3. Use crosses to mark the path of the ray.
4. Join up the crosses with a ruler.
5. Draw on a normal where the ray enters the glass block.
6. Measure the angle of incidence and the angle of refraction and add these to your results table.
7. Comment on how the speed of the light has changed as the light moves between the mediums.
8. Repeat this for different angles of incidence and different glass prisms.
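Steps 6-7 can be checked numerically: Snell's law gives the refractive index as n = sin(i)/sin(r), and the speed of light in the glass as v = c/n. The angles below are example measurements, not real data:

```python
import math

angle_i = 45.0   # angle of incidence, degrees from the normal (example value)
angle_r = 28.0   # angle of refraction, degrees (example value)

# Snell's law: n = sin(i) / sin(r)
n = math.sin(math.radians(angle_i)) / math.sin(math.radians(angle_r))

c = 3.0e8        # speed of light in air/vacuum, m/s
v_glass = c / n  # the light slows down inside the denser medium

print(f"n = {n:.2f}, speed in glass = {v_glass:.2e} m/s")
```

With these example angles n comes out near 1.5, typical of glass, and the speed in the glass is about two-thirds of its speed in air.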
The lego race At the end of the 17th century, Gottfried Wilhelm Leibniz (1646-1716) and Isaac Newton (1643-1727), independently of each other, invented a brilliant mathematical tool: infinitesimal calculus, or differential and integral calculus. This is an incredibly efficient crystal ball to predict the future, provided the system in question is governed by a differential equation. This second chapter is an introduction to the subject in the Lego world. How can we define the speed of a lego man that walks? The average speed is the ratio of the distance travelled to the time taken. With that we can calculate the average speed for each step. But what about a driving car? The idea here is to consider the motion of a car as a succession of small steps, so small that they cannot be noticed. This is the basis for the derivative, or differential calculus. Imagine a flowing river. For each point of the river, it is possible to calculate the speed of the water at that point. We then take a drawing of the river and draw an arrow at the point in question. Its length indicates the speed, and its direction indicates the direction of the flow. Such an arrow is called a vector, and we have such a vector for each point of the river. Mathematicians call this a vector field. Integral calculus is the opposite of differential calculus. The task is now to calculate trajectories in a given vector field. The film shows how lego men moving through a vector field have no choice but to follow a predetermined path. This is known mathematically as the Cauchy-Lipschitz theorem and summarises the concept of determinism: with a given vector field, and a given starting position, there is a unique trajectory starting from that point, and this trajectory is tangent everywhere to the velocity vectors. Determinism as we have defined it has its limits, as we can show with a simple example. 
In 1879, the physicist James Clerk Maxwell (1831-1879) insisted on the importance of initial conditions for physical phenomena. « There is a maxim … that the same causes will always produce the same effects [...] There is another maxim which must not be confounded with the first, which asserts that “like causes produce like effects”. This is only true when small variations in the initial circumstances produce only small variations in the final state of the system. In a great many physical phenomena this condition is satisfied; but there are other cases in which a small initial variation may produce a very great change in the final state of the system. » At the end of this chapter we see our lego men flying in their small spacecraft. The images should convince you that now, in three dimensions, the situation can become very complicated.
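The Cauchy-Lipschitz picture from this chapter (the same vector field plus the same starting point gives the same trajectory) can be sketched numerically with Euler's method; the rotational field below is an illustrative choice, not one from the film:

```python
def field(x, y):
    """Velocity vector at point (x, y): a simple circular flow around the origin."""
    return -y, x

def trajectory(x0, y0, dt=0.001, steps=10000):
    """Follow the flow by small Euler steps, each tangent to the local vector."""
    x, y = x0, y0
    for _ in range(steps):
        vx, vy = field(x, y)
        x, y = x + vx * dt, y + vy * dt
    return x, y

# Starting from (1, 0), the lego man is carried around the unit circle;
# re-running from the same start always reproduces the same path.
x, y = trajectory(1.0, 0.0)
print(round((x**2 + y**2) ** 0.5, 2))  # radius stays close to 1
```

Each small step is exactly the "succession of small steps" of the chapter: the step size dt plays the role of the unnoticeably small step, and shrinking it brings the numerical path ever closer to the unique true trajectory.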
Catrine Kostenius and Ulrika Bergmark, Associate Professor of Education, have for a year followed 15 pupils in grade 3 at a primary school in a smaller city in northern Sweden. The purpose has been to find out which school situations students experience as meaningful and how these experiences can guide educational improvement. Meaningful school situations entail, for example, meetings and specific events that students value for their learning, development, and well-being. Such situations can make a valuable difference for the students involved. The researchers' analysis resulted in four themes: - Having the opportunity to learn in different spaces. - Being free and able to participate. - Experiencing caring and sharing. - Recognizing one’s own growth and achievement. The findings suggest that situations students find meaningful involve aspects of both learning and well-being. The practical implication of these results is that student-generated qualitative data can help indicate needs for educational improvement. Importance of appreciation – We have not wanted to identify problems but have had the perspective to look at what works, and we see in our research that it is something that the children can describe. They can link health and learning, even though they are no more than ten years old, says Ulrika Bergmark. The children in the study “Students’ experiences of meaningful situations in school” emphasize the importance of appreciation, of being seen and heard, and of having someone see their contribution as an important foundation for their enjoyment and well-being at school. – Appreciation is more than compliments. It's not about "curling" (overprotective parenting); the kids are involved themselves. Instead, we raise the level of their influence and participation. Participation is a pillar of health promotion, says Catrine Kostenius.
Children’s Application of Theory of Mind in Reasoning and Language Many social situations require a mental model of the knowledge, beliefs, goals, and intentions of others: a Theory of Mind (ToM). If a person can reason about other people’s beliefs about his own beliefs or intentions, he is demonstrating second-order ToM reasoning. A standard task to test second-order ToM reasoning is the second-order false belief task. A different approach to investigating ToM reasoning is through its application in a strategic game. Another task that is believed to involve the application of second-order ToM is the comprehension of sentences that the hearer can only understand by considering the speaker’s alternatives. In this study we tested 40 children between 8 and 10 years old and 27 adult controls on (adaptations of) the three tasks mentioned above: the false belief task, a strategic game, and a sentence comprehension task. The results show interesting differences between adults and children, between the three tasks, and between this study and previous research.
Keywords: False belief · Second-order reasoning · Sentence comprehension · Strategic game · Theory of Mind
"Allergy" refers to a variety of conditions caused by an adverse reaction of the immune system to substances in the environment. Some substances, such as dust mites, animal dander, molds, or pollens, can be inhaled. Others, such as poison ivy, cause reactions upon contact with the skin. Still others, such as foods, are ingested. With a food allergy, the body reacts as though that particular food product is harmful. Ninety percent of all food allergies are caused by just eight foods: milk, peanuts, tree nuts, eggs, fish, shellfish, soy, and wheat. The Centers for Disease Control and Prevention (CDC) estimates that 4%-6% of children ages 4 years and younger have a food allergy. The purpose of this training is to familiarize teachers with various food allergens, potential reactions, and appropriate responses.
Harvard University scientists manipulated the wings of live insects to investigate how wing deformations affect bumblebee aerodynamics. They found that wing flexibility enhances vertical force production, and thus how much weight bees can lift in flight. Insect wings are flexible structures that passively bend and twist during flight, yet only recently has insect flight research explored the aerodynamic consequences of flexible wing deformations, and results from robotic models have contradicted those of computational models on whether wing deformations enhance or diminish aerodynamic force production. Dr Andrew Mountcastle and his colleagues addressed this question for the first time by manipulating the wings of live bees. They artificially stiffened the wings of bumblebees by applying a splint (in the form of a piece of glitter) to a flexible vein joint and carrying out load-lifting tests. They found that stiffening the wings decreased the amount of weight the bees could lift: bees with stiffened wings showed an 8.6 per cent reduction in maximum vertical force production. This cannot be accounted for by changes in wing kinematics, as flapping frequency and amplitude were unchanged. The team therefore concluded that wing flexibility affects aerodynamic force production in a natural behavioural context, a locomotory trait with important ecological implications.
With the help of lasers, cameras can track moving objects hidden around corners, scientists say. The finding could one day help vehicles see around blind corners to avoid collisions, researchers added. Laser scanners are now regularly used to capture 3D images of items. The scanners bounce pulses of light off targets, and because light travels at a constant speed, the devices can measure the amount of time it takes for the pulses to return. This measurement reveals how far the light pulses have traveled, which can be used to recreate what the objects look like in three dimensions. Prior research suggested that lasers could help locate items hidden around corners by firing light pulses at surfaces near the objects. These surfaces can act like mirrors, scattering the light onto any obscured targets. By analyzing the light that is reflected off the objects and other surfaces back to the scanner, researchers can reconstruct the shapes of the items — for instance, an 8-inch-tall (20 centimeters) mannequin. "The ability to see behind a wall is rather remarkable," said the study's senior author Daniele Faccio, a physicist at Heriot-Watt University in Edinburgh, Scotland. One potential application of this research is a system that helps cars see around bends to avoid collisions. "If the other vehicle or person is arriving too fast, implying that there could be a collision, then the system could feed this information to the car, which could then autonomously decide to slow down," Faccio told Live Science. However, one of the weaknesses of previous research was the length of time it took to reconstruct the image of an object. This prevented researchers from being able to use this method to track moving items in real time. Now, researchers have found a way to see moving objects hidden behind corners in just seconds instead of hours. The new system is made up of a laser and a camera.
The laser used was extraordinarily fast, capable of firing 67 million pulses per second, with each pulse lasting just 10 femtoseconds. (A femtosecond is one-millionth of one-billionth of a second.) The camera was sensitive enough to detect single photons, or packets of light, and was fast enough to capture photons every 50 picoseconds. (A picosecond is one-millionth of one-millionth of a second.) In experiments, the scientists fired laser pulses onto a white cardboard floor just in front of a black cardboard corner. This light reflected onto a hidden object, a foam statue of a human measuring 11.8 inches (30 centimeters) high. Because of the camera's speed and sensitivity, after only 3 seconds of capturing data on the hidden objects, it was able to locate objects hidden behind a corner with up to 0.4 inches (1 cm) of precision. The scientists could reliably track an item located about 3 feet (1 meter) from the camera while the item moved about 1.1 inches (2.8 cm) per second. The scientists cautioned that they cannot use this method yet to generate 3D images of the objects the camera detects. Faccio said that future research could improve the system by helping it see in full 3D, as well as by making it detect images hundreds of feet away and faster than the 3 seconds it now takes. "Extending the detection distance — for example, up to hundreds of meters — is a great challenge, but we are confident that as the technology gets better and better, this will become possible," Faccio said. "It is clear that now we need better cameras, and these are indeed under development as we speak." Faccio, along with study lead author and doctoral student Genevieve Gariepy at Heriot-Watt University and their colleagues, detailed their findings online Dec. 7 in the journal Nature Photonics.
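The ranging arithmetic behind such time-of-flight measurements can be sketched in a few lines. This is an illustrative sketch only, not the researchers' code:

```python
# Sketch: time-of-flight ranging arithmetic (illustrative, not from the study).
C = 299_792_458.0  # speed of light in vacuum, m/s

def round_trip_distance(delay_s: float) -> float:
    """Distance to a reflecting surface from a pulse's round-trip delay."""
    return C * delay_s / 2.0

# The camera bins photon arrivals every 50 picoseconds; each bin therefore
# spans roughly 7.5 mm of range:
bin_resolution_m = round_trip_distance(50e-12)
print(f"{bin_resolution_m * 1000:.1f} mm per 50 ps bin")  # ~7.5 mm
```

Note that this 50-picosecond binning corresponds to millimetre-scale range resolution, consistent with the roughly 1-centimetre positioning precision reported above.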
Adapted and Multicultural Physical Education for Teaching Majors
- Identify, select, and implement appropriate instruction that is sensitive to the strengths/weaknesses, multiple needs, learning styles, and experiences of diverse learners.
- Use appropriate strategies, services, and resources to meet special and diverse learning needs.
- Create a learning environment which respects and incorporates learners’ personal, family, cultural, and community experiences.
- Know the principles that address the physiological and biomechanical applications encountered when working with diverse populations.
- Know how to assess and evaluate students with disabilities in order to make appropriate decisions about special services and program components.
- Know how to teach students with disabilities integrated in regular physical education programs.
- Learn how to provide consultation and staff development activities related to students with disabilities and their IEPs.
- Learn how to work with and attend to the needs of multicultural students integrated in physical education.
Avian Influenza ("Bird Flu") ... and Human Pandemics

Avian Influenza ("Bird Flu") received a great amount of media coverage, but there currently are no bird or human cases in the U.S. Most reported human cases have resulted from direct contact with infected poultry. All evidence to date indicates that very close contact with dead or sick birds is the principal source of human infection. Most cases have occurred in households where small flocks of poultry are kept in very close contact with humans. Especially risky behaviors include the slaughtering, defeathering, butchering and food preparation of infected birds. In a few cases, children playing in an area contaminated with bird feces is thought to be the source of infection. Very few cases have been detected in presumed high-risk groups, such as commercial poultry workers, workers at live poultry markets, cullers, veterinarians, and health staff caring for patients without adequate protective equipment. And all of these cases have been outside of the U.S., to date. Migrating birds carry avian influenza viruses, but usually do not get sick. However, this new strain of avian influenza is very contagious to other birds, and can sicken or kill domesticated birds including chickens, ducks and turkeys. The eventual arrival of infected birds in the United States does not signal the start of the disease in humans. At present, H5N1 avian influenza remains largely a disease of birds. The virus does not easily cross from birds to infect humans. The spread of avian influenza viruses from one ill person to another is extremely rare, and unlike most strains of human influenza, transmission has not been observed to continue beyond one person.
However, since these new virus strains do not commonly infect humans, there is little or no immune protection against them in the human population. Therefore, if this particular H5N1 strain of avian influenza were to mutate and become able to spread easily from person to person, an epidemic or a pandemic (a worldwide outbreak of disease) could happen. Pandemic influenza is very different from seasonal influenza. A human pandemic outbreak would be very serious, so there is an effort to promote global preparedness. That is why health departments and news media around the world are closely monitoring the situation. The issue gives agencies and individuals an opportunity to better prepare for a pandemic, and to revise their emergency preparedness and communication plans. You should still take the usual common-sense prevention measures against seasonal influenza: get a flu shot; stay home if you are sick; cover your mouth when you cough or sneeze; wash your hands often; and avoid touching your eyes, nose or mouth. Also exercise, eat healthy food and get plenty of rest.
The first cities we know of were located in Mesopotamia, such as Eridu, Uruk, and Ur, as well as in Egypt along the Nile, in the Indus Valley Civilization, and in China. Before this time it was rare for settlements to reach significant size, although there were exceptions such as Jericho, Çatalhöyük and Mehrgarh. It is estimated that ancient Rome had a population of about a million people by the end of the first century BC, after growing continually during the 3rd, 2nd, and 1st centuries BC. It is generally considered the largest city before 19th-century London. Alexandria’s population was close to Rome’s at around the same time: historians estimate a total population close to a million based on a census dated from 32 CE that counted 180,000 adult male citizens in Alexandria. Similar administrative, commercial, industrial and ceremonial centres emerged in other areas, most notably Baghdad, which some urban historians regard, rather than Rome, as the first city to exceed a population of one million, by the 8th century.

Viva La Revolucion!

The industrial revolution from the late 18th century onward led to massive urbanization and the rise of new great cities, first in Europe and then in other regions, as new opportunities brought huge numbers of migrants from rural communities into urban areas. In the United States from 1860 to 1910, railroads reduced transportation costs and large manufacturing centres began to emerge, making migration from rural areas to cities possible. However, cities during those periods were deadly places to live, due to health problems resulting from contaminated water, air and disease. In 1950, 30 percent of the world’s population lived in cities. In 2000 this proportion had grown to 47 percent, and it is predicted to rise to 60 percent by 2030. Urbanites earn more than rural residents, because city living facilitates learning, innovation and specialisation.
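The kind of extrapolation behind the Alexandria estimate is simple arithmetic. The multiplier below is a hypothetical assumption chosen for illustration, not a figure from the text:

```python
# Scaling a census count of adult male citizens up to a total population.
# The multiplier is an assumed, illustrative ratio; historians' actual
# factors (covering women, children, slaves, and non-citizens) vary.
adult_male_citizens = 180_000    # from the census of 32 CE
residents_per_citizen = 5.5      # assumed total residents per adult male citizen

estimated_total = adult_male_citizens * residents_per_citizen
print(f"{estimated_total:,.0f}")  # 990,000 -- close to a million
```

Any plausible multiplier in this range lands near the one-million figure historians cite, which is why the 180,000-citizen count supports the estimate.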
Richer workers can afford to purchase more energy-intensive durables such as cars and household appliances. As a consequence, urban populations consume 75 per cent of the world’s natural resources while simultaneously producing 75 per cent of the planet’s waste. Nearly 200 years ago, London was the only city in the world with more than one million people. Today, across the globe, there are more than 400 cities at least that size. Modern cities have become so large that they actually create their own micro-climates, due to the large clustering of hard surfaces that heat up in sunlight and that channel rainwater into underground ducts. As a result, city weather is often windier and cloudier than the weather in the surrounding countryside. And because these effects make cities warmer than the surrounding area, they also cause significant knock-on environmental effects, contributing to global warming.

Deforestation and Dustbowls

According to the UN’s Millennium Ecosystem Assessment report, 47% of the Earth’s land surface was covered with forests prior to the modern industrial era; today the planet is left with only 10% of that. Every day, thousands of rural poor in India move to big cities where there are few environmental policies in place. Though they have come in search of a better life, many eventually end up living in slums, with no access to safe water or sanitation facilities. Yet they add to the increasing demands of the city population for food and energy. According to UN population surveys, India is likely to have 700 million rural poor moving to its cities by 2050 if the current trend is not reversed in the next few years. With 45,000 plant and nearly 90,000 animal species, India is considered one of the world’s most mega-diverse countries. Experts say the continued growth in its urban population could lead to enormous loss of biodiversity.
In China, a human population of 1.3 billion and a livestock population of just over 400 million are weighing heavily on the land. Huge flocks of sheep and goats in the northwest are stripping the land of its protective vegetation, creating a dust bowl on a scale not seen before. Northwestern China is on the verge of a massive ecological meltdown. Desert expansion has accelerated with each successive decade since 1950. China’s Environmental Protection Agency reports that the Gobi Desert expanded by 52,400 sq km (20,240 square miles) from 1994 to 1999. With the advancing Gobi now within 150 miles of Beijing, China’s leaders are beginning to sense the gravity of the situation. The strong winds of late winter and early spring can remove literally millions of tons of topsoil in a single day – soil which can take centuries to replace. For the outside world, it is these dust storms that draw attention to the deserts that are forming in China. On 12 April 2002, for instance, South Korea was engulfed by a huge dust storm from China that left people in Seoul literally gasping for breath. Schools were closed, airline flights were cancelled, and clinics were overrun with patients having difficulty breathing. Koreans have come to dread the arrival of what they now call “the fifth season” – the dust storms of late winter and early spring. Japan also suffers from dust storms originating in China. Although not as directly exposed as Koreans are, the Japanese complain about the dust and the brown rain that streaks their windshields and windows. Urban Solutions or Urban Myths? However some argue that urban growth offers some beneficial trends. For instance, since urbanites have fewer children than rural households, cities generally have slower aggregate population growth than rural nations. Also since cities are often deemed hotbeds of innovation, urban nations are more likely to develop green technologies such as hybrid vehicles and alternative fuel sources. 
Technological advance can certainly help to further reduce greenhouse gas emissions per dollar of national income. But if technology is to come to the rescue, economic actors, including countries, firms and individuals, must have sufficiently strong incentives to reduce carbon dioxide and other greenhouse gas emissions. On 15 May 2005, mayors from around the globe took the historic step of signing the Urban Environmental Accords in the rotunda of San Francisco City Hall in recognition of United Nations World Environment Day 2005. Delegates from 50 of the largest industrialised cities on the planet drew up a charter that they claimed would chart a new and bold course toward urban environmental sustainability. Let’s hope these so-called historic accords and protocols produce more than just additional bureaucratic ‘hot air’.
Whistling can be a way to command attention, call a dog or carry a beautiful melody. Once you find your sweet spot, practice as often as possible to gain greater control over your tone and volume. However, not everyone masters whistling right away, so don't be discouraged: keep practicing, or try a different method. There are three main ways to whistle: by puckering up with your lips, using your tongue, and using your fingers.

Whistling With Your Lips

1. Pucker your lips. Pretend like you're about to give a kiss, and make your lips into a puckered shape. The opening in your lips should be small and circular. Your breath flowing through this opening will produce a range of notes.
- Another way to get your lips in the right position is to say the word "two."
- Your lips should not be resting against your teeth. Instead, they should be stretched slightly forward.
- If your lips are quite dry, lick them before you begin whistling. This may help improve the sound you produce.

2. Curl your tongue slightly. Curl the edges of your tongue slightly upward. As you begin whistling, you'll change the shape of your tongue to produce different notes.
- For beginners, rest your tongue against your bottom row of teeth. Eventually, you should learn to move the shape of your tongue to form different tones.

3. Begin blowing air over your tongue and through your lips. Blow gently, slightly altering the shape of your lips and the curve of your tongue until you're able to produce a clear note. This may take a few minutes of practice, so don't give up too quickly.
- Don't blow hard, just softly at first. You'll be able to whistle more loudly once you find the right form for your lips and tongue to take.
- Wet your lips again if they dry out while you're practicing.
- Pay attention to the shape of your mouth when you find a note. In what exact position are your lips and tongue? Once you find the note, keep practicing. Try blowing harder in order to sustain the note.
4. Experiment with the position of your tongue to produce other notes. Try pushing it slightly forward to produce higher notes, and lifting it from the bottom of your mouth for lower notes. Play around until you're able to whistle up and down the scale.
- To produce lower tones, you'll notice your jaw is lower as well. Producing lower tones requires creating a bigger mouth area. You might even point your chin downward when whistling low notes.
- Your lips will be slightly tighter when you're producing higher notes. You might lift your head up to whistle a high note.
- If you're hissing instead of whistling, your tongue might be too close to the roof of your mouth.

Whistling With Your Tongue

1. Pull back your lips. Your upper lip should be tight against your upper teeth, which may be slightly exposed. Your lower lip should be tight against your lower teeth, which should be fully covered. Your mouth should look like you are smiling with no teeth. This positioning will create a very loud, attention-grabbing whistle of the sort you can use to hail a cab when your hands are full.
- Use your fingers to set your lips into place until you get the positioning correct.

2. Draw your tongue back. Position it so that it is broad and flat, and hovering just behind your bottom teeth. There should still be a slight space between your tongue and bottom teeth, but don't let them touch.

3. Blow across your tongue and over your bottom teeth and lip. Direct your breath downwards towards your lower teeth. You should be able to feel the downward force of the air on your tongue. The air will flow at a sharp angle created by the top of your tongue and your upper teeth, downward across your lower teeth and lip. This produces a uniquely loud tone.
- This whistle will require some practice and exercise. Your jaw, tongue and mouth will all be slightly strained when you whistle this way.
- Try to broaden and flatten the tip of your tongue until you produce a loud, clear tone.
- Remember that your tongue should float in your mouth more or less at the level of your bottom row of teeth.

4. Experiment to produce more sounds. Changing the position of your tongue, cheek muscles, and jaw will produce a wide variety of whistle sounds.

Whistling With Your Fingers

1. Decide which fingers to use. When you whistle with your fingers, you use them to hold your lips in place to make it possible to produce the clearest note you can. Each person should decide which fingers create the best possible whistle for them. Your individual finger positioning will be determined by the size and shape of your fingers and mouth. Consider the following possibilities:
- Using both your right and left index fingers.
- Using both your right and left middle fingers.
- Using your right and left pinkie fingers.
- Using the thumb and middle or index finger of one hand.

2. Make an inverted "v" shape with your fingers. Whichever combination of fingers you're using, put them together to make an upside-down "v" shape. The bottom of the "v" is where your fingers connect with your mouth.
- Be sure to wash your hands before you put your fingers in your mouth.

3. Place the tip of the "v" shape under your tongue. The two fingers should meet just under your tongue, behind your back teeth.

4. Close your lips over your fingers. There should be a small hole right between your fingers.
- Close your mouth tight over your fingers to ensure air only goes through the hole between your two fingers for a more concentrated sound.

5. Blow through the hole. This technique should produce a loud, shrill sound perfect for calling your dog home or getting your friends' attention. Keep practicing until your fingers, tongue and lips are in the correct position to produce a strong sound.
- Don't blow too hard at first. Gradually increase the strength of the air you blow until you make the right sound.
- Try different finger combinations.
You might not be able to whistle over certain fingers, but other fingers might just be the right size to produce a sound.

Q: How do I whistle with my mouth open?
A: Just pucker your lips tightly, then softly blow. It usually works better if you lick your lips first. Be sure to practice plenty, as it takes time to master this skill.

Q: How do I make a hole with my 'V'-shaped fingers?
A: After placing your fingers in the 'V' shape, rest them on your bottom row of teeth, with the tips of the fingers which form the 'V' about halfway in from the tip to the first joint. Rest your tongue on top of the 'V', and close your lips over them. Blow gently but firmly, and move the 'V' up and down until you get a whistling sound, then work on making it a louder and more shrill whistle.

Q: If none of these methods work, do I need to blow softer, and then harder?
A: Blowing harder should only make the whistle louder; it's generally not the factor that produces the whistling sound. However, if you're blowing extremely softly, blowing harder may be the solution. The positioning of your tongue, lips, fingers, etc. is what actually makes the sound, so prioritize finding the correct positions first.

Q: How come I can't whistle? I tried so hard, but I just can't!
A: Whistling takes some time and a little practice. You should try experimenting with different mouth, tongue, or finger positions; try blowing harder, softer, and other variables like that.

Q: How can I whistle with my lips for a longer time? I get tired very easily.
A: Take some deep breaths beforehand so that you can whistle longer. This will also help you to have a bit more energy.

Q: What do I do when my voice cracks or breaks when I whistle?
A: Your voice does not crack when you whistle. If your lips aren't capable of holding the note, it may sound like your voice is cracking, but with practice it will not occur any more.
The higher the note, the more likely it will sound like it's cracking.
- Moving your lips in a smile motion will increase pitch. It's best to get to know your range this way.
- Every whistle has a "sweet spot" where the shape is correct for a long, clear tone. Practice with the above whistles until you find your sweet spot.
- For most people, whistling is easier if your lips are moist. Try licking your lips, and maybe taking a sip of water.
- When you exhale, try to raise your diaphragm so that your air escapes in a slightly raised direction.
- Don't blow hard, especially when practicing. This will give you more air to practice with, and it is better to get the sound and shape right before going for volume.
Etymology of the Word Rhombus

Date: 01/30/2003 at 09:07:39
From: Ms. Judy
Subject: 2D shape

My first grade class would like to know where the word rhombus comes from, as they find it an unusual word, and why the shape is called this.

Date: 01/30/2003 at 09:11:37
From: Doctor Sarah
Subject: Re: 2D shape

Hi Ms. Judy - thanks for writing to Dr. Math. From Steven Schwartzman's _The Words of Mathematics - An Etymological Dictionary of Mathematical Terms Used in English_ (Mathematical Association of America):

rhombus (noun), rhombic (adjective): rhombus is a Latin word borrowed from Greek rhombos. The Indo-European root is wer- "to turn, to bend." A native English cognate is wrap. A rhombos in Greek was what is known among anthropologists as a bull-roarer, a small object rapidly swung about on a cord in order to make a noise. Such objects were used in religious ceremonies by many cultures, not just the ancient Greeks. Apparently the shape of the Greek rhombos was akin to what we now call a rhombus: a parallelogram with all sides equal.

- Doctor Sarah, The Math Forum
http://mathforum.org/dr.math/
X-linked agammaglobulinemia (XLA) is a condition that affects the immune system and occurs almost exclusively in males. People with XLA have very few B cells, which are specialized white blood cells that help protect the body against infection. B cells can mature into the cells that produce special proteins called antibodies or immunoglobulins. Antibodies attach to specific foreign particles and germs, marking them for destruction. Individuals with XLA are more susceptible to infections because their body makes very few antibodies. Children with XLA are usually healthy for the first 1 or 2 months of life because they are protected by antibodies acquired before birth from their mother. After this time, the maternal antibodies are cleared from the body, and the affected child begins to develop recurrent infections. In children with XLA, infections generally take longer to get better and then they come back again, even with antibiotic medications. The most common bacterial infections that occur in people with XLA are lung infections (pneumonia and bronchitis), ear infections (otitis), pink eye (conjunctivitis), and sinus infections (sinusitis). Infections that cause chronic diarrhea are also common. Recurrent infections can lead to organ damage. People with XLA can develop severe, life-threatening bacterial infections; however, affected individuals are not particularly vulnerable to infections caused by viruses. With treatment to replace antibodies, infections can usually be prevented, improving the quality of life for people with XLA. XLA occurs in approximately 1 in 200,000 newborns. Mutations in the BTK gene cause XLA. This gene provides instructions for making the BTK protein, which is important for the development of B cells and normal functioning of the immune system. Most mutations in the BTK gene prevent the production of any BTK protein. The absence of functional BTK protein blocks B cell development and leads to a lack of antibodies. 
Without antibodies, the immune system cannot properly respond to foreign invaders and prevent infection. This condition is inherited in an X-linked recessive pattern. The gene associated with this condition is located on the X chromosome, which is one of the two sex chromosomes. In males (who have only one X chromosome), one altered copy of the gene in each cell is sufficient to cause the condition. In females (who have two X chromosomes), a mutation would have to occur in both copies of the gene to cause the disorder. Because it is unlikely that females will have two altered copies of this gene, males are affected by X-linked recessive disorders much more frequently than females. A characteristic of X-linked inheritance is that fathers cannot pass X-linked traits to their sons. About half of affected individuals do not have a family history of XLA. In most of these cases, the affected person's mother is a carrier of one altered BTK gene. Carriers do not have the immune system abnormalities associated with XLA, but they can pass the altered gene to their children. In other cases, the mother is not a carrier and the affected individual has a new mutation in the BTK gene. X-linked agammaglobulinemia is also known as Bruton's agammaglobulinemia or congenital agammaglobulinemia.
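The X-linked recessive pattern described above can be enumerated directly. The sketch below assumes a carrier mother and an unaffected father, with X* marking the altered BTK gene; the labels are illustrative:

```python
from itertools import product

# Each child inherits one allele from each parent; a Y from the father makes a son.
mother = ["X*", "X"]   # carrier: one altered X (X*), one normal X
father = ["X", "Y"]    # unaffected

for m, f in product(mother, father):
    if f == "Y":
        status = "affected son" if m == "X*" else "unaffected son"
    else:
        status = "carrier daughter" if m == "X*" else "unaffected daughter"
    print(f"{m}{f}: {status}")
```

On average, half the sons of a carrier mother are affected and half the daughters are carriers; and, as noted above, a father never passes the trait to his sons, since sons receive his Y chromosome rather than his X.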
Sampled from ancient Roman maritime concrete near Naples, Italy, this 9-centimeter-diameter drill core comprises lime (white spots), lava (dark fragments), pumice (yellowish inclusions), volcanic ash, and other volcanic crystalline materials. Credit: Carol B Hagen Photography In De Architectura, Vitruvius described with amazement a building material mastered by the Romans: “There is also a kind of powder which from natural causes produces astonishing results. … This substance, when mixed with lime and rubble, not only lends strength to buildings of other kinds, but even when piers of it are constructed in the sea, they set hard under water.” Today, concrete continues to enjoy unprecedented popularity in building construction. It is the most common manmade material on earth, the second most consumed substance after water, and the veritable foundation of contemporary society. Although modern concrete may be considered an advanced building material, it pales in comparison to the original Roman formulation. Simply compare today’s version—which often shows degradation after 50 years—with the Roman monuments that still stand after two millennia and the underwater Roman structures that show little decay despite their harsh marine environments. Scientists at the University of California, Berkeley, recently studied Roman marine concrete to understand the ancient material’s secrets. Using X-ray spectroscopy on samples excavated from a harbor near Tuscany, Italy, they found evidence of the stable compound calcium-aluminum-silicate-hydrate (C-A-S-H). By contrast, modern Portland cement contains calcium-silicate-hydrate (C-S-H). The researchers conclude that the addition of aluminum and the reduced amount of silicon in the Roman version result in its superior longevity. They also found that Roman concrete contains crystal lattices made of aluminum tobermorite, a hydration product that improves stiffness—and which modern Portland cement lacks.
Roman concrete is also less carbon intensive. In Portland cement production, the limestone and clay mixture must be heated to 1,450 C (2,642 F). The fuel required to reach this temperature—coupled with the carbon dioxide released as the limestone (calcium carbonate) decomposes—emits significant amounts of greenhouse gas. Meanwhile, Roman cement used less lime, which was made from limestone heated to 900 C (1,652 F). This reduction in processing temperature and lime content may be the key to reducing concrete's high carbon footprint. By incorporating pozzolan or volcanic ash materials from regions with large natural deposits, neo-Roman concrete could offset 40 percent of the Portland cement used today, the Berkeley researchers estimate. However, mining more pozzolan is not the only answer. The industrial waste products fly ash, slag, and silica fume, which are already used to offset Portland cement, perform similarly to natural pozzolan. The research suggests that a more thorough study of the C-A-S-H compounds in these materials would determine which most closely approximate the binding characteristics of Roman concrete. Such research could lead to improvements in the longevity and environmental footprint of concrete without the need for additional mining.
Ideal gas law From Wikipedia, the free encyclopedia The state of an amount of gas is determined by its pressure, volume, and temperature through the equation PV = nRT, where: - P is the absolute pressure of the gas, - V is the volume of the gas, - n is the number of moles of gas, - R is the universal gas constant, - T is the absolute temperature.
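As a quick numerical check, PV = nRT can be rearranged to solve for any one variable. The sketch below (plain Python, SI units assumed; the function name is illustrative) solves for pressure:

```python
# Ideal gas law sketch: PV = nRT, solved for pressure.
# R is the universal gas constant in J/(mol·K); all inputs are SI units.

R = 8.314  # J/(mol·K)

def gas_pressure(n_moles, volume_m3, temp_kelvin):
    """Absolute pressure (Pa) of an ideal gas, from PV = nRT."""
    return n_moles * R * temp_kelvin / volume_m3

# One mole at 273.15 K in 22.4 L should come out close to
# one atmosphere (about 101,325 Pa).
p = gas_pressure(1.0, 0.0224, 273.15)
print(round(p))
```

The same function rearranged for volume or temperature follows the identical pattern, which is why the equation is usually memorized once in the PV = nRT form.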
Atmospheric and Gauge Pressure In another lesson, we defined pressure as the force per unit area, and gave the SI units as newtons per meter squared, or pascals. There are actually a couple of different ways to measure pressure, based on our frame of reference. We will further define pressure in terms of absolute pressure and gauge pressure. - Absolute pressure: The total pressure exerted on a system, referenced to zero pascals, and equal to the gauge pressure plus atmospheric pressure. - Gauge pressure: Pressure referenced to one atmosphere; it is the pressure actually shown on the dial of a gauge that registers pressure relative to atmospheric pressure. For example, an ordinary pressure gauge reading of zero does not mean there is no pressure; it means there is no pressure in excess of atmospheric pressure. - Atmospheric pressure: The pressure exerted by the weight of the atmosphere. Absolute pressure = gauge pressure + atmospheric pressure Units of pressure in the SI system: 1 atmosphere = 1.013 × 10⁵ N/m², or 101.3 kilopascals; 1 bar = 1.00 × 10⁵ N/m² = 100 kilopascals. But where does pressure come from? The pressure of a fluid at any depth depends only upon the density of the fluid (ρ), the distance below the surface of the fluid (h), and the gravitational constant (g). The height of a fluid is sometimes referred to as the pressure head. Pressure = density · gravity · height A fluid exerts pressure in all directions. The pressure at a given depth on an object is the same in all directions. It is also independent of the volume of the fluid. For example, the pressure in a swimming pool filled with salt water at a depth of 10 meters is the same as the pressure in the ocean at a depth of 10 meters. The pressure on a submerged object is always perpendicular to the surface it is in contact with. The pressure of the Earth’s atmosphere changes with height, just as the pressure in any fluid changes with the depth of the fluid.
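The two relations above (absolute = gauge + atmospheric, and P = ρ·g·h) can be sketched in a few lines. This snippet is illustrative, not part of the lesson; the constants are the values given in the text:

```python
# Sketch of the pressure relations from the lesson (SI units):
#   absolute pressure = gauge pressure + atmospheric pressure
#   P = rho * g * h   (gauge pressure at depth h in a fluid)

ATM = 1.013e5  # 1 atmosphere in N/m^2 (Pa), from the text

def absolute_pressure(gauge_pa):
    """Total pressure referenced to zero pascals."""
    return gauge_pa + ATM

def fluid_pressure(density, depth, g=9.8):
    """Gauge pressure (Pa) at a given depth in a fluid of given density."""
    return density * g * depth

# A gauge reading of zero means atmospheric pressure, not vacuum:
print(absolute_pressure(0.0))

# Water (1000 kg/m^3) at 10 m depth: the same in a pool or the ocean
# at equal depth, since P depends only on rho, g, and h:
print(fluid_pressure(1000, 10))  # 98,000 Pa gauge
```

Note that `fluid_pressure` returns gauge pressure; to get the absolute pressure a diver feels, pass its result through `absolute_pressure`.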
Hydrostatic pressure is the pressure at the bottom of a column of fluid caused by the weight of the fluid. Hydrostatic pressure exists at all points below the surface, but it is not constant at all points. The hydrostatic pressure at any point depends on both the fluid density and the depth below the fluid surface. A scuba diver experiences the effects of hydrostatic pressure. As the diver goes deeper beneath the surface, more hydrostatic pressure is exerted on him. The amount of hydrostatic pressure depends on the weight of the water and the diver's distance below the surface. A swimmer diving down in a lake can easily observe an increase in pressure with depth. For each meter increase in depth, the swimmer experiences an increase in pressure of 9,810 N/m². Since a liquid is nearly incompressible, its density does not change significantly with increasing depth. Therefore, the increase in pressure is caused solely by the increase in depth. The formula is given by: Pressure = density · gravity · depth Determine the water gauge pressure at a house at the bottom of a hill, fed by a tank of water 8.0 m deep and connected to the house by a pipe that is 120 m long and at an angle of 50° from the horizontal. Assume the tank is full. h = (120 m)(sin 50°) + 8 m ≈ 100 m P = (1000 kg/m³)(9.8 m/s²)(100 m) P ≈ 9.8 × 10⁵ N/m² Here is an interesting point to note. In the absence of friction or other net external forces, fluids follow the same behaviors we have modeled in other lessons. For example, in the above problem, should a hole develop in the pipe right before it enters the house so that the water is free to spray straight up, it will rise to the same height as the level of the water in the tank and remain even with that level as the tank drains and the level drops. To read what others have to say on the subject, try: For Practice Problems, Try: Giancoli Multiple Choice Practice Questions (It will be a few lessons before all of this is covered)
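The hill-and-tank example can be checked numerically. The sketch below reproduces the same two steps (the variable names are illustrative; note that Python's trig functions take radians):

```python
# The hill-and-tank worked example as a numeric check:
# vertical head h = (120 m)(sin 50 deg) + 8 m, then P = rho * g * h.

import math

pipe_length = 120.0           # m, measured along the 50-degree slope
tank_depth = 8.0              # m of water above the pipe inlet
angle = math.radians(50)      # math.sin expects radians

head = pipe_length * math.sin(angle) + tank_depth
pressure = 1000 * 9.8 * head  # rho = 1000 kg/m^3, g = 9.8 m/s^2

print(round(head, 1))         # approximately 100 m of head
print(f"{pressure:.2e}")      # approximately 9.8e5 N/m^2
```

Only the vertical component of the pipe's length contributes to the head, which is why the 120 m length is multiplied by sin 50° rather than used directly.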
This website focuses on the chick-kidnapping behavior displayed by female emperor penguins: when a female returns for her chick after feeding and realizes that the chick is dead or missing, the instinct to raise young drives her to steal a chick from another mated pair. Once the initial urge gives way to the realization that the chick is not her own, she abandons it, leaving the youngster open to the elements and predation. Standing nearly 4 feet tall and weighing around 75 pounds, the Emperor Penguin (Aptenodytes forsteri) is the largest penguin species in the world. The populations usually stay in close proximity to the shores of Antarctica, but individuals have been known to travel several hundred miles north, near the southern coasts of South America and Australia. In these waters, emperor penguins can find all their prey, which consists mainly of fish and cephalopods, and occasionally krill. Emperor penguins are easily recognized by their big black heads, thick blue-grey necks, black beaks, and orange ear patches. The penguins have very dense, thick feathers that are also greasy, so as to retain heat and keep the water off. In addition to the thick feathers, emperor penguins have a layer of blubber under the skin. In the early part of winter, once the sea ice has formed, emperor penguins march toward remote areas of Antarctica to breed. The emperor penguins have a very complicated breeding pattern. Once at the breeding ground, the male and female penguins either look for their mate from the previous year or look for a new mate. Once a mate is found, courtship begins. Not long after, the female will lay her one and only egg. She will keep it between her feet under her thick feathers to keep the egg warm before passing it on to the male and then going off to the sea to feed. The male penguins are now responsible for taking care of the egg until the mother returns. Figure 1.
An emperor penguin colony in the early summer. (http://www.arcturusexpeditions.co.uk/cruises/ross_sea.html)
The world is in a constant state of change. With this change come good things like technological advancements, which in turn result in easier ways of doing things. However, to every positive there is a negative. In the current world, people aim to do things faster and more efficiently than before. While this is a good thing, the dependency on machines that simplify work is high, as is that on fast foods. Fast foods, also known as junk food, are very popular among the working-class citizens of any country, who view them as an easy way of saving time and energy while still achieving the intended goal of being satisfied. These foods are also very popular among those in society who just can’t pull their weight, mainly children and young adults in schools and colleges. This has resulted in the emergence of lifestyle diseases like diabetes, high blood pressure and obesity. This paper therefore aims to discuss at length one of these lifestyle diseases, obesity: what it is, what causes it, its health effects and how to manage it. According to the Oxford Advanced Learner's Dictionary, 6th Edition, to be obese is to be very fat, in a way that is not healthy. Wikipedia defines obesity as a medical condition in which excess body fat has accumulated to the extent that it may have an adverse effect on health. Obesity is therefore a medical condition resulting from a person having excess fat on their body. Wikipedia further explains that obesity is defined by body mass index (BMI) and evaluated in terms of fat distribution via the waist-hip ratio and total cardiovascular risk factors. Thus, there are several levels of obesity. A person with a BMI of 25.0 to 29.9 is classified as being overweight.
One with a BMI of 30.0 to 34.9 is rated as having class one obesity; those falling under the category of class two obesity have a BMI of between 35.0 and 39.9, while those with a BMI greater than or equal to 40 have class three obesity. A BMI greater than or equal to 35 or 40 is categorized as severe obesity; a BMI greater than or equal to 35 accompanied by health conditions related to obesity, or a BMI of 40 to 44.9, is referred to as morbid obesity. In addition, a body mass index greater than or equal to 45 or 50 is referred to as super-obesity. However, some countries categorize obesity differently from others, because it occurs at different BMI levels in different populations. Causes of obesity Many factors have been identified as major causes of obesity. First is the excessive intake of food energy accompanied by little or no physical activity. This is the main cause of obesity, since all the energy accumulated in the body is not spent, thus resulting in the accumulation of fat. Second is the increased reliance on machinery and cars, which has generally resulted in laziness. People have retreated to certain comfort zones, allowing things to be done for them and, in the process, failing to engage in physical activities that would help burn the fat accumulated in the body.
Other factors that have been credited with resulting in obesity include not smoking, or reducing the rate at which one smokes, since smoking tends to reduce appetite; an increase in the intake of medications that result in weight gain; pregnancy at older ages, which generally increases the risk of one’s children becoming obese; generationally passed genetic weight factors; an increase in ethnicities and age groups that tend to be heavier; individuals similar in genotype and phenotype make-up mating, thus resulting in an increased likelihood of the concentration of factors that could lead to obesity; lack of sleep; and pollutants from the environment that result in lower metabolism of lipids, among other factors. The health effects of obesity Obesity is associated with various diseases like asthma, diabetes, cancer and cardiovascular diseases. Excessive weight gain and fat concentration in the body are bound to result in high blood sugar levels as well as difficulties in breathing, hence fostering the emergence of many other opportunistic ailments. Obesity also results in high mortality rates, as it reduces the life expectancy of those with the disease. Studies have shown that the rates at which women with a BMI of over 32 kg/m² have died in recent years have doubled. It is estimated that obesity can reduce the life of a person by up to seven years. The risk of one acquiring many mental as well as physical conditions is usually heightened by obesity. Medical disorders like diabetes mellitus type 2, high blood cholesterol, high blood pressure and high levels of fatty molecules known as triglycerides in the blood arise as a result of obesity. People with obesity also tend to suffer psychologically as a result of being stigmatized by society. They are usually looked down upon, or considered objects of societal ridicule and comedy.
In a society that increasingly emphasizes the beauty of being thin and shaped like a model, obese people tend to suffer from low self-esteem, and some even become suicidal, willing to do anything to avoid the stigma. According to the World Health Organization (WHO), traditional health concerns such as infectious diseases will soon be replaced by obesity as the major cause of illness. This lifestyle disease is increasingly becoming not only a health problem but also a policy one, due to its prevalence, health effects and costs. How to manage obesity In order to manage obesity, two key measures must be put in place: physical exercise and appropriate dieting. People with this condition need to engage in active physical exercises such as running, doing sit-ups, lunges and squats. Cardio exercises like swimming and rope-skipping are credited with helping one to burn fat more quickly, since they involve all the muscles of the body and regulate one’s breathing too. It is often advisable that people dealing with weight problems engage the services of physical trainers or enroll in gym memberships so as to get professional counsel. They are advised not to over-exert themselves nor give up when the going gets tough. Appropriate dieting, on the other hand, involves eating correct food portions, neither too large nor too small; balancing all the food groups (proteins, vitamins and carbohydrates); and being disciplined enough to avoid junk food high in sugar and fat. While these two measures would seem easy for the ordinary person, they are laborious for an obese person, who will require not only physical but also emotional support from those around them. Parents are advised to feed their children well-balanced foods and to monitor their participation in physical activities in order to reduce the chances of them dealing with obesity in the future.
Medications such as orlistat have been introduced worldwide to help overweight people deal with the weight, and they have proven successful in the short term, though the long-term effects of these medications are yet to be determined. Bariatric surgery has also proven to be a successful way of dealing with obesity, though the high costs and risks involved have scared many off. (Kushne 34)
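The BMI categories described earlier in this essay can be summarized in a short sketch. The thresholds below follow the text's main classification; actual clinical cut-offs vary by country and population, and the function name is illustrative:

```python
# Sketch of the main BMI categories listed in the essay.
# Thresholds are the ones given in the text; real clinical
# classifications differ between countries.

def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms over height in meters, squared."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    if value < 25.0:
        return "not overweight"
    if value < 30.0:
        return "overweight"
    if value < 35.0:
        return "class one obesity"
    if value < 40.0:
        return "class two obesity"
    return "class three obesity"

# 95 kg at 1.75 m gives a BMI of about 31, i.e. class one obesity:
print(bmi_category(bmi(95, 1.75)))
```

Note that BMI alone is a screening number; as the essay says, fat distribution (waist-hip ratio) and cardiovascular risk factors are also part of the evaluation.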
A comet that comes within about 50,000 km of the Sun, so that it passes through the solar atmosphere and may actually fall into the Sun. Records of sungrazing comets go back many centuries. In the late 1880s and early 1890s, the German astronomer Heinrich Kreutz (1854–1907) studied the possible sungrazing comets that had been observed until then and determined which were true sungrazers. He also found that the genuine ones all followed the same orbit, with a period of about 800 years, indicating that they were fragments of a single comet that had broken up. To this day, all comets seen to graze the Sun have been members of the Kreutz sungrazers group. The parent may have been a bright comet seen by the Greek astronomer Ephorus in 372 BC to come close to the Sun and then break in two. Several hundred sungrazers have been observed by the SOHO spacecraft out of a total population of perhaps 200,000, the smallest of which may be less than 10 m across.
In the fall of 1604, explorer Samuel de Champlain and his crew arrived from Europe to make landfall on the terrain that would eventually become Acadia National Park. De Champlain, who mapped the area and named it “Isle de Monts Desert,” recalled that the “summits are all bare and rocky. The slopes are covered with pines, firs and birches.” Though much has changed over the 400-plus years that followed, odds are good de Champlain would still recognize the rolling, tree-covered terrain and rocky shoreline today. Compared to the original inhabitants of the area, however, de Champlain was a relative latecomer. The Wabanaki people and their ancestors trace their Maine roots back more than ten thousand years. For the area’s earliest inhabitants, Mount Desert Island was well known for its plentiful hunting and fishing. While hunting and fishing are still important to residents and visitors alike, present day Mount Desert Island is even better known as a place for recreation, relaxation and unmatched natural beauty. That transformation began in the early 20th century, when Woodrow Wilson first gave federal status to the land now known as Acadia, establishing it as Sieur de Monts National Monument on July 8, 1916. Less than three years later, on February 26, 1919, the area was re-designated and renamed as Lafayette National Park. Then, on January 19, 1929, the park was again renamed Acadia National Park – and like its appeal, the name has endured. While the U.S. government focused on the establishment and protection of the park – John D. Rockefeller, Jr., a wealthy Mount Desert Island landowner, had plans of his own. From 1915 to 1933, Rockefeller dedicated his efforts and resources to the development of his vast island estate, and the establishment of the carriage paths. Originally intended as a diversion for guests and dignitaries, Rockefeller developed more than 50 miles of trails to provide carriage and horseback access to the island’s remote beauty. 
He spared no expense, and constructed 17 arched granite bridges to achieve his vision. In 1930, Rockefeller commissioned Beatrix Farrand to design planting and landscape plans for the carriage paths. The results show Rockefeller’s and Farrand’s remarkable foresight – still evident today in the beautiful, well-maintained trails. Details – like the hand-cut granite coping stones that protect travelers from steep roadside embankments – still stand as testament to their long-term vision. In the fall of 1947, wildfires consumed more than 10,000 acres of the park. The fires, which burned for days, were finally brought under control by U.S. military forces, National Park employees and local residents. Despite the short-term devastation, the fires ultimately enhanced the Park’s long-term beauty and diversity – and called the Rockefellers back into action. Through their generosity and dedication, reconstruction began as soon as the fires ran their course. Now, more than sixty years later, Acadia consistently ranks among the most-visited national parks in the U.S.
The Story of Jackie Robinson: Bravest Man in Baseball | 760L - Learning Goal - Explain the impact of a significant experience on a person’s life. - Approximately 2 Days (40 minutes for each class) - Necessary Materials - Provided: Events and Effects Chart 1, Events and Effects Chart 2, Events and Effects Worksheet (Student Packet, pages 15-16) Not Provided: Chart paper, markers, The Story of Jackie Robinson, Bravest Man in Baseball by Margaret Davidson Before the Lesson Read Chapter 6: “The Noble Experiment” – Chapter 8: “Oh, What a Year!”; Complete Student Packet Worksheets for Chapter 6: “The Noble Experiment” – Chapter 8: “Oh, What a Year!” Activation & Motivation Explain that there are some events in our lives that are significant, while there are other events in our lives that change us forever. For example, many of my teachers were good when I was growing up, but in 7th grade, I had an amazing teacher. S/he made me realize that stories were my gateway to the past. I think this teacher influenced me to eventually become a teacher, so I could share literature and history with others. Time permitting, ask for 1-2 volunteers who have had life-changing experiences to come to the front of the classroom. The rest of the class will act as “biographers” and will ask interview questions about the event. Scaffold this activity by prompting students with sample questions, such as: Describe how the event happened. When did this happen? Who was involved? How did the event make you feel? How did your life change? Why was it important? I will explain that when I read a biography, I am reading about the important events that happened in a person’s life. Some events are important, like the ones we charted on our comic strip in Lesson 2, but other events are life-changing. They impact a character by making them grow, change, or realize something about the world. I want to be able to explain how life-changing events impact and affect a main character.
To explain the impact of a life-changing event in the life of the main character in a biography, I will first identify an event that helped the character grow, change, or realize something about the world. Then, I will list the effects of the event, and finally use them to explain the impact of the event on the life of the main character. I will model explaining the impact of life-changing events in the life of Jackie Robinson. I will record the life-changing events on Events and Effects Chart 1. First, I will find and describe a life-changing event that happened to Jackie. I will focus on when Carl Johnson convinced Jackie that continuing to belong to a gang would break his mother’s heart. I will describe the event using Who?, What?, Where?, When?, Why?, How? questions and record the information on the Events and Effects Chart 1. Note: See Events and Effects Chart 1 for specific examples. Then, I will write the effects of this event by looking for evidence in the text. I know that because of this conversation, Jackie dropped out of the Pepper Street Gang and worked hard at sports and academics in school. He was able to go to Pasadena Junior College and eventually UCLA because he left the gang and focused on sports and school. Finally, I will use this event to explain how it impacted him. I will record the impact on the Events and Effects Chart 1. This conversation changed Jackie’s life because he decided to leave the gang and stay out of trouble. Instead of taking his anger at prejudice out on his mother and his future, he used his feelings of anger to get into college, play professional sports, and make a positive difference in the world. Ask: "How do I explain the impact of life-changing events on a character?" Students should respond that you identify an event that helped a character change, grow, or realize something about the world. You describe that event and its effects, and use the effects to explain how the event made an impact on the character.
We will examine the impact of the Minor League World Series between Montreal and Louisville on Jackie’s life. We will record our thoughts on the Events and Effects Chart 2. Note: See Events and Effects Chart 2 for specific examples. First, we will describe the event using our reporting questions—Who?, What?, Where?, When?, Why?, and How? We will describe how the players and audience in Louisville were so offensive and rude at the first game that they brought Jackie’s morale down to an all-time low, and he was not able to play his best. We will list the effects of this event. When the teams got to Montreal, the crowd had heard about the disrespectful Louisville team. 5,000 crowd members booed at Louisville. Jackie realized that he was supported by his home team and his crowd, so he played the best game he could and won the Minor League World Series. Finally, we will explain how the experience of the Minor League World Series impacted Jackie. Jackie knew that he would face many opponents who did not like what he was doing, but the support he received from his home team encouraged him to push harder to victory. He realized that he could change the game, and that people were behind him. Because of this, he won the Series and was signed to the Brooklyn Dodgers as the first African-American baseball player in the Major Leagues. You will act like a reporter and explain the effects of Jackie being signed to the Brooklyn Dodgers. You will describe how this happened, and list the effects you find in Chapter 7: “The Loneliest Man” and Chapter 8: “Oh, What a Year!” Finally, you will use that information to explain why the event made a great impact on Jackie’s life. (See Student Packet pages 15-16.) We will come together to discuss how being signed to the Dodgers affected Jackie’s life. We will extend the discussion to talk about how being signed to the Dodgers affected others and how it impacted the world.
Build Student Vocabulary unite |Tier 2 Word: unite| |Contextualize the word as it is used in the story||For this abuse, more than anything else, started to “solidify and unite the entire team behind Jackie. Not one of them was willing to sit by and see someone kick around a man who had his hands tied behind his back.”| |Explain the meaning (student-friendly definition)||To unite means to bring together as a single unit for a common purpose. Jackie’s teammates grew united behind Jackie – they came together as a team for the purpose of protecting Jackie from further racial attacks.| |Students repeat the word||Say the word unite with me: unite.| |Teacher gives examples of the word in other contexts||Schools unite students from many different backgrounds for one purpose: to learn. If you’re trying to convince your teachers to change a rule, you should unite with your friends to discuss it as a group. You can also say that you unite people who belong together: once the king was united with the queen, they were ready to rule the kingdom.| |Students provide examples||Can you give an example of a reason why you might unite a group of people? Students should say, “I might unite a group of people if…”| |Students repeat the word again.||What word are we talking about? unite| |Additional Vocabulary Words||heckled, tremendous, fumbled, despised, repulsive, confronted, taunt, insist| Texts & Materials
How Woodwind Instruments Work Woodwinds are one of the major families of instruments in use today. Woodwinds are basically defined as hollow tubes which, when blown at one end, produce a sound. Most wind instruments have keys or fingerholes to vary the pitch of the sound, and different methods may be used to create the basic sound. Single Reed (Clarinet/Saxophone) The single reed produces a sound by vibrating against the mouthpiece when blown. The reed is held down by a metal ligature. Reeds are very sensitive, and must be cared for to produce the right tone. Double Reed (Oboe/Bassoon) The double reed uses two reeds, tied together, to make a sound. The sound it produces is somewhat nasal, and the reed can be very difficult to build, maintain, and play. Most double reed players make their own reeds. The tight opening of the double reed means that the musician can play long phrases in one breath. Transverse Flute (Flute) A transverse flute works by blowing air across a hole, much like blowing across a bottle makes a sound. It is one of the oldest ways to produce sound from a wind instrument. Transverse flutes are usually held horizontally. Whistle The whistle is very similar to a transverse flute. Instead of a blowhole, air is blown into the end, past an opening further down the instrument, creating roughly the same effect. Playing Different Notes Different notes are created by shortening or lengthening the air column inside the instrument. This is usually achieved by covering certain holes on the instrument, either with keys or fingers. The air column extends to the first open hole. Try this interactive diagram of a flute - click on the blowhole or keys to see how the air column and the pitch of the note is affected.
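The air-column idea can be made concrete with a simplified physics model. Treating a flute roughly as a pipe open at both ends gives a fundamental frequency of f = v/(2L); this is a back-of-the-envelope sketch that ignores end corrections and the details of real instruments:

```python
# Simplified model of why a shorter air column gives a higher note.
# A flute behaves roughly like a pipe open at both ends, with
# fundamental frequency f = v / (2L), where v is the speed of sound
# and L is the air-column length (down to the first open hole).
# End corrections and real-instrument details are ignored.

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C

def fundamental_frequency(column_length_m):
    return SPEED_OF_SOUND / (2 * column_length_m)

# Opening a hole shortens the air column and raises the pitch;
# halving the column doubles the frequency:
print(round(fundamental_frequency(0.60)))  # full 60 cm column
print(round(fundamental_frequency(0.30)))  # half the column, twice the pitch
```

The same inverse relationship between length and pitch explains why uncovering higher fingerholes on any woodwind produces higher notes.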
- Length: 5.5–6.7 in - Wingspan: 10.2–11 in - Weight: 0.6–0.7 oz - Slightly larger than a Tufted Titmouse. - Moucherolle phébi (French) - Mosquero fibi (Spanish) - In 1804, the Eastern Phoebe became the first banded bird in North America. John James Audubon attached silvered thread to an Eastern Phoebe's leg to track its return in successive years. - The Eastern Phoebe is a loner, rarely coming in contact with other phoebes. Even members of a mated pair do not spend much time together. They may roost together early in pair formation, but even during egg laying the female frequently chases the male away from her. - The use of buildings and bridges for nest sites has allowed the Eastern Phoebe to tolerate the landscape changes made by humans and even expand its range. However, it still uses natural nest sites when they are available. - Unlike most birds, Eastern Phoebes often reuse nests in subsequent years—and sometimes Barn Swallows use them in between. In turn, Eastern Phoebes may renovate and use old American Robin or Barn Swallow nests themselves. - The oldest known Eastern Phoebe was 10 years, 4 months old. Eastern Phoebes breed in wooded areas (particularly near water sources) that provide nesting sites—typically human-built structures such as eaves of buildings, overhanging decks, bridges, and culverts. Before these sites were common, phoebes nested on bare rock outcrops and still do occasionally. They seem to choose nest sites with woody understory vegetation nearby, possibly to make the nest site less visible or to provide perches near the nest for the adult. On migration they use wooded habitats and show somewhat less of an association with water. During winter, Eastern Phoebes occur in deciduous woods, more often near woodland edges and openings than in unbroken forests. Flying insects make up the majority of the Eastern Phoebe’s diet.
Common prey include wasps, beetles, dragonflies, butterflies and moths, flies, midges, and cicadas; they also eat spiders, ticks, and millipedes, as well as occasional small fruits or seeds. - Clutch Size - 2–6 eggs - Number of Broods - 1-2 broods - Egg Length - 0.7–0.8 in - Egg Width - 0.6–0.7 in - Incubation Period - 15–16 days - Nestling Period - 16–20 days - Egg Description - White, sometimes speckled with reddish brown - Condition at Hatching - Helpless, eyes closed, with sparse gray down. Only the female builds the nest, often while the male accompanies her. She constructs the nest from mud, moss, and leaves mixed with grass stems and animal hair. The nest may be placed on a firm foundation or it may adhere to a vertical wall using a surface irregularity as a partial foundation. The female may at first need to hover in place while she adds enough of a mud base to perch on. Nests can take 5–14 days to build and are about 5 inches across when finished. The nest cup is 2.5 inches across and 2 inches deep. Unlike most birds, Eastern Phoebes often reuse their nests in subsequent years—and Barn Swallows sometimes use them in between. Eastern Phoebes build nests in niches or under overhangs, where the young will be protected from the elements and fairly safe from predators. They avoid damp crevices and seem to prefer the nests to be close to the roof of whatever alcove they have chosen. Nests are typically less than 15 feet from the ground (in a few cases they have been built below ground level, in a well or cistern). © 2004 Cornell Lab of Ornithology Eastern Phoebes sit alertly on low perches, often twitching their tails as they look out for flying insects. When they spot one, they abruptly leave their perch on quick wingbeats, and chase down their prey in a quick sally—often returning to the same or a nearby perch. Less often, they hover to pick insects or seeds from foliage. Phoebes rarely occur in groups, and even mated pairs spend little time together.
Males sing their two-parted, raspy song throughout the spring and aggressively defend their territory from other Eastern Phoebes, though they tolerate other species. Both sexes, but particularly the female, attempt to defend the nest against such predators as snakes, jays, crows, chipmunks, mice, and House Wrens. Eastern Phoebe populations have been stable or slowly increasing in most areas since 1966, according to the North American Breeding Bird Survey. Partners in Flight estimates the global breeding population to be 32 million, with 76 percent spending some time in the U.S., 33 percent wintering in Mexico, and 33 percent breeding in Canada. They rate an 8 out of 20 on the Continental Concern Score and are not on the 2012 Watch List. Historically, their numbers and range increased as people spread across the landscape and built structures the birds could use as nest sites. Many people enjoy having phoebes nesting nearby, but sometimes homeowners remove nests out of concerns over sanitation or general appearance, as also happens with birds such as American Robins and Barn Swallows. Even if there are suitable structures for nest sites, phoebes also depend on low woody plants for foraging perches, so clearing of understory plants can reduce habitat quality for them. Nest sites can be created in large circular culverts by adding nest platforms, and these have proven to be readily adopted by phoebes. Short- to medium-distance migrant. Eastern Phoebes are among the first migrants to return to their breeding grounds in spring—sometimes as early as March. They migrate south in September–November, finding wintering habitat in the central latitudes of the United States south to Mexico. Find This Bird The Eastern Phoebe’s eponymous song is one of the first indications that spring is returning. It’s also a great way to find phoebes as they go about their business in quiet wooded neighborhoods.
Just don’t mistake the Black-capped Chickadee’s sweet, whistled “fee-bee” call for the phoebe’s song; the phoebe’s is much quicker and raspier. During early summer, a great way to find phoebes is to quietly explore around old buildings and bridges. Look carefully under eaves and overhangs and you may see a nest.

You Might Also Like

If you have a wooded yard, Eastern Phoebes may come to visit, and they may stay to nest if you have quiet outbuildings that could serve as nest sites. Phoebes are flycatchers, so they’re unlikely to come to feeders.
Introduction to Quadratic Equation

The length of a rectangle is 3 cm more than its width. Its area is equal to 54 square centimeters. What is its length and width?

x = width of rectangle
x + 3 = length of rectangle

The area of a rectangle is the product of the length and width, so we have Area = x(x + 3), which is equal to 54. Therefore, we can form the following equation: x(x + 3) = 54. By the distributive property, we have x^2 + 3x = 54.

Finding the value of x

In the equation, we want to find the value of x that makes the equation true. Without algebraic manipulation, we can find the value of x by assigning various values to x. The equation indicates that one number is greater than the other by 3 and their product is 54. Examining the pairs of numbers whose product is 54, we have 1 and 54, 2 and 27, 3 and 18, and 6 and 9. Note: We have excluded the negative factors (e.g. (-1)(-54) = 54) since a side length cannot be negative. Now, 9 - 6 = 3, which means that the side lengths of the rectangle are 6 and 9. Yes, their product is 54 and one is 3 greater than the other.

In the equation above, subtracting 54 from both sides, we have x^2 + 3x - 54 = 0. The equation that we formed is an example of a quadratic equation. A quadratic equation is of the form ax^2 + bx + c = 0, where a, b, and c are real numbers and a is not equal to 0. In the example above, a = 1, b = 3, and c = -54. In the problem above, we got the value of x by testing several values; however, there are more systematic methods: factoring, completing the square, and the quadratic formula. In the next post, we will discuss one of these methods.
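The guess-and-check approach above can also be written directly in code, with the quadratic formula giving the width in one step. Here is a minimal Python sketch; the function name is ours, purely for illustration:

```python
import math

def rectangle_sides(area, diff):
    """Solve x * (x + diff) = area for the width x.

    Rearranged: x^2 + diff*x - area = 0, so by the quadratic formula
    x = (-diff + sqrt(diff^2 + 4*area)) / 2, taking only the positive
    root since a side length cannot be negative.
    """
    x = (-diff + math.sqrt(diff ** 2 + 4 * area)) / 2
    return x, x + diff

print(rectangle_sides(54, 3))  # (6.0, 9.0)
```

This reproduces the answer found by testing factor pairs: a width of 6 and a length of 9.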
Classroom Assessment and the National Science Education Standards

encountered by students in learning the particular science concepts that are chosen. These important responsibilities and daily decisions regarding curriculum and assessment underscore the importance of science teachers having a solid background and understanding of the science subject matter that they teach.

THE TEACHER'S ROLE

Much of the responsibility for implementing the science standards rests with classroom teachers. Assessment is no exception. The Standards recognize the importance of a teacher's ongoing assessments and indicate that classroom teachers are in the best position to use assessment in powerful ways for both formative and summative purposes, including improving classroom practice, planning curricula, developing self-directed learners, reporting student progress, and investigating their own teaching practices. Teachers' participation in classroom activities, hour after hour, day after day, positions them to gain information and insight into their students' understandings, actions, interests, intentions, and motivation that would be difficult to glean from tests (Darling-Hammond, 1994; Moss, 1994, 1996). Teachers need not only to interpret the assessment-generated information, they also must use the information to adapt their teaching repertoires to the needs of their students.

Feedback—Cognitive and Affective

The usefulness and effectiveness of formative assessment depend, in part, on the quality and saliency of the information gathered in the first place and the appropriateness and relevance of subsequent actions. The quality of the feedback rather than its existence or absence is the central point (Bangert-Drowns, Kulik, Kulik, & Morgan, 1991; Sadler, 1989).
With regard to feedback, research makes the case for the use of descriptive, criterion-based feedback as opposed to numerical scoring or letter grades without clear criteria (Butler & Neuman, 1995; Cameron & Pierce, 1994; Kluger & DeNisi, 1996). For example, in a study conducted by Butler (1987) with a random sampling of students, individuals completed an assessment task and then received one of three types of feedback: (a) tailored, written remarks addressing criteria they were aware of before taking the assessment, (b) grades derived from scoring of previous work, or (c) both grades and comments. Scores on two subsequent tasks increased most significantly for those who received detailed comments, while scores declined for those who received both comments and grades. For those assigned grades only, scores declined and then increased between the second and third tasks.
The term was coined in 1846 by an Englishman who wanted to use an Anglo-Saxon term for what was then called "popular antiquities". Johann Gottfried von Herder first advocated the deliberate recording and preservation of folklore to document the authentic spirit, tradition, and identity of the German people; the belief that there can be such authenticity is one of the tenets of the romantic nationalism which Herder developed. While folklore can contain religious or mythic elements, it typically concerns itself with the mundane traditions of everyday life. Folklore frequently ties the practical and the esoteric into one narrative package. It has often been conflated with mythology, and vice versa, because it has been assumed that any figurative story that does not pertain to the dominant beliefs of the time is not of the same status as those dominant beliefs. Thus, Roman religion is called "myth" by Christians. In that way, both myth and folklore have become catch-all terms for all figurative narratives which do not correspond with the dominant belief structure. Sometimes "folklore" is religious in nature, like the tales of the Welsh Mabinogion or those found in Icelandic skaldic poetry. In this case, folklore is being used in a quasi-pejorative sense. That is, while the tales of Odin the Wanderer have a religious value to the Norse who wrote the stories, because they do not fit into a Christian configuration they are not "religious" per se. Instead they are "folklore." On the other hand, folklore can be used to accurately describe a figurative narrative which has no theological or religious content, but instead pertains to useful mundane lore. This mundane lore may or may not have components of the fantastic (such as magic, ethereal beings or the absurdist personification of inanimate objects). These folktales may emerge from a religious tradition, but are essentially secular. "Hansel and Gretel" is a strong example of this fine line.
While the element of witchcraft may possibly contain a religious subtext, or at least imply some early euro-pagan origin (like what Margaret Murray or The Golden Bough might describe), it can be said with some degree of certainty that the purpose of the tale is primarily one of mundane instruction regarding forest safety, as well as secondarily a cautionary tale about the dangers of famine to large families. There is moral scope to the work, but not necessarily a religious scope. The modern western folklore that we now face has been identified by some scholars as that of urban legend and conspiracy theory. Only time will tell what of that tradition is practical, what is ephemeral and what is religious. "Hansel and Gretel" lives on today in the tales that inspired the Texas Chainsaw Massacre film. But UFO abduction narratives can be seen, in some sense, to refigure the tales of pre-Christian Europe... or even such tales in the Bible as the ascent of Elijah to Heaven in a spinning wheel. Are these "folktales"? Or is their religious dimension being purposefully, if unconsciously, ignored or suppressed?
This is a very important study. Among its highlights:

- Reducing black carbon and tropospheric ozone now will slow the rate of climate change within the first half of this century.
- Climate benefits from reduced ozone are achieved by reducing emissions of some of its precursors, especially methane, which is also a powerful greenhouse gas.
- These short-lived climate forcers – methane, black carbon and ozone – are fundamentally different from longer-lived greenhouse gases, remaining in the atmosphere for only a relatively short time.
- Deep and immediate carbon dioxide reductions are required to protect long-term climate, as this cannot be achieved by addressing short-lived climate forcers.
- A small number of emission reduction measures targeting black carbon and ozone precursors could immediately begin to protect climate, public health, water and food security, and ecosystems. Measures include the recovery of methane from coal, oil and gas extraction and transport, methane capture in waste management, use of clean-burning stoves for residential cooking, diesel particulate filters for vehicles and the banning of field burning of agricultural waste.
- Widespread implementation is achievable with existing technology but would require significant strategic investment and institutional arrangements.
Forests define the North Cascades. Many other ranges have higher or more impressive mountains, but none, excepting perhaps the Olympics, have more impressive forests. From the very beginning, conservation in the North Cascades has been about protecting and preserving forests. Forests are the living skin holding the mountains together, controlling water runoff, providing wildlife habitat, and even controlling the heights of the mountains themselves over geological timescales by balancing uplift and erosion. And few experiences can surpass that of simply being within one of the very impressive, old forests of the Cascades. Starting in the nineteenth century and throughout most of the twentieth, North Cascade forests were cut as fast as possible. Almost all of the valuable low elevation forests were privatized, often fraudulently, toward the end of the 19th century, and all those lands have now been cut. National Forests were established early in the 20th century, mostly on higher elevation lands that the timber industry did not then want. Logging that started at tidewater gradually worked its way up all major valleys to high elevations by the 1970s and 1980s. A small but significant acreage of forests was preserved in Park and Wilderness areas, but by 1990 almost all of the richer lower elevation forests were cut, even on Federal lands. Many of the forests on public National Forest lands fell victim to taxpayer-subsidized deficit sales. Rising public awareness of the value of what remained, along with legal efforts to protect old growth dependent species, managed to slow the frenzy of destruction just before everything was cut. Thanks to the Northwest Forest Plan (NWFP), adopted by the Clinton administration in 1994, cut levels on the National Forest lands of the Cascades are now just a small fraction of what they were prior to 1990.
The NWFP has done more to save forests than all of the Park and Wilderness areas ever established in the Northwest put together. Since trees tend to grow quickly in the North Cascades, some low elevation valley areas have now grown back with naturally regenerated, mature second growth forests which are well on their way to becoming old growth. Unfortunately, the timber industry and others have begun efforts to alter or eliminate the NWFP and push cut levels back up on the National Forests, often in the guise of “forest restoration.” And the legacy of decades of reckless highball logging, thousands of miles of crumbling logging roads, poses ever-increasing dangers to watersheds and fisheries.

- Protection of North Cascades forestlands from the logging and road building that would result from efforts to raise the timber cut on public lands. NCCC has played a large role in stopping so-called “old growth” legislation that would ostensibly protect old trees while greatly increasing cutting everywhere else.
- Protection of North Cascades forests by ensuring that new Wilderness or other protected areas contain the maximum amount possible of forest lands, particularly lower elevation forest lands.
- Restoration of watersheds and fisheries damaged from decades of heavy logging and road building, by decommissioning roads, stabilizing watersheds, and allowing forests to regrow naturally.

Accomplishments to Date

NCCC has been working intensively to raise public awareness of the importance of North Cascades forests through numerous media articles. Conservation advocates and the public have been made more aware of the damaging effects of so-called “restoration” logging, and the massive roadbuilding that comes with it. NCCC has analyzed, commented on, intervened in, and appealed many National Forest timber sales, resulting in many damaging sales being stopped or greatly modified.
NCCC is also a founding member of, and leading contributor to, the “Washington Watershed Restoration Initiative” (WWRI), which seeks to address the serious problems presented by thousands of miles of crumbling logging roads on National Forest lands in the Cascades. Thanks to WWRI’s efforts, appropriations have been secured to help begin the process of dealing with these thousands of miles of collapsing roads. NCCC is a leading advocate of including forests in new Wilderness areas. Thanks in large part to NCCC’s efforts, over 80,000 acres of forestlands are included in the 106,000-acre Wild Sky Wilderness, signed into law in 2008. The Wild Sky takes in approximately 60,000 acres of high elevation old growth, 14,000 acres of lowland old growth and 6,000 acres of lowland mature natural second growth, as well as nearly 25 miles of salmon spawning streams and rivers. The Wild Sky has a much larger percentage of lands under 3,000 feet (approximately 30%) than previously established Wilderness areas in the Cascades, which have approximately 6% in such lands. NCCC plans to work to ensure similar levels of forest and river protection in future Wilderness efforts in places such as Mt. Baker and the “Seven Rivers” area south of the Skagit.

Current Issues and Activities

- Educating the public and public officials regarding the values of natural forests, and advocating for new Wilderness areas containing a maximum of forest lands. NCCC has been closely involved in efforts currently before Congress to add more than 20,000 acres, most of it lowland old growth and mature second growth forests, to the Alpine Lakes Wilderness in the Pratt and Middle Fork Snoqualmie valleys.
- Opposing damaging timber sales on National Forest lands.
- Countering efforts to increase timber sales disguised as “restoration logging.”
- Opposing so-called “old growth protection” bills that threaten to dramatically raise cut levels on National Forests.
- Advocating for Congressional funding for the decommissioning and restoration of thousands of miles of unnecessary, collapsing old logging roads on National Forests across the North Cascades.

What You Can Do to Help

- Volunteer to help with NCCC outreach activities to the public and public officials communicating forest protection values and advocating appropriate forest management.
- Volunteer with NCCC forest and watershed protection advocates to monitor forest service activities and help prepare statements in opposition to damaging timber sales.
- Join NCCC to advocate for continued and increased federal funding for decommissioning and/or restoring collapsing logging roads.

For more info email us at [email protected]
Let us look once more at the nose of our supersonic aircraft. We saw how the shock waves formed in front of it, slowing the air down almost instantaneously and providing a subsonic patch through which the pressure information could propagate a limited distance upstream at the speed of sound (Fig. 5.2). It should be noted that the shock wave itself is able to make headway against the oncoming stream above the speed of sound. Only weak pressure disturbances travel at the speed of sound. The stronger the shock wave is, the faster it can travel through the air. Considering the problem from the point of view of a stream of air approaching a stationary aircraft, this means that the faster the oncoming stream, the stronger the shock wave at the nose becomes. Thus the changes in pressure, density, temperature and velocity which occur through the shock wave all increase with increasing air speed upstream of the shock wave. A mathematical analysis of the problem shows that the strength of the shock wave, expressed as the ratio of the pressure in front of the wave to that behind, depends solely on the Mach number of the approaching air stream. If we now stand further back from the aircraft we see that the bow shock wave which forms over the nose is, in fact, curved (Fig. 5.3(a)). As we get further from the nose tip so the shock wave becomes inclined to the direction of the oncoming flow. In this region the shock wave is said to be oblique. At the nose, where it is at right angles to the oncoming flow, it is said to be a normal shock wave. The oblique shock wave acts in the same way as the normal wave except that it only affects the component of velocity at right angles to itself. The component of velocity parallel to the wave is completely unaffected. This means that the direction of the flow is changed by an oblique shock (Fig. 5.6) whereas it is unaffected by a normal shock. In both cases, however, the magnitude of the velocity is reduced as the flow passes through the shock wave.

Fig. 5.6 Flow deflection by oblique shock wave. The tangential component Vt remains unchanged but Vn2 < Vn1.

Fig. 5.7 Flow deflection through bow shock wave. Deflection reaches a maximum and then reduces again.

Looking more carefully at the effect of the bow shock wave (Fig. 5.7) we see that, in general, the same flow deflection can be obtained by two possible angles of oblique wave. The reason for this is given in Fig. 5.8. The wave of greater angle at A is stronger because the velocity component normal to the wave front is greater. It therefore changes the oncoming velocity component more than the weaker wave at point B. Adding the resulting velocity components immediately downstream of the shock waves at the two points (Fig. 5.8) shows how a particular point B (where the shock wave is weak) can be chosen with exactly the same flow deflection as at A (with a strong shock wave). It should also be noted that for a normal shock wave the downstream flow is always subsonic, as it is for most strong oblique waves. The fact that the velocity component parallel to the wave is not changed means, however, that the flow downstream of the weak oblique wave is supersonic.

Fig. 5.8 Weak and strong shock waves. A strong shock at A gives the same deflection as a weak shock at B, but a greater pressure jump, since the normal velocity component at A is greater than at B.
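The text states that the strength of a normal shock, expressed as a pressure ratio, depends solely on the Mach number of the approaching stream. As an illustrative sketch (not taken from this book), the standard normal-shock relation for a perfect gas can be computed in a few lines of Python; the function name and the value gamma = 1.4 for air are our choices:

```python
def normal_shock_pressure_ratio(M1, gamma=1.4):
    """Static pressure ratio across a normal shock (downstream over upstream).

    For a perfect gas this depends only on the upstream Mach number M1
    and the ratio of specific heats gamma (about 1.4 for air).
    """
    if M1 <= 1.0:
        raise ValueError("a shock requires supersonic upstream flow (M1 > 1)")
    return 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1 ** 2 - 1.0)

# The shock strengthens rapidly with Mach number:
for M in (1.2, 2.0, 3.0):
    print(M, round(normal_shock_pressure_ratio(M), 3))
```

Running this shows the point made above: the faster the oncoming stream, the stronger the shock, with the pressure jump growing roughly as the square of the Mach number.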
Three-phase power is an electrical system that combines three alternating currents of the same frequency, offset from one another by 120 degrees, delivering power that works more efficiently in industrial power systems than single-phase electrical systems, according to Tripp Lite. The total power delivered by a balanced three-phase supply is constant. Because a three-phase power system has the ability to deliver more efficient power, it should only be used if a single-phase power circuit cannot handle the load of a large system, such as a data center, states Tripp Lite. Most electronics are powered with a single-phase power circuit for this reason, as such electronics do not require constant power delivery. A three-phase power supply is able to deliver constant power because its combined output does not carry the risk of falling to zero. In contrast, the instantaneous power in a single-phase supply drops to zero twice per cycle. Because the three-phase power distribution cabinets that manufacturers produce can power multiple server racks at once, using three-phase power over single-phase power can be a more efficient option for data centers, as claimed by Tripp Lite. Single-phase circuits must be installed on each individual rack, which means that installing one three-phase circuit for multiple server racks cuts labor costs. Data centers also save on costs by having less equipment to keep cool.
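The constant-power claim is easy to verify numerically: on three balanced resistive phases, the instantaneous powers sum to a constant 1.5 x V x I, whereas a single phase dips to zero. A hedged Python sketch (the function and the unit values are illustrative, not from Tripp Lite):

```python
import math

def total_instantaneous_power(t, V=1.0, I=1.0, f=60.0):
    """Sum of instantaneous power on three balanced resistive phases,
    each offset by 120 degrees; the sum is constant at 1.5 * V * I."""
    w = 2.0 * math.pi * f
    return sum(V * I * math.sin(w * t - k * 2.0 * math.pi / 3.0) ** 2
               for k in range(3))

# Sampling at several instants shows no ripple in the three-phase total,
# while a single phase alone (the k = 0 term) touches zero twice per cycle.
for t in (0.0, 0.002, 0.004, 0.006):
    print(round(total_instantaneous_power(t), 6))  # each line prints 1.5
```

The underlying identity is sin²(a) + sin²(a - 120°) + sin²(a + 120°) = 3/2, which is why a balanced three-phase load draws steady power.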
Allergies are becoming more and more common in today’s modern society, whether it is hay fever, watery eyes, respiratory problems or skin allergies in the form of itchy rashes or eczema. These symptoms result from the body’s natural defense system reacting to certain, normally harmless, environmental substances (allergens). Reactions can range from slight discomfort to more debilitating conditions. Diagnosis & Tests To specifically diagnose an allergy problem and to determine the appropriate treatment, your healthcare practitioner will utilize certain diagnostic tests as well as acquire a detailed and comprehensive evaluation of any signs and symptoms that may be indicative of an allergy. Allergy Testing: The Physical Exam Diagnosing allergies begins with a detailed evaluation about your daily habits. A helpful hint is to be prepared for your doctor’s visit in anticipation of questions regarding reactions you may have to food or environmental conditions, for example. Food Allergy Testing Proper testing of food allergens will assure an accurate evaluation and treatment. See a list of the most common food allergies to learn your individual needs. Allergy Symptom Diary If you have a food allergy, it is important to keep a food diary to keep track of your body’s reaction to eliminating certain foods and discovering what you may be allergic to. Allergy Skin Test An allergy skin test is used to identify the substances that are causing your allergy reactions. Learn more about allergy skin tests, including what happens during the test. Blood Test for Allergies See how blood tests are used to diagnose allergies and learn what can interfere with the test.
As teachers, you know weather refers to the state of the atmosphere. The atmosphere is a mixture of invisible gas molecules and dust, and has three layers. The layer closest to Earth is the troposphere. The conditions we experience as weather take place mostly in the troposphere. A region’s weather is not the same as its climate. Every day weather changes according to such things as air temperature, wind and clouds. The climate, however, depends on its average year-round weather conditions. Weather affects human conditions. On a warm sunny day, people wear lightweight clothing. When there is a storm or there are winds, people tend to seek shelter inside. Weather can affect what they eat and drink, too. During summer time, ice-cold drinks are more refreshing than coffee or hot chocolate. Continue reading “Teaching Weather” Teaching outer space and the solar system is one of the most interesting topics discussed in school because of the countless variety of planets and the idea that there is actually something else outside of our world. In the few decades since space exploration began, probes have reached the far regions of the solar system. The solar system is the group of celestial bodies, including Earth, that orbit the Sun. Hundreds of billions of stars can be found in our galaxy, while more than 1,000 comets have been observed regularly through telescopes. To give this topic a little twist, here are tips to have students “get it.” Continue reading “Teaching Space and the Solar System” Electricity is a form of energy, a result of the existence of electrical charge. Its theory and inseparable effect is probably the most accurate and complete of all scientific theories. Because of it, the invention of motors, generators, telephones, radio and television, medical gadgets, computers and nuclear-energy systems has been possible.
Continue reading “Teaching Electricity”

While clouds are not a difficult concept to teach or learn, they often present a challenge in keeping the lesson interesting. The explanation of cloud formation, for example, can quickly become boring at any grade level. An understanding of clouds and cloud formation, however, is important to studies of weather and the water cycle, so students need to fully understand the lesson. As a result, teachers need creative, interesting ways of teaching about clouds. Continue reading “Teaching Clouds”
Homologous recombination is a type of genetic recombination in which nucleotide sequences are exchanged between two similar or identical molecules of DNA. It is most widely used by cells to accurately repair harmful breaks that occur on both strands of DNA, known as double-strand breaks. Homologous recombination also produces new combinations of DNA sequences during meiosis, the process by which eukaryotes make gamete cells, like sperm and egg cells in animals. These new combinations of DNA represent genetic variation in offspring, which in turn enables populations to adapt during the course of evolution. Homologous recombination is also used in horizontal gene transfer to exchange genetic material between different strains and species of bacteria and viruses. Although homologous recombination varies widely among different organisms and cell types, most forms involve the same basic steps. After a double-strand break occurs, sections of DNA around the 5' ends of the break are cut away in a process called resection. In the strand invasion step that follows, an overhanging 3' end of the broken DNA molecule then "invades" a similar or identical DNA molecule that is not broken. After strand invasion, the further sequence of events may follow either of two main pathways discussed below (see Models); the DSBR (double-strand break repair) pathway or the SDSA (synthesis-dependent strand annealing) pathway. Homologous recombination that occurs during DNA repair tends to result in non-crossover products, in effect restoring the damaged DNA molecule as it existed before the double-strand break. Homologous recombination is conserved across all three domains of life as well as viruses, suggesting that it is a nearly universal biological mechanism. The discovery of genes for homologous recombination in protists—a diverse group of eukaryotic microorganisms—has been interpreted as evidence that meiosis emerged early in the evolution of eukaryotes. 
Since their dysfunction has been strongly associated with increased susceptibility to several types of cancer, the proteins that facilitate homologous recombination are topics of active research. Homologous recombination is also used in gene targeting, a technique for introducing genetic changes into target organisms. For their development of this technique, Mario Capecchi, Martin Evans and Oliver Smithies were awarded the 2007 Nobel Prize for Physiology or Medicine; Capecchi and Smithies independently discovered applications to mouse embryonic stem cells. However, the highly conserved mechanisms underlying the DSB repair model, including uniform homologous integration of transformed DNA (gene therapy), were first shown in plasmid experiments by Orr-Weaver, Szostak and Rothstein. Research on plasmid-induced DSBs, using γ-irradiation in the 1970s–1980s, led to later experiments using endonucleases (e.g. I-SceI) to cut chromosomes for genetic engineering of mammalian cells, where nonhomologous recombination is more frequent than in yeast.

History and discovery

In the early 1900s, William Bateson and Reginald Punnett found an exception to one of the principles of inheritance originally described by Gregor Mendel in the 1860s. In contrast to Mendel's notion that traits are independently assorted when passed from parent to child—for example that a cat's hair color and its tail length are inherited independent of each other—Bateson and Punnett showed that certain genes associated with physical traits can be inherited together, or genetically linked.
In 1911, after observing that linked traits could on occasion be inherited separately, Thomas Hunt Morgan suggested that "crossovers" can occur between linked genes, where one of the linked genes physically crosses over to a different chromosome. Two decades later, Barbara McClintock and Harriet Creighton demonstrated that chromosomal crossover occurs during meiosis, the process of cell division by which sperm and egg cells are made. Within the same year as McClintock's discovery, Curt Stern showed that crossing over—later called "recombination"—could also occur in somatic cells like white blood cells and skin cells that divide through mitosis. In 1947, the microbiologist Joshua Lederberg showed that bacteria—which had been assumed to reproduce only asexually through binary fission—are capable of genetic recombination, which is more similar to sexual reproduction. This work established E. coli as a model organism in genetics, and helped Lederberg win the 1958 Nobel Prize in Physiology or Medicine. Building on studies in fungi, in 1964 Robin Holliday proposed a model for recombination in meiosis which introduced key details of how the process can work, including the exchange of material between chromosomes through Holliday junctions. In 1983, Jack Szostak and colleagues presented a model now known as the DSBR pathway, which accounted for observations not explained by the Holliday model. During the next decade, experiments in Drosophila, budding yeast and mammalian cells led to the emergence of other models of homologous recombination, called SDSA pathways, which do not always rely on Holliday junctions. Homologous recombination (HR) is essential to cell division in eukaryotes like plants, animals, fungi and protists. In cells that divide through mitosis, homologous recombination repairs double-strand breaks in DNA caused by ionizing radiation or DNA-damaging chemicals. 
Left unrepaired, these double-strand breaks can cause large-scale rearrangement of chromosomes in somatic cells, which can in turn lead to cancer. In addition to repairing DNA, homologous recombination also helps produce genetic diversity when cells divide in meiosis to become specialized gamete cells—sperm or egg cells in animals, pollen or ovules in plants, and spores in fungi. It does so by facilitating chromosomal crossover, in which regions of similar but not identical DNA are exchanged between homologous chromosomes. This creates new, possibly beneficial combinations of genes, which can give offspring an evolutionary advantage. Chromosomal crossover often begins when a protein called Spo11 makes a targeted double-strand break in DNA. These sites are non-randomly located on the chromosomes, usually in intergenic promoter regions and preferentially in GC-rich domains. These double-strand break sites often occur at recombination hotspots, regions in chromosomes that are about 1,000–2,000 base pairs in length and have high rates of recombination. The absence of a recombination hotspot between two genes on the same chromosome often means that those genes will be inherited by future generations in equal proportion. This represents linkage between the two genes greater than would be expected from genes that independently assort during meiosis.

Timing within the mitotic cell cycle

Double-strand breaks can be repaired through homologous recombination or through non-homologous end joining (NHEJ). NHEJ is a DNA repair mechanism which, unlike homologous recombination, does not require a long homologous sequence to guide repair. Whether homologous recombination or NHEJ is used to repair double-strand breaks is largely determined by the phase of the cell cycle. Homologous recombination repairs DNA before the cell enters mitosis (M phase).
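The chromosomal crossover described above can be pictured with a toy string model: two homologous sequences exchange the segments beyond a break point, yielding two recombinant products. This Python sketch is purely illustrative (the function and sequences are invented for the example, and no biochemical accuracy is implied):

```python
def crossover(chrom_a, chrom_b, breakpoint):
    """Toy model of chromosomal crossover: exchange the segments of two
    homologous sequences beyond a break point, producing two recombinants."""
    if not 0 <= breakpoint <= min(len(chrom_a), len(chrom_b)):
        raise ValueError("breakpoint outside sequence bounds")
    recombinant_1 = chrom_a[:breakpoint] + chrom_b[breakpoint:]
    recombinant_2 = chrom_b[:breakpoint] + chrom_a[breakpoint:]
    return recombinant_1, recombinant_2

# Two toy "homologous chromosomes" carrying different alleles:
print(crossover("AAAAAA", "TTTTTT", 2))  # ('AATTTT', 'TTAAAA')
```

Each recombinant carries material from both parents, which is the source of the genetic variation in offspring mentioned above.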
It occurs during and shortly after DNA replication, in the S and G2 phases of the cell cycle, when sister chromatids are more easily available. Compared to homologous chromosomes, which are similar to one another but often carry different alleles, sister chromatids are an ideal template for homologous recombination because they are an identical copy of a given chromosome. In contrast to homologous recombination, NHEJ is predominant in the G1 phase of the cell cycle, when the cell is growing but not yet ready to divide. It occurs less frequently after the G1 phase, but maintains at least some activity throughout the cell cycle. The mechanisms that regulate homologous recombination and NHEJ throughout the cell cycle vary widely between species. Cyclin-dependent kinases (CDKs), which modify the activity of other proteins by adding phosphate groups to (that is, phosphorylating) them, are important regulators of homologous recombination in eukaryotes. When DNA replication begins in budding yeast, the cyclin-dependent kinase Cdc28 begins homologous recombination by phosphorylating the Sae2 protein. After being so activated by the addition of a phosphate, Sae2 uses its endonuclease activity to make a clean cut near a double-strand break in DNA. This allows a three-part protein known as the MRX complex to bind to DNA, and begins a series of protein-driven reactions that exchange material between two DNA molecules. The packaging of eukaryotic DNA into chromatin presents a barrier to all DNA-based processes that require recruitment of enzymes to their sites of action. To allow HR DNA repair, the chromatin must be remodeled. In eukaryotes, ATP-dependent chromatin remodeling complexes and histone-modifying enzymes are the two predominant factors employed to accomplish this remodeling process. Chromatin relaxation occurs rapidly at the site of DNA damage.
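The phase-dependent choice between HR and NHEJ described above can be summarized as a toy lookup. This is a deliberate simplification for illustration: the text describes regulation through CDK activity (e.g. Cdc28 phosphorylating Sae2 in budding yeast), not a simple switch, and NHEJ retains some activity in every phase.

```python
# Toy model (illustrative only): the predominant double-strand-break
# repair pathway as a function of cell-cycle phase, per the text.
# Real cells show mixed activity in every phase.

def predominant_dsb_repair(phase: str) -> str:
    """Return the predominant repair pathway for a cell-cycle phase."""
    # HR depends on a nearby sister chromatid, which is most easily
    # available during and shortly after replication (S and G2).
    if phase in ("S", "G2"):
        return "homologous recombination"
    # Before replication (G1) there is no sister chromatid, so
    # non-homologous end joining predominates.
    if phase == "G1":
        return "non-homologous end joining"
    raise ValueError(f"phase not covered by this toy model: {phase}")

print(predominant_dsb_repair("G2"))  # homologous recombination
print(predominant_dsb_repair("G1"))  # non-homologous end joining
```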
In one of the earliest steps, the stress-activated protein kinase c-Jun N-terminal kinase (JNK) phosphorylates SIRT6 on serine 10 in response to double-strand breaks or other DNA damage. This post-translational modification facilitates the mobilization of SIRT6 to DNA damage sites, and is required for efficient recruitment of poly (ADP-ribose) polymerase 1 (PARP1) to DNA break sites and for efficient repair of DSBs. PARP1 protein starts to appear at DNA damage sites in less than a second, with half-maximum accumulation within 1.6 seconds after the damage occurs. Next, the chromatin remodeler Alc1 quickly attaches to the product of PARP1 action, a poly-ADP ribose chain, and completes its arrival at the DNA damage within 10 seconds of the occurrence of the damage. About half of the maximum chromatin relaxation, presumably due to the action of Alc1, occurs by 10 seconds. This then allows recruitment of the DNA repair enzyme MRE11, to initiate DNA repair, within 13 seconds. γH2AX, the phosphorylated form of H2AX, is also involved in the early steps leading to chromatin decondensation after DNA double-strand breaks. The histone variant H2AX constitutes about 10% of the H2A histones in human chromatin. γH2AX (H2AX phosphorylated on serine 139) can be detected as soon as 20 seconds after irradiation of cells (with DNA double-strand break formation), and half-maximum accumulation of γH2AX occurs in one minute. The extent of chromatin with phosphorylated γH2AX is about two million base pairs at the site of a DNA double-strand break. γH2AX does not, itself, cause chromatin decondensation, but within 30 seconds of irradiation, RNF8 protein can be detected in association with γH2AX. RNF8 mediates extensive chromatin decondensation through its subsequent interaction with CHD4, a component of the nucleosome remodeling and deacetylase complex NuRD.
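The recruitment kinetics quoted above can be collected into a single ordered timeline. The times are the approximate figures from the text (seconds after the double-strand break); assembling them this way is only a convenience for seeing the order of events.

```python
# Approximate post-damage timeline assembled from the figures in the
# text. Keys are events; values are seconds after the double-strand
# break. All values are approximate.
events = {
    "PARP1 appears at damage site": 1.0,        # "less than a second"
    "PARP1 half-maximum accumulation": 1.6,
    "Alc1 arrival at damage complete": 10.0,
    "half-maximal chromatin relaxation": 10.0,
    "MRE11 recruited, repair initiated": 13.0,
    "gamma-H2AX first detectable": 20.0,
    "RNF8 detectable with gamma-H2AX": 30.0,
    "gamma-H2AX half-maximum accumulation": 60.0,
}

# Print the events in chronological order.
for name, t in sorted(events.items(), key=lambda kv: kv[1]):
    print(f"{t:5.1f} s  {name}")
```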
After undergoing relaxation subsequent to DNA damage, followed by DNA repair, chromatin recovers to a compaction state close to its pre-damage level after about 20 min. Two primary models for how homologous recombination repairs double-strand breaks in DNA are the double-strand break repair (DSBR) pathway (sometimes called the double Holliday junction model) and the synthesis-dependent strand annealing (SDSA) pathway. The two pathways are similar in their first several steps. After a double-strand break occurs, the MRX complex (MRN complex in humans) binds to DNA on either side of the break. Next a resection, in which DNA around the 5' ends of the break is cut back, is carried out in two distinct steps. In the first step of resection, the MRX complex recruits the Sae2 protein. The two proteins then trim back the 5' ends on either side of the break to create short 3' overhangs of single-strand DNA. In the second step, 5'→3' resection is continued by the Sgs1 helicase and the Exo1 and Dna2 nucleases. As a helicase, Sgs1 "unzips" the double-strand DNA, while Exo1 and Dna2's nuclease activity allows them to cut the single-stranded DNA produced by Sgs1. The RPA protein, which has high affinity for single-stranded DNA, then binds the 3' overhangs. With the help of several other proteins that mediate the process, the Rad51 protein (and Dmc1, in meiosis) then forms a filament of nucleic acid and protein on the single strand of DNA coated with RPA. This nucleoprotein filament then begins searching for DNA sequences similar to that of the 3' overhang. After finding such a sequence, the single-stranded nucleoprotein filament moves into (invades) the similar or identical recipient DNA duplex in a process called strand invasion. In cells that divide through mitosis, the recipient DNA duplex is generally a sister chromatid, which is identical to the damaged DNA molecule and provides a template for repair. 
In meiosis, however, the recipient DNA tends to be from a similar but not necessarily identical homologous chromosome. A displacement loop (D-loop) is formed during strand invasion between the invading 3' overhang strand and the homologous chromosome. After strand invasion, a DNA polymerase extends the end of the invading 3' strand by synthesizing new DNA. This changes the D-loop to a cross-shaped structure known as a Holliday junction. Following this, more DNA synthesis occurs on the invading strand (i.e., one of the original 3' overhangs), effectively restoring the strand on the homologous chromosome that was displaced during strand invasion. After the stages of resection, strand invasion and DNA synthesis, the DSBR and SDSA pathways become distinct. The DSBR pathway is unique in that the second 3' overhang (which was not involved in strand invasion) also forms a Holliday junction with the homologous chromosome. The double Holliday junctions are then converted into recombination products by nicking endonucleases, a type of restriction endonuclease which cuts only one DNA strand. The DSBR pathway commonly results in crossover, though it can sometimes result in non-crossover products; the ability of a broken DNA molecule to collect sequences from separated donor loci was shown in mitotic budding yeast using plasmids or endonuclease induction of chromosomal events. Because of this tendency for chromosomal crossover, the DSBR pathway is a likely model of how crossover homologous recombination occurs during meiosis. Whether recombination in the DSBR pathway results in chromosomal crossover is determined by how the double Holliday junction is cut, or "resolved". Chromosomal crossover will occur if one Holliday junction is cut on the crossing strand and the other Holliday junction is cut on the non-crossing strand (in Figure 4, along the horizontal purple arrowheads at one Holliday junction and along the vertical orange arrowheads at the other). 
Alternatively, if the two Holliday junctions are cut on the crossing strands (along the horizontal purple arrowheads at both Holliday junctions in Figure 4), then chromosomes without crossover will be produced. Homologous recombination via the SDSA pathway occurs in cells that divide through mitosis and meiosis and results in non-crossover products. In this model, the invading 3' strand is extended along the recipient DNA duplex by a DNA polymerase, and is released as the Holliday junction between the donor and recipient DNA molecules slides in a process called branch migration. The newly synthesized 3' end of the invading strand is then able to anneal to the other 3' overhang in the damaged chromosome through complementary base pairing. After the strands anneal, a small flap of DNA can sometimes remain. Any such flaps are removed, and the SDSA pathway finishes with the resealing, also known as ligation, of any remaining single-stranded gaps. During mitosis, the major homologous recombination pathway for repairing DNA double-strand breaks appears to be the SDSA pathway (rather than the DSBR pathway). The SDSA pathway produces non-crossover recombinants (Figure 4). During meiosis non-crossover recombinants also occur frequently and these appear to arise mainly by the SDSA pathway as well. Non-crossover recombination events occurring during meiosis likely reflect instances of repair of DNA double-strand damages or other types of DNA damages. The single-strand annealing (SSA) pathway of homologous recombination repairs double-strand breaks between two repeat sequences. The SSA pathway is unique in that it does not require a separate similar or identical molecule of DNA, like the DSBR or SDSA pathways of homologous recombination. Instead, the SSA pathway only requires a single DNA duplex, and uses the repeat sequences as the identical sequences that homologous recombination needs for repair. 
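The double-Holliday-junction resolution rule described above (a crossover results when one junction is cut on the crossing strand and the other on the non-crossing strand; cutting both junctions the same way yields no crossover) reduces to a two-line predicate. The strand labels below are illustrative shorthand, not standard nomenclature, and this is a statement of the rule, not a simulation of the resolvase enzymes.

```python
# Illustrative rule only: the crossover outcome of DSBR depends on
# how each of the two Holliday junctions is resolved. Each cut is
# labeled by which strand is nicked at that junction.

def dsbr_outcome(cut_at_junction_1: str, cut_at_junction_2: str) -> str:
    """Return 'crossover' or 'non-crossover' for a pair of junction cuts."""
    for cut in (cut_at_junction_1, cut_at_junction_2):
        if cut not in ("crossing", "non-crossing"):
            raise ValueError(f"unknown cut type: {cut}")
    # One junction cut on the crossing strand, the other on the
    # non-crossing strand -> chromosomal crossover.
    if cut_at_junction_1 != cut_at_junction_2:
        return "crossover"
    # Both junctions cut the same way -> no crossover.
    return "non-crossover"

print(dsbr_outcome("crossing", "non-crossing"))  # crossover
print(dsbr_outcome("crossing", "crossing"))      # non-crossover
```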
The pathway is relatively simple in concept: after two strands of the same DNA duplex are cut back around the site of the double-strand break, the two resulting 3' overhangs then align and anneal to each other, restoring the DNA as a continuous duplex. As DNA around the double-strand break is cut back, the single-stranded 3' overhangs being produced are coated with the RPA protein, which prevents the 3' overhangs from sticking to themselves. A protein called Rad52 then binds each of the repeat sequences on either side of the break, and aligns them to enable the two complementary repeat sequences to anneal. After annealing is complete, leftover non-homologous flaps of the 3' overhangs are cut away by a set of nucleases, known as Rad1/Rad10, which are brought to the flaps by the Saw1 and Slx4 proteins. New DNA synthesis fills in any gaps, and ligation restores the DNA duplex as two continuous strands. The DNA sequence between the repeats is always lost, as is one of the two repeats. The SSA pathway is considered mutagenic since it results in such deletions of genetic material. During DNA replication, double-strand breaks can sometimes be encountered at replication forks as DNA helicase unzips the template strand. These defects are repaired in the break-induced replication (BIR) pathway of homologous recombination. The precise molecular mechanisms of the BIR pathway remain unclear. Three proposed mechanisms have strand invasion as an initial step, but they differ in how they model the migration of the D-loop and later phases of recombination. The BIR pathway can also help to maintain the length of telomeres (regions of DNA at the end of eukaryotic chromosomes) in the absence of (or in cooperation with) telomerase. Without working copies of the telomerase enzyme, telomeres typically shorten with each cycle of mitosis, which eventually blocks cell division and leads to senescence. 
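The single-strand annealing outcome described above, in which the DNA between two repeats is deleted along with one repeat copy, can be sketched with plain string operations. The sequences are invented for illustration; this models only the end product, not the enzymology (RPA, Rad52, Rad1/Rad10).

```python
# Illustrative sketch of the SSA end product on a made-up sequence:
# repair between two direct repeats deletes one repeat copy and
# everything between the repeats, which is why SSA is mutagenic.

def ssa_product(sequence: str, repeat: str) -> str:
    """Return the repaired duplex after SSA between the first two
    occurrences of `repeat` in `sequence`."""
    first = sequence.find(repeat)
    second = sequence.find(repeat, first + len(repeat)) if first != -1 else -1
    if first == -1 or second == -1:
        raise ValueError("need two copies of the repeat")
    # Keep everything up to and including the first repeat copy, then
    # resume after the second copy: the intervening DNA and one repeat
    # are lost.
    return sequence[: first + len(repeat)] + sequence[second + len(repeat):]

before = "AAGGTACGTCCCCCTACGTGGA"   # two copies of the repeat TACGT
after = ssa_product(before, "TACGT")
print(after)  # AAGGTACGTGGA
```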
In budding yeast cells where telomerase has been inactivated through mutations, two types of "survivor" cells have been observed to avoid senescence longer than expected by elongating their telomeres through BIR pathways. Maintaining telomere length is critical for cell immortalization, a key feature of cancer. Most cancers maintain telomeres by upregulating telomerase. However, in several types of human cancer, a BIR-like pathway helps to sustain some tumors by acting as an alternative mechanism of telomere maintenance. This fact has led scientists to investigate whether such recombination-based mechanisms of telomere maintenance could thwart anti-cancer drugs like telomerase inhibitors. Homologous recombination is a major DNA repair process in bacteria. It is also important for producing genetic diversity in bacterial populations, although the process differs substantially from meiotic recombination, which repairs DNA damages and brings about diversity in eukaryotic genomes. Homologous recombination has been most studied and is best understood for Escherichia coli. Double-strand DNA breaks in bacteria are repaired by the RecBCD pathway of homologous recombination. Breaks that occur on only one of the two DNA strands, known as single-strand gaps, are thought to be repaired by the RecF pathway. Both the RecBCD and RecF pathways include a series of reactions known as branch migration, in which single DNA strands are exchanged between two intercrossed molecules of duplex DNA, and resolution, in which those two intercrossed molecules of DNA are cut apart and restored to their normal double-stranded state. The RecBCD pathway is the main recombination pathway used in many bacteria to repair double-strand breaks in DNA, and the proteins are found in a broad array of bacteria. These double-strand breaks can be caused by UV light and other radiation, as well as chemical mutagens. Double-strand breaks may also arise by DNA replication through a single-strand nick or gap. 
Such a situation causes what is known as a collapsed replication fork and is fixed by several pathways of homologous recombination, including the RecBCD pathway. In this pathway, a three-subunit enzyme complex called RecBCD initiates recombination by binding to a blunt or nearly blunt end of a break in double-strand DNA. After RecBCD binds the DNA end, the RecB and RecD subunits begin unzipping the DNA duplex through helicase activity. The RecB subunit also has a nuclease domain, which cuts the single strand of DNA that emerges from the unzipping process. This unzipping continues until RecBCD encounters a specific nucleotide sequence (5'-GCTGGTGG-3') known as a Chi site. Upon encountering a Chi site, the activity of the RecBCD enzyme changes drastically. DNA unwinding pauses for a few seconds and then resumes at roughly half the initial speed. This is likely because the slower RecB helicase unwinds the DNA after Chi, rather than the faster RecD helicase, which unwinds the DNA before Chi. Recognition of the Chi site also changes the RecBCD enzyme so that it cuts the DNA strand with Chi and begins loading multiple RecA proteins onto the single-stranded DNA with the newly generated 3' end. The resulting RecA-coated nucleoprotein filament then searches out similar sequences of DNA on a homologous chromosome. The search process induces stretching of the DNA duplex, which enhances homology recognition (a mechanism termed conformational proofreading). Upon finding such a sequence, the single-stranded nucleoprotein filament moves into the homologous recipient DNA duplex in a process called strand invasion. The invading 3' overhang causes one of the strands of the recipient DNA duplex to be displaced, forming a D-loop. If the D-loop is cut, another swapping of strands forms a cross-shaped structure called a Holliday junction.
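Locating Chi sites as described above amounts to a plain motif search for 5'-GCTGGTGG-3' along one strand. A minimal sketch (the example sequence is invented):

```python
# Minimal motif scan for Chi sites (5'-GCTGGTGG-3'), the sequence at
# which RecBCD pauses, slows, and switches to loading RecA onto the
# 3'-ended strand. The example sequence below is invented.

CHI = "GCTGGTGG"

def chi_sites(dna: str) -> list[int]:
    """Return the 0-based start positions of every Chi site in `dna`."""
    positions = []
    start = dna.find(CHI)
    while start != -1:
        positions.append(start)
        start = dna.find(CHI, start + 1)  # allow overlapping matches
    return positions

seq = "ATGCTGGTGGAACCGTGCTGGTGGTT"
print(chi_sites(seq))  # [2, 16]
```

A real scan would also need to consider orientation, since RecBCD recognizes Chi only on the strand it is tracking along.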
Resolution of the Holliday junction by some combination of RuvABC or RecG can produce two recombinant DNA molecules with reciprocal genetic types, if the two interacting DNA molecules differ genetically. Alternatively, the invading 3' end near Chi can prime DNA synthesis and form a replication fork. This type of resolution produces only one type of recombinant (non-reciprocal). Bacteria appear to use the RecF pathway of homologous recombination to repair single-strand gaps in DNA. When the RecBCD pathway is inactivated by mutations and additional mutations inactivate the SbcCD and ExoI nucleases, the RecF pathway can also repair DNA double-strand breaks. In the RecF pathway, the RecQ helicase unwinds the DNA and the RecJ nuclease degrades the strand with a 5' end, leaving the strand with the 3' end intact. RecA protein binds to this strand, aided or stabilized by the RecF, RecO, and RecR proteins. The RecA nucleoprotein filament then searches for a homologous DNA and exchanges places with the identical or nearly identical strand in the homologous DNA. Although the proteins and specific mechanisms involved in their initial phases differ, the two pathways are similar in that they both require single-stranded DNA with a 3' end and the RecA protein for strand invasion. The pathways are also similar in their phases of branch migration, in which the Holliday junction slides in one direction, and resolution, in which the Holliday junctions are cleaved apart by enzymes. The alternative, non-reciprocal type of resolution may also occur by either pathway. Immediately after strand invasion, the Holliday junction moves along the linked DNA during the branch migration process. It is in this movement of the Holliday junction that base pairs between the two homologous DNA duplexes are exchanged. To catalyze branch migration, the RuvA protein first recognizes and binds to the Holliday junction and recruits the RuvB protein to form the RuvAB complex.
Two sets of the RuvB protein, which each form a ring-shaped ATPase, are loaded onto opposite sides of the Holliday junction, where they act as twin pumps that provide the force for branch migration. Between those two rings of RuvB, two sets of the RuvA protein assemble in the center of the Holliday junction such that the DNA at the junction is sandwiched between each set of RuvA. The strands of both DNA duplexes—the "donor" and the "recipient" duplexes—are unwound on the surface of RuvA as they are guided by the protein from one duplex to the other. In the resolution phase of recombination, any Holliday junctions formed by the strand invasion process are cut, thereby restoring two separate DNA molecules. This cleavage is done by the RuvAB complex interacting with RuvC, which together form the RuvABC complex. RuvC is an endonuclease that cuts the degenerate sequence 5'-(A/T)TT(G/C)-3'. The sequence is found frequently in DNA, about once every 64 nucleotides. Before cutting, RuvC likely gains access to the Holliday junction by displacing one of the two RuvA tetramers covering the DNA there. Recombination results in either "splice" or "patch" products, depending on how RuvC cleaves the Holliday junction. Splice products are crossover products, in which there is a rearrangement of genetic material around the site of recombination. Patch products, on the other hand, are non-crossover products in which there is no such rearrangement and there is only a "patch" of hybrid DNA in the recombination product.

Facilitating genetic transfer

Homologous recombination is an important method of integrating donor DNA into a recipient organism's genome in horizontal gene transfer, the process by which an organism incorporates foreign DNA from another organism without being the offspring of that organism. Homologous recombination requires incoming DNA to be highly similar to the recipient genome, and so horizontal gene transfer is usually limited to similar bacteria.
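The quoted frequency for the RuvC cut sequence (about once every 64 nucleotides) follows directly from the degeneracy of 5'-(A/T)TT(G/C)-3': a quick check, assuming random DNA with equal base frequencies.

```python
# Expected frequency of the RuvC cleavage sequence 5'-(A/T)TT(G/C)-3'
# in random DNA with equal base frequencies. Each position contributes
# (number of allowed bases) / 4 to the probability of a match.
allowed = [("A", "T"), ("T",), ("T",), ("G", "C")]

p = 1.0
for bases in allowed:
    p *= len(bases) / 4  # probability this position matches

print(p)      # 0.015625
print(1 / p)  # 64.0 -> one site per ~64 nucleotides on average
```

Real genomes have biased base composition, so the observed spacing deviates from this idealized figure.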
Studies in several species of bacteria have established that there is a log-linear decrease in recombination frequency with increasing difference in sequence between host and recipient DNA. In bacterial conjugation, where DNA is transferred between bacteria through direct cell-to-cell contact, homologous recombination helps integrate foreign DNA into the host genome via the RecBCD pathway. The RecBCD enzyme promotes recombination after the DNA is converted from single-strand DNA, the form in which it originally enters the bacterium, to double-strand DNA during replication. The RecBCD pathway is also essential for the final phase of transduction, a type of horizontal gene transfer in which DNA is transferred from one bacterium to another by a virus. Foreign bacterial DNA is sometimes misincorporated into the capsid head of bacteriophage virus particles as DNA is packaged into new bacteriophages during viral replication. When these new bacteriophages infect other bacteria, DNA from the previous host bacterium is injected into the new bacterial host as double-strand DNA. The RecBCD enzyme then incorporates this double-strand DNA into the genome of the new bacterial host. Natural bacterial transformation involves the transfer of DNA from a donor bacterium to a recipient bacterium, where both donor and recipient are ordinarily of the same species. Transformation, unlike bacterial conjugation and transduction, depends on numerous bacterial gene products that specifically interact to perform this process. Thus transformation is clearly a bacterial adaptation for DNA transfer. In order for a bacterium to bind, take up and integrate donor DNA into its resident chromosome by homologous recombination, it must first enter a special physiological state termed competence. The RecA/Rad51/DMC1 gene family plays a central role in homologous recombination during bacterial transformation, as it does during eukaryotic meiosis and mitosis.
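The log-linear relationship mentioned above (recombination frequency falling exponentially with sequence divergence between host and recipient DNA) can be written as a simple model. The decay constant below is arbitrary, chosen only so the curve is visible; it is not a measured value from any of the cited studies.

```python
import math

# Toy log-linear model: recombination frequency declines exponentially
# with percent sequence divergence between donor and recipient DNA,
# so log(frequency) is a straight line in divergence.
# The decay constant k is illustrative, not a measured parameter.

def relative_recombination_frequency(divergence_percent: float,
                                     k: float = 0.5) -> float:
    """Frequency relative to identical sequences (0% divergence -> 1.0)."""
    return math.exp(-k * divergence_percent)

for d in (0, 5, 10, 20):
    f = relative_recombination_frequency(d)
    print(f"{d:2d}% divergence -> relative frequency {f:.4f}")
```

This is why horizontal gene transfer by homologous recombination is effectively restricted to closely related bacteria: a modest rise in divergence produces a steep drop in recombination.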
For instance, the RecA protein is essential for transformation in Bacillus subtilis and Streptococcus pneumoniae, and expression of the RecA gene is induced during the development of competence for transformation in these organisms. As part of the transformation process, the RecA protein interacts with entering single-stranded DNA (ssDNA) to form RecA/ssDNA nucleofilaments that scan the resident chromosome for regions of homology and bring the entering ssDNA to the corresponding region, where strand exchange and homologous recombination occur. Thus the process of homologous recombination during bacterial transformation has fundamental similarities to homologous recombination during meiosis. Homologous recombination occurs in several groups of viruses. In DNA viruses such as herpesvirus, recombination occurs through a break-and-rejoin mechanism, as in bacteria and eukaryotes. There is also evidence for recombination in some RNA viruses, specifically positive-sense ssRNA viruses like retroviruses, picornaviruses, and coronaviruses. There is controversy over whether homologous recombination occurs in negative-sense ssRNA viruses like influenza. In RNA viruses, homologous recombination can be either precise or imprecise. In the precise type of RNA-RNA recombination, there is no difference between the two parental RNA sequences and the resulting crossover RNA region. Because of this, it is often difficult to determine the location of crossover events between two recombining RNA sequences. In imprecise RNA homologous recombination, the crossover region differs from the parental RNA sequences, owing to the addition, deletion, or other modification of nucleotides. The level of precision in crossover is controlled by the sequence context of the two recombining strands of RNA: sequences rich in adenine and uracil decrease crossover precision. Homologous recombination is important in facilitating viral evolution.
For example, if the genomes of two viruses with different disadvantageous mutations undergo recombination, then they may be able to regenerate a fully functional genome. Alternatively, if two similar viruses have infected the same host cell, homologous recombination can allow those two viruses to swap genes and thereby evolve more potent variations of themselves. When two or more viruses, each containing lethal genomic damage, infect the same host cell, the virus genomes can often pair with each other and undergo homologous recombinational repair to produce viable progeny. This process, known as multiplicity reactivation, has been studied in several bacteriophages, including phage T4. Enzymes employed in recombinational repair in phage T4 are functionally homologous to enzymes employed in bacterial and eukaryotic recombinational repair. In particular, with regard to a gene necessary for the strand exchange reaction, a key step in homologous recombinational repair, there is functional homology from viruses to humans (i.e. uvsX in phage T4; recA in E. coli and other bacteria; and rad51 and dmc1 in yeast and other eukaryotes, including humans). Multiplicity reactivation has also been demonstrated in numerous pathogenic viruses.

Effects of dysfunction

Without proper homologous recombination, chromosomes often incorrectly align for the first phase of cell division in meiosis. This causes chromosomes to fail to properly segregate, in a process called nondisjunction. In turn, nondisjunction can cause sperm and ova to have too few or too many chromosomes. Down's syndrome, which is caused by an extra copy of chromosome 21, is one of many abnormalities that result from such a failure of homologous recombination in meiosis. Deficiencies in homologous recombination have been strongly linked to cancer formation in humans.
For example, the cancer-related diseases Bloom's syndrome, Werner's syndrome and Rothmund-Thomson syndrome are each caused by malfunctioning copies of RecQ helicase genes involved in the regulation of homologous recombination: BLM, WRN and RECQL4, respectively. In the cells of Bloom's syndrome patients, who lack a working copy of the BLM protein, there is an elevated rate of homologous recombination. Experiments in mice deficient in BLM have suggested that the mutation gives rise to cancer through a loss of heterozygosity caused by increased homologous recombination. A loss of heterozygosity refers to the loss of one of two versions—or alleles—of a gene. If one of the lost alleles helps to suppress tumors, like the gene for the retinoblastoma protein for example, then the loss of heterozygosity can lead to cancer. Decreased rates of homologous recombination cause inefficient DNA repair, which can also lead to cancer. This is the case with BRCA1 and BRCA2, two similar tumor suppressor genes whose malfunctioning has been linked with considerably increased risk for breast and ovarian cancer. Cells missing BRCA1 and BRCA2 have a decreased rate of homologous recombination and increased sensitivity to ionizing radiation, suggesting that decreased homologous recombination leads to increased susceptibility to cancer. Because the only known function of BRCA2 is to help initiate homologous recombination, researchers have speculated that more detailed knowledge of BRCA2's role in homologous recombination may be the key to understanding the causes of breast and ovarian cancer. Tumours with a homologous recombination deficiency (including BRCA defects) are described as HRD-positive. While the pathways can mechanistically vary, the ability of organisms to perform homologous recombination is universally conserved across all domains of life.
Based on the similarity of their amino acid sequences, homologs of a number of proteins can be found in multiple domains of life, indicating that they evolved a long time ago and have since diverged from common ancestral proteins. Related single-stranded binding proteins that are important for homologous recombination, and many other processes, are also found in all domains of life.

The RecA recombinase family

The proteins of the RecA recombinase family are thought to be descended from a common ancestral recombinase. The family contains the RecA protein from bacteria, the Rad51 and Dmc1 proteins from eukaryotes, RadA from archaea, and the recombinase paralog proteins. Studies modeling the evolutionary relationships between the Rad51, Dmc1 and RadA proteins indicate that they are monophyletic, or that they share a common molecular ancestor. Within this protein family, Rad51 and Dmc1 are grouped together in a separate clade from RadA. One of the reasons for grouping these three proteins together is that they all possess a modified helix-turn-helix motif toward their N-terminal ends, which helps the proteins bind to DNA. An ancient gene duplication event of a eukaryotic RecA gene and subsequent mutation has been proposed as a likely origin of the modern RAD51 and DMC1 genes. The proteins generally share a long conserved region known as the RecA/Rad51 domain. Within this protein domain are two sequence motifs, the Walker A motif and the Walker B motif, which allow members of the RecA/Rad51 protein family to engage in ATP binding and ATP hydrolysis. The discovery of Dmc1 in several species of Giardia, one of the earliest protists to diverge as a eukaryote, suggests that meiotic homologous recombination—and thus meiosis itself—emerged very early in eukaryotic evolution. In addition to research on Dmc1, studies on the Spo11 protein have provided information on the origins of meiotic recombination.
Spo11, a type II topoisomerase, can initiate homologous recombination in meiosis by making targeted double-strand breaks in DNA. Phylogenetic trees based on the sequence of genes similar to SPO11 in animals, fungi, plants, protists and archaea have led scientists to believe that the version of Spo11 currently in eukaryotes emerged in the last common ancestor of eukaryotes and archaea. Many methods for introducing DNA sequences into organisms to create recombinant DNA and genetically modified organisms use the process of homologous recombination. Also called gene targeting, the method is especially common in yeast and mouse genetics. The gene targeting method in knockout mice uses mouse embryonic stem cells to deliver artificial genetic material (mostly of therapeutic interest), which represses the target gene of the mouse by the principle of homologous recombination. The mouse thereby acts as a working model to understand the effects of a specific mammalian gene. In recognition of their discovery of how homologous recombination can be used to introduce genetic modifications in mice through embryonic stem cells, Mario Capecchi, Martin Evans and Oliver Smithies were awarded the 2007 Nobel Prize in Physiology or Medicine. Advances in gene targeting technologies which hijack the homologous recombination mechanics of cells are now leading to the development of a new wave of more accurate, isogenic human disease models. These engineered human cell models are thought to more accurately reflect the genetics of human diseases than their mouse model predecessors. This is largely because mutations of interest are introduced into endogenous genes, just as they occur in real patients, and because they are based on human genomes rather than mouse genomes. Furthermore, certain technologies enable the knock-in of a particular mutation, rather than just the knock-outs associated with older gene targeting technologies.
Protein engineering with homologous recombination develops chimeric proteins by swapping fragments between two parental proteins. These techniques exploit the fact that recombination can introduce a high degree of sequence diversity while preserving a protein's ability to fold into its tertiary structure, or three-dimensional shape. This stands in contrast to other protein engineering techniques, like random point mutagenesis, in which the probability of maintaining protein function declines exponentially with increasing amino acid substitutions. The chimeras produced by recombination techniques are able to maintain their ability to fold because their swapped parental fragments are structurally and evolutionarily conserved. These recombinable "building blocks" preserve structurally important interactions like points of physical contact between different amino acids in the protein's structure. Computational methods like SCHEMA and statistical coupling analysis can be used to identify structural subunits suitable for recombination. Techniques that rely on homologous recombination have been used to engineer new proteins. In a study published in 2007, researchers were able to create chimeras of two enzymes involved in the biosynthesis of isoprenoids, a diverse class of compounds including hormones, visual pigments and certain pheromones. The chimeric proteins acquired an ability to catalyze an essential reaction in isoprenoid biosynthesis—one of the most diverse pathways of biosynthesis found in nature—that was absent in the parent proteins. Protein engineering through recombination has also produced chimeric enzymes with new function in members of a group of proteins known as the cytochrome P450 family, which in humans is involved in detoxifying foreign compounds like drugs, food additives and preservatives. 
Cancer cells with BRCA mutations have deficiencies in homologous recombination, and drugs that exploit those deficiencies have been developed and used successfully in clinical trials. Olaparib, a PARP1 inhibitor, shrank or stopped the growth of tumors from breast, ovarian and prostate cancers caused by mutations in the BRCA1 or BRCA2 genes, which are necessary for HR. When BRCA1 or BRCA2 is absent, other types of DNA repair mechanisms must compensate for the deficiency of HR, such as base-excision repair (BER) for stalled replication forks or non-homologous end joining (NHEJ) for double-strand breaks. By inhibiting BER in an HR-deficient cell, olaparib applies the concept of synthetic lethality to specifically target cancer cells. While PARP1 inhibitors represent a novel approach to cancer therapy, researchers have cautioned that they may prove insufficient for treating late-stage metastatic cancers. Cancer cells can become resistant to a PARP1 inhibitor if they undergo deletions of mutations in BRCA2, undermining the drug's synthetic lethality by restoring cancer cells' ability to repair DNA by HR.
What causes microcephaly?
Microcephaly is most often caused by genetic abnormalities that interfere with the growth of the cerebral cortex during the early months of fetal development. It is associated with Down's syndrome, chromosomal syndromes, and neurometabolic syndromes. Babies may also be born with microcephaly if, during pregnancy, their mother:
- abused drugs or alcohol,
- became infected with cytomegalovirus, rubella (German measles), or varicella (chickenpox) virus,
- was exposed to certain toxic chemicals, or
- had untreated phenylketonuria (PKU).
Babies born with microcephaly will have a smaller than normal head that fails to grow as they progress through infancy.
What are the signs and symptoms of microcephaly?
Depending on the severity of the accompanying syndrome, children with microcephaly may have:
- mental retardation,
- delayed motor functions and speech,
- facial distortions,
- dwarfism or short stature,
- difficulties with coordination and balance, and
- other brain or neurological abnormalities.
Some children with microcephaly will have normal intelligence and a head that will grow bigger, but they will track below the normal growth curves for head circumference.
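The growth-curve tracking mentioned above is usually quantified as a standard (z) score: microcephaly is commonly defined as a head circumference more than two (sometimes three) standard deviations below the mean for age and sex. A minimal sketch, using placeholder reference values rather than real growth-chart data:

```python
def head_circumference_z(measured_cm, ref_mean_cm, ref_sd_cm):
    """Standard score of a head-circumference measurement against
    age- and sex-specific reference values (placeholders here)."""
    return (measured_cm - ref_mean_cm) / ref_sd_cm

def is_microcephalic(measured_cm, ref_mean_cm, ref_sd_cm, cutoff=-2.0):
    """Flag measurements more than |cutoff| SDs below the reference mean."""
    return head_circumference_z(measured_cm, ref_mean_cm, ref_sd_cm) < cutoff

# Illustrative numbers only (not real growth-chart values):
# 31 cm against a hypothetical mean of 35 cm (SD 1.2 cm) is z = -3.3.
print(is_microcephalic(31.0, ref_mean_cm=35.0, ref_sd_cm=1.2))  # True
```

In practice clinicians read the z-score or percentile directly off standardized growth charts rather than computing it by hand.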
A glossary of tsunami-related terms.
- The distance inland that a tsunami travels.
- The Pacific Tsunami Warning Center provides international warnings for tele-tsunami to countries in the Pacific basin.
- Recession - The decline in sea level before a tsunami. Recession is a natural warning sign that a tsunami is approaching.
- Tele-tsunami or distant tsunami - A tsunami originating from a source usually more than 1000 km away.
- Tide gauge - A device that measures sea level height.
- Travel time - The time for the first tsunami wave to travel from its source to a given point on a coastline.
- Tsunami - A sequence of extremely long travelling waves, generated by large disturbances - movements of the sea floor during earthquakes, volcanic eruptions, landslides or even meteor impacts. Tsunamis travel fast, and each wave in the sequence may be separated from the next by between 15 and 60 minutes.
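The "travel time" entry can be made concrete with the standard shallow-water approximation, in which a tsunami's speed is √(gh) for ocean depth h. The depth and distance below are illustrative, and real warning centers integrate over actual bathymetry rather than assuming uniform depth:

```python
import math

def tsunami_speed(depth_m, g=9.81):
    """Shallow-water wave speed v = sqrt(g * h), valid when the
    wavelength is much longer than the water depth (true for tsunamis)."""
    return math.sqrt(g * depth_m)

def travel_time_hours(distance_km, depth_m):
    """Rough travel time over water of uniform depth (an idealization)."""
    speed_kmh = tsunami_speed(depth_m) * 3.6  # m/s -> km/h
    return distance_km / speed_kmh

# Over a 4000 m deep ocean a tsunami moves at roughly 198 m/s (~713 km/h),
# so a tele-tsunami source 5000 km away arrives in about 7 hours.
print(round(tsunami_speed(4000)))              # 198
print(round(travel_time_hours(5000, 4000), 1)) # 7.0
```

This is why a recession of the sea can precede arrival by only minutes near the source, while distant coasts may have hours of warning.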
Engaging learners is a vital component of any learning environment. Educators must design creative, challenging, and exciting activities for their learners to ensure they are motivated and engaged in their classrooms and online programs. According to Durrington, Berryhill, and Swafford (2006), learning can be effective whether students are participating in traditional face-to-face classrooms or in online programs. Educators can incorporate technology devices, software, and strategies into their curricula to help their learners become higher-order thinkers and succeed in their learning experience. Learning and instruction must provide opportunities for students to collaborate and build relationships with others, promote positive attitudes, and increase student participation. For learners to be successful, instructors must provide a clear outline of what is required and how students will be assessed within their course of study. Learners need to feel a sense of comfort, support, and respect within the learning environment so they can be successful. Instructors can provide a discussion board for students to collaborate and discuss concepts within the course and build relationships with other students. Students will become more comfortable with others and develop meaningful connections through dialogue and positive feedback from their peers. Eventually, students will begin to ask knowledgeable questions about course-related concepts and tasks, and this will foster opportunities for collaboration within the learning community. Students can also connect through email, telephone calls, and computers or smartphones using Facebook, instant messaging, Twitter, YouTube, ooVoo, iPods, and iPads.
For more strategies and tools used to engage students in a traditional face-to-face classroom and/or in distance learning programs, see the information located at http://www.redorbit.com/news/technology/433631/strategies_for_enhancing_student_interactivity_in_an_online_environment/.
Durrington, V. A., Berryhill, A., & Swafford, J. (2006). Strategies for enhancing student interactivity in an online environment. Retrieved from http://www.redorbit.com/news/technology/433631/strategies_for_enhancing_student_interactivity_in_an_online_environment/
Worm compost bins in the classroom are popular because not only are they fun, they complement science curricula and social responsibility goals. Allowing students to dispose of their lunch scraps into a worm compost bin is a great goal. Here are a few tips:
- Start new systems fairly early in the school year, or else obtain a system that has already been established.
- Lunch foods that can go in are fruits, veggies, bread crusts, rice, and noodles. Avoid meat, dairy, sauces, grease, or salty foods.
- Remember that a pound of red wiggler worms can eat about 1/2 pound of organics per day in an established bin.
- Having small groups of students take turns adding their lunch scraps works best at first. As the worms multiply, more food can be added.
- Check the moisture level weekly. The bedding should be moist but not wet.
- Red wigglers will be fine if left over school vacations — just follow our vacation care tips first!
There are complete instructions for how to start a worm composting system here.
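The half-pound-per-day rule of thumb above gives a quick way to size a classroom bin. A minimal sketch (the scrap amounts are hypothetical):

```python
def worms_needed_lb(scraps_lb_per_day, rate=0.5):
    """Pounds of red wigglers needed to keep up with daily scraps,
    assuming worms eat about half their weight per day (rate=0.5)."""
    return scraps_lb_per_day / rate

# A class adding roughly 1.5 lb of fruit and veggie scraps per day
# would need about 3 lb of worms in an established bin.
print(worms_needed_lb(1.5))  # 3.0
```

Starting with fewer worms is fine; just scale the scraps down and let the population grow into the food supply.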
Anatomy of the Spine – Upper Back, Lower Back and Neck An inside look at the structure of the back. When most people mention their back, what they are actually referring to is their spine. The spine runs from the base of your skull down the length of your back, going all the way down to your pelvis. It is composed of 33 spool-shaped bones called vertebrae, each about an inch thick and stacked one upon another. Each vertebra consists of the following parts: The body is the largest part of the vertebra and the part that bears the most weight. The lamina is the lining of the hole (spinal canal) through which the spinal cord runs. The spinous process is the bony protrusion you feel when you run your hand down your back. The transverse processes are the pairs of protrusions on either side of the vertebrae to which the back muscles attach. The facets are two pairs of protrusions where the vertebrae connect to one another, including: • The superior articular facets, which face upward • The inferior articular facets, which face downward. The connection points between the vertebrae are referred to as the facet joints, which keep the spine aligned as it moves. Similar to other joints in the body, the facet joints are lined with a smooth membrane called the synovium, which produces a viscous fluid to lubricate the joints. Located between the individual vertebrae, discs serve as cushions or shock absorbers between the bones. Each disc is about the size and shape of a flattened doughnut hole and consists of two parts: • The annulus fibrosus – a strong outer cover • The nucleus pulposus – a "jelly-like" filling. Running through the center of the spinal column is the spinal cord, a bundle of nerve cells and fibers that transmits electrical signals back and forth between the brain and the rest of the body via 31 pairs of nerve bundles that branch off the spinal cord and exit the column between the vertebrae.
Supporting the spine, while providing it flexibility, are ligaments (tough bands of connective tissue that attach bone to bone) and muscles. Two main ligaments are: • the anterior longitudinal ligament • the posterior longitudinal ligament. Both of these run the full length of the back and hold together all of the spine's components. The two main muscle groups involved in back function are: • The extensors, which include the many muscles that attach to the spine and work together to hold your back straight while enabling you to extend it. • The flexors, which attach at your lumbar spine (lower back) and enable you to bend forward. Located at the front of your body, the flexors include your abdominal and hip muscles. Although the spine is a continuous structure, it is often described as if it were five separate units, corresponding to its five sections: 1. The cervical spine – the neck and upper back, composed of the seven vertebrae closest to the skull. The cervical spine supports the weight and movement of your head and protects the nerves exiting your brain. 2. The thoracic spine – the middle back, made up of the 12 vertebrae between the cervical and lumbar spine. 3. The lumbar spine – the lower back, composed of five vertebrae, which provides support for the majority of your body's weight. 4. The sacrum – the base of the spine, composed of five vertebrae fused (joined together) as one solid unit. The sacrum attaches to the ilium of the pelvis, forming the sacroiliac joints. 5. The coccyx – the "tailbone" located below the sacrum, composed of four fused vertebrae.
In an electrical storm, the storm clouds are charged like giant capacitors in the sky. The upper portion of the cloud is positive and the lower portion is negative. How the cloud acquires this charge is still not agreed upon within the scientific community, but the following description provides one plausible explanation. In the process of the water cycle, moisture can accumulate in the atmosphere. This accumulation is what we see as a cloud. Interestingly, clouds can contain millions upon millions of water droplets and ice crystals suspended in the air. As the process of evaporation and condensation continues, these droplets collide with other moisture that is in the process of condensing as it rises. Also, the rising moisture may collide with ice or sleet that is in the process of falling to the earth or located in the lower portion of the cloud. The importance of these collisions is that electrons are knocked off of the rising moisture, thus creating a charge separation. The newly knocked-off electrons gather at the lower portion of the cloud, giving it a negative charge. The rising moisture that has just lost an electron carries a positive charge to the top of the cloud. Beyond the collisions, freezing plays an important role. As the rising moisture encounters colder temperatures in the upper cloud regions and begins to freeze, the frozen portion becomes negatively charged and the unfrozen droplets become positively charged. At this point, rising air currents have the ability to remove the positively charged droplets from the ice and carry them to the top of the cloud. The remaining frozen portion would likely fall to the lower portion of the cloud or continue on to the ground. Combining the collisions with the freezing, we can begin to understand how a cloud may acquire the extreme charge separation that is required for a lightning strike.
When there is a charge separation in a cloud, there is also an electric field associated with the separation. Like the cloud, this field is negative in the lower region and positive in the upper region. The strength or intensity of the electric field is directly related to the amount of charge buildup in the cloud. As the collisions and freezing continue to occur and the charges at the top and bottom of the cloud increase, the electric field becomes more and more intense -- so intense, in fact, that the electrons at the earth's surface are repelled deeper into the earth by the strong negative charge at the lower portion of the cloud. This repulsion of electrons causes the earth's surface to acquire a strong positive charge. All that is needed now is a conductive path for the negative cloud bottom to contact the positive earth surface. The strong electric field, being somewhat self-sufficient, creates this path. Next, we'll look at the following stage of the lightning creation process: air ionization.
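Since the article compares storm clouds to giant capacitors, the link between charge buildup and field strength can be sketched with the parallel-plate formula E = Q / (ε₀ · A). All numbers below are illustrative assumptions, not measured cloud values:

```python
EPSILON_0 = 8.854e-12  # vacuum permittivity, in F/m

def plate_field(charge_coulombs, area_m2):
    """Uniform field between two oppositely charged plates: E = Q / (eps0 * A).

    A crude stand-in for the charged cloud regions; real cloud fields
    are far less uniform than this model assumes.
    """
    return charge_coulombs / (EPSILON_0 * area_m2)

# Illustrative guess: 20 C of separated charge over a 3 km x 3 km cloud base
E = plate_field(20.0, 3000.0 * 3000.0)
print(f"{E:.2e} V/m")  # the field grows linearly with accumulated charge
```

The point of the sketch is the proportionality: doubling the separated charge doubles the field, which is why continued collisions and freezing eventually push the field high enough to ionize a conductive path.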
Geomagnetic disturbances (GMDs) are naturally occurring events that are known to negatively affect the normal operation of modern technology. One of the major causes of these disturbances is the sun. In addition to supplying the visible light that is essential for most of the life on Earth, the sun emits other forms of energy. These include sudden bursts of electromagnetic radiation associated with solar flares and high concentrations of ionized particles (plasma) known as coronal mass ejections (CMEs). As a CME nears, the charged particles begin to interact with the Earth in a number of ways, including causing observable changes in the magnetosphere and a buildup of circulating currents in the ionosphere. This buildup is the source of the southern lights (aurora australis) and northern lights (aurora borealis), which offer the only visible representation of a CME event. With the changes in the Earth's magnetic field and the added buildup of charged particles in the ionosphere, currents can be naturally induced in long conductors like the transmission lines of the electrical power grid. These naturally occurring geomagnetically induced currents (GICs) can have a devastating impact on normal system operation and can lead to full system collapse. Real World Cases:
- 03/13/1989 Hydro-Quebec
- 08/28/1859 Carrington Event
- Ancient Supernova Explosion
Planetary K Index: The K-index quantifies disturbances in the horizontal component of Earth's magnetic field with an integer in the range 0-9, with 1 being calm and 5 or more indicating a geomagnetic storm. It is derived from the maximum fluctuations of the horizontal components observed on a magnetometer during a three-hour interval.
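The three-hour K value described above is read off a quasi-logarithmic table of fluctuation thresholds. The thresholds in this sketch are roughly those used at a mid-latitude observatory such as Niemegk; actual thresholds are station-dependent, so treat the numbers as illustrative:

```python
import bisect

# Lower bounds (in nanotesla) of the maximum horizontal fluctuation
# for K = 1 through 9; station-dependent, shown here for illustration.
K_THRESHOLDS_NT = [5, 10, 20, 40, 70, 120, 200, 330, 500]

def k_index(max_fluctuation_nt):
    """Map a 3-hour maximum horizontal fluctuation (nT) to a K index of 0-9."""
    return bisect.bisect_right(K_THRESHOLDS_NT, max_fluctuation_nt)

def is_geomagnetic_storm(k):
    """A K index of 5 or more indicates a geomagnetic storm."""
    return k >= 5

print(k_index(3), k_index(85), k_index(600))  # 0 5 9
```

The quasi-logarithmic spacing is the reason each step up in K represents a much larger disturbance than the step before it, so a K of 9 (as in an extreme event like the 1989 Hydro-Quebec storm) is orders of magnitude stronger than a calm-day reading.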
Ethnic Groups of Delaware From Ancestry.com Wiki Swedish, Finnish & Dutch Prior to 1664, the area claimed by the Dutch along the west bank of the Delaware River was populated by only a few hundred people, a mix of Swedish and Finnish settlers from the period of New Sweden along with mostly Dutch soldiers and merchants, almost all within a couple of miles of the mouth of the Christina River. When the British conquered the area in 1664, most of the Dutch were either forcibly relocated (some sold as slaves in Virginia) or willingly relocated back to the settlements in New Netherland (which was renamed New York). The handful of Swedes and Finns in the area were primarily farmers who pledged their allegiance to the British government and were thus allowed to stay. Much of this history, along with post-1664 census records, is outlined in Scharf's History of Delaware (see Background Sources for Delaware). The bulk of growth in Delaware after the British takeover in 1664 came from three sources: 1) land patents sold to immigrants by William Penn; 2) Maryland landowners encouraged by the Calvert government and looking for new land to expand their plantations; and 3) Virginia emigrants also seeking new land opportunities and/or escape from the growing religious intolerance of the Virginia colony. These immigrants were overwhelmingly English and quickly became the dominant ethnicity along the west bank of the Delaware. For more information on these migratory movements, see Thomas J. Scharf's History of Delaware (see Background Sources for Delaware). As the Baptist movement grew in Great Britain and Wales during the 17th century, many of the Baptist groups came under growing persecution by the successive changes of government. A group of 17 Welsh Baptists emigrated to Philadelphia in 1701 to escape this persecution. Once arrived, they joined the existing Baptist community in Philadelphia.
However, their differences in language, culture, and religious beliefs left them estranged from the established Philadelphia Baptist community, and in 1703, their religious group purchased 30,000 acres in New Castle County, Delaware, near what is now known as Newark, Delaware, deeded by William Penn in the 1680s to three Quaker Welshmen: David Evans, William Davis, and William Willis. This area, centered around a geological outcropping known as Iron Hill in Pencader Hundred, became known as the Welsh Tract and formed the basis for the Welsh Baptist community that relocated from Philadelphia. They remained relatively isolated from the larger communities established along the Delaware River to the east of them. They also established a vibrant religious community, becoming the source of missionaries who established many of the first Baptist churches throughout the American South. More on this community and its history may be found online by searching 'Welsh Baptist' and 'Iron Hill'. A more involved depiction of this community and its history is contained in John T. Christian's History of the Baptists (Sunday School Board of the Southern Baptist Convention, 1922): Vol. II, pp. 120-126. Scots-Irish & Irish The struggles for government and religious freedom in the British Isles in the mid-17th century (referred to today as the 'Wars of the Three Kingdoms', and including the English Civil War, the execution of King Charles I, the establishment of the Commonwealth, and subsequently the English Restoration) led to the capture of many Scots-Irish and Irish Republicans by the victorious English forces as prisoners of war. As Delaware, along with most of the southern American colonies, was suffering a chronic shortage of manpower, the British government sent many of these prisoners, along with other incarcerated English - a total forced emigration estimated at around 50,000 people - to the American colonies over the next decade as indentured servants.
Others came over of their own volition to escape religious persecution and seek fresh economic opportunities. For most of the 18th century, Scots-Irish (Presbyterian) and Irish (Catholic) worked side-by-side with the African slaves and made up as much as half of the population under servitude in Delaware. Unlike the African community, who gradually found themselves enslaved for life and over multiple generations, most of the Scots-Irish and Irish eventually gained their emancipation and either assimilated into the local community or moved west into new American frontiers. A good source that describes this demographic is William H. Williams' Slavery and Freedom in Delaware, 1639-1865 (Scholarly Resources, Inc., Wilmington, DE, 1996). In addition to Williams' Slavery and Freedom in Delaware, mentioned above, chapters 29 and 30 of Reed's History (see Background Sources for Delaware) provide an overview of Delaware African Americans, and articles of interest have been published in Delaware History. See also two articles by Mary Fallon Richards: "Black Birth Records, New Castle County, Delaware, 1810–1853," National Genealogical Society Quarterly 67 (1979): 264-66, which lists the name of the African-American child and date of birth, names of parents (usually the mother), of master or mistress, and date of registration; and "Licenses to Import and Export Slaves," Delaware Genealogical Society Journal 1 (1980–81): 8-12, 30-37. Native Americans were on relatively friendly terms with the Europeans who settled Delaware for the first several decades, but were eventually pushed out as forests were cleared and marshes were drained, decimating the flora and fauna that the Native Americans relied upon for their livelihoods. Information about Delaware's Native Americans is found in at least six works. Frank Gouldsmith Speck wrote The Nanticoke and Conoy Indians (Wilmington: Historical Society of Delaware, 1927). The other five, by Clinton A.
Weslager, are entitled Delaware’s Forgotten Folk: The Story of the Moors and Nanticokes (Washington, D.C.: Library of Congress, 1970); Delaware Indians: A History (New Brunswick, N.J.: Rutgers University Press, 1972); The Delaware Indian Westward Migration (Wallingford, Pa.: Middle Atlantic Press, 1978); Red Men on the Brandywine (1953; reprint, Wilmington, Del.: Delmar News Agency, 1976); and The Delaware: A Critical Bibliography (Bloomington: Indiana University Press, 1978).