Quake (natural phenomenon)

A quake is the result when the surface of a planet, moon or star begins to shake, usually as the consequence of a sudden release of energy transmitted as seismic waves, and potentially with great violence. Types of quakes include:

An earthquake is a phenomenon that results from the sudden release of stored energy in the Earth's crust that creates seismic waves. At the Earth's surface, earthquakes may manifest themselves as a shaking or displacement of the ground and sometimes cause tsunamis, which may lead to loss of life and destruction of property. An earthquake is caused by tectonic plates (sections of the Earth's crust) getting stuck and putting the ground under strain. The strain becomes so great that rocks give way and fault lines form.

A moonquake is the lunar equivalent of an earthquake (i.e., a quake on the Moon), first discovered by the Apollo astronauts. Moonquakes are much weaker than the largest earthquakes, though they can last for up to an hour due to the lack of water to dampen seismic vibrations. Information about moonquakes comes from seismometers placed on the Moon by Apollo astronauts from 1969 through 1972. The instruments placed by the Apollo 12, 14, 15 and 16 missions functioned perfectly until they were switched off in 1977. According to NASA, there are at least four different kinds of moonquakes:
- Deep moonquakes (~700 km below the surface, probably tidal in origin)
- Meteorite impact vibrations
- Thermal moonquakes (the frigid lunar crust expands when sunlight returns after the two-week lunar night)
- Shallow moonquakes (20 or 30 kilometers below the surface)

The first three kinds of moonquakes tend to be mild; shallow moonquakes, however, can register up to 5.5 on the Richter scale. Between 1972 and 1977, 28 shallow moonquakes were observed. On Earth, quakes of magnitude 4.5 and above can cause damage to buildings and other rigid structures.

A marsquake is a quake that occurs on the planet Mars. A recent study suggests that marsquakes occur every million years, a suggestion related to recently found evidence of tectonic boundaries on Mars.

A venusquake is a quake that occurs on the planet Venus. A venusquake may have caused a new scarp and a landslide to form. An image of the landslides was taken in November 1990 during the first flight around Venus by the Magellan spacecraft. Another image was taken on July 23, 1991 as Magellan orbited Venus for the second time. Each image was 24 kilometers (14.4 miles) across and 38 kilometers (23 miles) long, and was centered at 2° south latitude and 74° east longitude. The pair of Magellan images shows a region in Aphrodite Terra, within a steeply sloping valley that is cut by many fractures (faults).

Planetquake is the generic term for quakes occurring on terrestrial planets, since current observational technology cannot penetrate to the solid core of gaseous planets.

A sunquake is a quake that occurs on the Sun. On July 9, 1996, a sunquake was produced by an X2.6 class solar flare and its corresponding coronal mass ejection. According to researchers who reported the event in Nature, this sunquake was comparable to an earthquake of magnitude 11.3 on the Richter scale.
That represents a release of energy approximately 40,000 times greater than that of the devastating 1906 San Francisco earthquake, and far greater than that of any earthquake ever recorded. It is unclear how such a relatively modest flare could have liberated sufficient energy to generate such powerful seismic waves.

A starquake is an astrophysical phenomenon that occurs when the crust of a neutron star undergoes a sudden adjustment, analogous to an earthquake on Earth. A paper published in 2003 in Scientific American by Kouveliotou, Duncan & Thompson suggests these starquakes to be the source of the giant gamma-ray flares that are produced approximately once per decade from soft gamma repeaters. Starquakes are thought to result from two different mechanisms. One is the huge stress exerted on the surface of the neutron star by twists in the ultra-strong interior magnetic fields. The second is a result of spindown: as the neutron star loses angular velocity through frame-dragging and through the bleeding off of energy due to its being a rotating magnetic dipole, the crust develops an enormous amount of stress. Once that stress exceeds a certain level, the star's shape adjusts itself to one closer to non-rotating equilibrium: a perfect sphere. The actual change is believed to be on the order of micrometers or less, and occurs in less than a millionth of a second. The largest recorded starquake was detected on December 27, 2004 from the ultracompact stellar corpse (magnetar) SGR 1806-20, which created a quake equivalent to a magnitude 32. The quake, which occurred 50,000 light years from Earth, released gamma rays equivalent to 10^37 kW in intensity. Had it occurred within a distance of 10 light years from Earth, the quake would have possibly triggered a mass extinction.

References:
- United States Geological Survey. "Earthquake Hazards Program". USGS. Retrieved 5 April 2012.
- Latham, Gary; Ewing, Maurice; Dorman, James; Lammlein, David; Press, Frank; Toksöz, Nafi; Sutton, George; Duennebier, Fred; Nakamura, Yosio (1972). "Moonquakes and lunar tectonism". Earth, Moon, and Planets. 4 (3–4): 373–382. Bibcode:1972Moon....4..373L. doi:10.1007/BF00562004.
- Yin, An (14 August 2012). "Mars Surface Made of Shifting Plates Like Earth, Study Suggests". Space.com. Retrieved 15 August 2012.
- "Solar Flare Leaves Sun Quaking". Retrieved 31 March 2012.
- Kosovichev, A. G.; Zharkova, V. V. (28 May 1998). "X-ray flare sparks quake inside Sun". Nature. 393: 317–318. Bibcode:1998Natur.393..317K. doi:10.1038/30629. Retrieved 31 March 2012.
- Kouveliotou, C.; Duncan, R. C.; Thompson, C. (February 2003). "Magnetars". Scientific American, p. 35.
- "Huge 'star-quake' rocks Milky Way". BBC News. 18 February 2005.
Saturday, January 02, 2016

Based on a technological concept originating in 1964, LPP Fusion is developing a dense plasma focus (DPF) device. No external magnetic field is required, since the method generates its own magnetic field, making it potentially much more compact than mainstream fusion technologies.

For a few millionths of a second, an intense current flows from an outer to an inner electrode through a low-pressure gas. This current starts to heat the gas, creating an intense magnetic field. This in turn generates a super-dense plasma, condensed into a tiny ball only a few thousandths of an inch across called a plasmoid. Again, all of this happens without being guided by external magnets.

The magnetic fields very quickly collapse, and these changing magnetic fields induce an electric field which causes a beam of electrons to flow in one direction and a beam of ions (atoms that have lost electrons) in the other. The electron beam heats the plasmoid to extremely high temperatures, the equivalent of billions of degrees C (particle energies of 100 keV or more). (This temperature level is orders of magnitude hotter than the core of the sun, and many times hotter than alternative fusion power technologies.)

This technology can in principle be used to produce X-rays or to generate fusion power. To create fusion power, energy can be transferred from the electrons to the ions using the magnetic field effect. Collisions of the ions with each other cause fusion reactions, which add more energy to the plasmoid. Thus, in the end, the ion beam contains more energy than was input by the original electric current. (The energy of the electron beam is dissipated inside the plasmoid to heat it.) This happens even though the plasmoid only lasts 10 ns (billionths of a second) or so, because the density in the plasmoid is close to solid density. This level of density makes nuclear collisions (and thus fusion reactions) very likely, and they occur extremely rapidly.

The ion beam of charged particles is then directed into a decelerator which acts like a particle accelerator in reverse. Instead of using electricity to accelerate charged particles, it decelerates charged particles and generates electricity. Some of this electricity is recycled to power the next fusion pulse, while the excess (net) energy is the electricity produced by the fusion power plant. Some of the X-ray energy produced by the plasmoid can also be directly converted to electricity through the photoelectric effect (as occurs in solar panels).

An interesting aspect of the DPF design is that it generates temperatures high enough to power fusion reactions in elements of higher atomic mass, which in turn holds out the promise of a shortcut to hydrogen-boron fusion, the "holy grail" of fusion power research, as this particular fusion reaction yields charged particles whose energy can be converted directly to electricity, rather than neutron radiation... and electricity is what we actually want!

Another interesting feature here is that LPP Fusion is an incorporated company (in this sense, similar to General Fusion), and thus it accepts investments from private contributors. Again, this is all "micro" scale when compared, say, to Exxon Mobil or Conoco Phillips, but at least a base has been established to which further investments can be added.
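As a quick sanity check on the temperature claim above, here is a minimal Python sketch (not LPP Fusion's code) converting a characteristic particle energy to an equivalent temperature via E = k_B * T; the 100 keV figure comes from the article:

```python
# Convert a particle energy to its equivalent temperature via E = k_B * T.
K_B = 1.380649e-23   # Boltzmann constant, J/K
EV = 1.602177e-19    # joules per electronvolt

def energy_kev_to_kelvin(energy_kev: float) -> float:
    """Temperature at which k_B * T equals the given particle energy."""
    return energy_kev * 1e3 * EV / K_B

print(f"{energy_kev_to_kelvin(100):.2e} K")  # ~1.16e9 K, about a billion kelvin
# The Sun's core is roughly 1.5e7 K, so 100 keV particles are indeed around
# two orders of magnitude hotter, consistent with the article's claim.
```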
WHAT IS THE RIGHT WHALE'S POPULATION? The Northern right whale (Eubalaena glacialis) is the rarest of all large whale species, as well as one of the rarest of all marine mammals. What is the right whale's population? It currently stands at between 300 and 350 individuals. Once numbering in the tens of thousands, the Northern right whale was hunted to near-extinction during the 19th and early 20th centuries by whalers who prized the animal for the abundance of whale oil it produced, as well as for the fact that it floated after it was killed, making it easy to harvest. (The name "right" whale came about because whalers considered it the right whale to kill.) By 1935, when right-whale hunting was banned, the population had been reduced to approximately 100 individuals. Although hunting is no longer a factor and the right whale's population has more than tripled in the three-quarters of a century since the 1935 low point, the species still faces serious survival challenges. Chief among the hazards to right whales are ship strikes and entanglements in fishing gear. More than four-fifths of all adult right whales bear scars from being struck by ship propellers or hulls. In recent years, conservationists carefully monitoring right whales from the air have had significant success in directing shipping traffic away from areas where the whales are gathering. The problem of fishing-gear entanglements has proved more difficult to solve, however, and it remains unclear whether right whales are reproducing quickly enough to replace the individuals killed during encounters with man-made hazards. With such uncertainty about reproduction versus mortality, scientists currently cannot accurately predict whether the right whale's population will continue to increase or decline over the coming decades.
Quiz: The so-called 'Greenhouse Effect' is caused primarily by: (b) carbon dioxide (CO2)

No, that is incorrect. It has not been proven that human additions to atmospheric CO2 (primarily through the burning of fossil fuels, land use change, and plant decay) are causing the Earth to warm significantly. CO2 levels and temperatures varied widely, rising and falling, long before human civilization. Carbon dioxide is actually a tiny constituent of our atmosphere, comprising less than 4/100 of 1% of all gases present (385 parts per million by volume). CO2 has been generally increasing for the last 18,000 years, coinciding with an interruption in the Ice Age and the subsequent onset of warming. Although there appears to be a relationship between increases in CO2 levels and global temperature, it has not been proven that the relatively small amounts of CO2 added by humans have raised or will raise global temperatures. More importantly, temperature changes are seen to precede CO2 changes, in complete contradiction to the fundamental assumption of the human-caused warming hypothesis. Rising global temperatures due to increased solar input may simply allow Earth's oceans to surrender more CO2 to the atmosphere, similar to a warm bottle of soda pop, which burps and fizzes when opened because cold liquid can hold more CO2 than warm liquid.

At first glance, the graphs at left appear to indicate that rises in CO2 drive increasing temperature. However, a closer look at the data (below) shows that CO2 rise lags temperature rise by about 800 years. Caillon et al. (2003, Science, 299, 1728) make the point: "This confirms that CO2 is not the forcing that initially drives the climatic system during a deglaciation [warming out of ice ages]. Rather, deglaciation is probably initiated by some insolation [solar] forcing." In fact, temperature changes precede CO2 changes in records of any time period, in contradiction to the basic assumption of the theory that human CO2 is causing temperature increase.

Did you know... 192 to 224 billion tonnes of carbon (Gt C) in the form of CO2 are emitted into the atmosphere each year. Of this total, over 95% is from natural sources. Here is the breakdown:
- Respiration (humans, animals, phytoplankton): 43.5 - 52 Gt C/year
- Ocean outgassing (tropical areas): 90 - 100 Gt C/year
- Volcanoes, soil degassing: 0.5 - 2 Gt C/year
- Soil bacteria, decomposition: 50 - 60 Gt C/year
- Forest cutting, forest fires: 0.6 - 2.6 Gt C/year
- Anthropogenic [human-caused] emissions (2005, UN estimate): 7.5 Gt C/year
TOTAL: 192 to 224 Gt C/year

Notice that the human contribution (anthropogenic emissions) is smaller than the range of the estimate for three of the natural contributions. The Global Carbon Cycle: the numbers represent the sizes of the reservoirs of carbon and the amounts moving between these reservoirs in Gt C. Most CO2 is eventually locked up in ocean sediments as phytoplankton die and the carbon in their bodies falls to the ocean floor.
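A minimal Python sketch, using only the figures stated in the breakdown above, verifying that the ranges sum to the quoted 192 to 224 Gt C/year total and computing the implied human share:

```python
# Annual CO2 sources as (low, high) ranges in Gt C/year, copied from the list above.
sources = {
    "Respiration (humans, animals, phytoplankton)": (43.5, 52.0),
    "Ocean outgassing (tropical areas)": (90.0, 100.0),
    "Volcanoes, soil degassing": (0.5, 2.0),
    "Soil bacteria, decomposition": (50.0, 60.0),
    "Forest cutting, forest fires": (0.6, 2.6),
    "Anthropogenic emissions (2005, UN estimate)": (7.5, 7.5),
}

low = sum(lo for lo, _ in sources.values())
high = sum(hi for _, hi in sources.values())
human = sources["Anthropogenic emissions (2005, UN estimate)"][0]

print(f"Total: {low:.1f} to {high:.1f} Gt C/year")          # ~192 to ~224, matching the text
print(f"Human share: {human/high:.1%} to {human/low:.1%}")  # ~3.3% to ~3.9% of the total
```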
People have been using the mighty energy of water for thousands of years. Drawing on the physical principles behind hydroelectric power stations, a group of Chinese researchers from Fudan University developed a tiny generator that can be installed inside large human blood vessels and generate electricity from the bloodstream. In 2011, Swiss scientists had already developed tiny turbines which, in theory, could be placed inside veins to produce a small amount of electricity. However, they had a serious drawback: as an artificial obstacle, the microturbines could provoke the formation of blood clots. The Chinese scientists' development is a new type of electrical device that generates electricity by means of an ordered array of carbon nanotubes wrapped around a polymer core. The device was named the "fiber-like fluidic nanogenerator" (FFNG). While it is unclear how much energy it can produce, according to its creators the FFNG will have an energy conversion efficiency of about 20%. The scientists have already conducted successful experiments on frogs. In the future, the FFNG could become a power source for medical devices implanted in the human body.
Cecropia moths (Hyalophora cecropia) belong to the Lepidoptera family Saturniidae. This family contains royal moths and giant silk moths. Almost all species are large; the cecropia is the largest moth in North America. Their life cycle lasts approximately one year; the female lays her eggs and dies shortly after. The best time to spot these winged creatures is during late spring and early summer, when the adults are active.

The conspicuous cecropia adults have brilliantly patterned wings and bodies. Their colors and patterns aside, their large size makes them favorites of collectors and nature enthusiasts. Reaching approximately 5 to 6 inches, the cecropia is the largest silkmoth in the United States. Their wings have a frosted, blackish-reddish-brown color with cream, red and black accents in the form of stripes and spots. The thick larvae are equally impressive, reaching mature lengths of approximately 4 inches. Bright blue and yellow tubercles covered in black spines adorn their pale-to-lime-green bodies.

Although they live only five or six days, the adults capture the eye of anyone lucky enough to see one at night. The adults emerge from their cocoons on spring mornings, May through June throughout much of their range. Their first day as winged adults is spent clinging to a branch as they allow their bodies to dry and pump fluid into their wings to expand them. The females remain perched, being egg-laden, and send out a pheromone to attract males just before dawn. After mating, the male will hide during the day and continue seeking females just before dawn. The female, however, will almost immediately begin laying eggs in small clutches. She diligently spreads out the small clutches of eggs to help minimize the risk of competition among larvae. Once she's done, she'll have laid about 350 eggs.

The larvae hatch from the cream-colored eggs as 1/4-inch black caterpillars. The larval stage typically has five instars (periods of postembryonic growth) lasting one to two weeks each; the whole larval stage can last one to two months. During the second instar, the black larvae begin developing colorful bodies, although the colors can vary. The larvae display their bold color patterns during the last three instars. The caterpillars feed voraciously throughout all instars, beginning with the first. In late summer, the large, green caterpillars begin constructing their cocoon, a small, brownish-gray sac they attach to a twig. The caterpillar will begin pupation in this cocoon and overwinter in this protective case.

In general, cecropia larvae prefer deciduous trees as their host plants. Fruit trees are a favorite of these brightly colored caterpillars; apple, cherry and plum are all preferred host plants. Other favorites include birch, black cherry and white oak. Their habitats vary throughout their range, from deeply wooded areas to suburban landscapes.
The combination of horizontal drilling and hydrofracking has brought substantial benefits in the past decade. The surge in domestic production of shale oil has reduced US dependence on imports, while generating electricity by burning natural gas from shale produces only half the CO2 emissions of coal. Yet there is much room for improvement in the efficiency and safety of unconventional production. In particular, only a small fraction of the hydrocarbons trapped in a given volume of shale is produced at present. Improving the yield would produce more energy with less environmental impact. In addition to such practical considerations, there are interesting scientific questions to pursue. Among these is the observation that only a tiny fraction (~0.01%) of the deformation accompanying hydrofracking releases seismic energy; the rest is silent.
Presentation on theme: "Anatomy of a Quadratic Function. Quadratic Form: Any function that can be written in the form ax² + bx + c, where a is not equal to zero. You have already..."— Presentation transcript:

Warm Up! Complete this problem at the bottom of your sheet: Solve 4x² + 5 = 20.

Solving using the Calculator: Quadratic equations can have more than one solution, because the square root of a number can be positive or negative. They can also have no solutions, or just one.

So how do I know if I am right? Use your calculator. Solve so the entire equation is set equal to 0. Go to y= on your calculator. Plug the equation into y1. Look for the x-intercepts of the graph. Use the Solve key to find values.

Pythagorean Theorem: a² + b² = c². Works only for right triangles. What is a right triangle?
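As a worked example (a by-hand solution of the warm-up problem, which students can then verify on the calculator as the slides describe):

$$4x^2 + 5 = 20 \;\Rightarrow\; 4x^2 = 15 \;\Rightarrow\; x^2 = \frac{15}{4} \;\Rightarrow\; x = \pm\frac{\sqrt{15}}{2} \approx \pm 1.94$$

The two roots illustrate the slide's point that taking a square root yields both a positive and a negative value; graphing y = 4x² − 15 on the calculator shows x-intercepts at these same two values.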
Free Online 4th Grade Reading Comprehension Worksheets – Each worksheet has been specifically created to help learners improve their reading comprehension. Many worksheets contain a set reading passage followed by a series of questions. Some worksheets focus on a specific reading technique (inference, comparison, cause and effect, creating mental images, and numerous others). Each of these strategies can be taught in a separate lesson depending on the needs of the student. While you can learn the basics of comprehension in class through direct instruction, worksheets are more effective and motivating because they actively engage students.

Free Online 4th Grade Reading Comprehension Worksheets

Reading comprehension worksheets are available in various formats, both printable and audio. The internet offers free printable versions, and audio versions can be purchased. Printed worksheets are an excellent reference for students at all levels. They offer a variety of exercises to gauge how students are progressing. They can be used at any point, but they are most useful as part of a review or at the beginning of a reading course. It is also possible to create worksheets specifically for reading comprehension practice, which helps students build confidence in their reading comprehension abilities. This is especially important when using reading comprehension worksheets with young children, to help them overcome reading anxiety. Students become less enthusiastic when they are unsure of the answer to a question, and worksheets make it easy to build reading confidence in children.

These worksheets include a series of questions that test reading comprehension. Students who lack critical thinking skills often find them useful. Students will often use reading comprehension worksheets to practice answering questions based on what they have read, because they are familiar with the format. Teachers may reward students according to how many worksheets they have completed successfully; this motivates them and helps them improve their comprehension. Following a prescribed reading comprehension program helps them reach their goal.

Reading comprehension worksheets are also a great way to test competencies. Successful completion of these tests may result in students being awarded certificates, and advanced learners may take the Certified Reading Specialist (CRS) examination to gain advanced certification. The internet offers many resources for those looking to enhance their reading abilities, including practice pages and sample tests. Online learning is becoming increasingly popular with those who want to improve their school grades. Students can study at their own pace from the comfort of their own homes, with no set time limits or schedules to follow. Students can work through reading comprehension worksheets at the pace that suits them best to increase their reading scores.
A blueberry bush uses energy from the Sun to make carbohydrates. Which set of energy transformations BEST describes this process? Light energy, mostly from sunlight, is the main requirement for photosynthesis to occur. Photosynthesis is a process wherein light energy is converted into chemical energy. Organisms that can carry out this process are autotrophs: they perform photosynthesis by gathering energy from the sun together with water and carbon dioxide in order to create chemical energy. The transfer of this energy to other organisms then takes place through the food chain and food web.
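The overall photosynthesis equation makes the energy transformation explicit; the light energy captured ends up stored as chemical energy in the bonds of glucose:

$$6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + \text{light energy} \;\longrightarrow\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}$$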
Top of the morning (Sharecropping): Sharecropping in the U.S. became a widespread response to the economic upheaval caused by the end of slavery during and after the Reconstruction period. Sharecropping was a way for a freedman to work the land and be paid for it. The landowner provided land, housing (a shack), tools and seed, and perhaps even a mule. At harvest time the freedman/sharecropper received a share of the crop (from one-third to one-half), which paid off his debt to the landowner. A disadvantage of sharecropping was a new credit system known as the CROP LIEN. Under this system, a landowner extended a line of credit to the sharecropper while taking the year's crops as collateral. The sharecropper could then draw food and supplies all year long. When the crop was harvested, the landowner who held the lien sold the harvest for the cropper and settled the debt, sometimes taking more than his fair share and leaving the sharecropper with little to nothing. #topofthemorning #sharecropping #lien #blackhistory #americanhistory
From hairy-chested yeti crabs to the deepest known vent fields, hydrothermal vents have been enjoying a bit of science celebrity in the last few weeks. Beneath the headlines, there has been an eruption of vent-related research published in the scientific literature, and some exciting new expeditions have just left port. The exhaustive author list on this paper reads like a who's who of hydrothermal vent biogeography. This is the paper that introduced "the Hoff" crab to the world, but its findings are far more significant. Hydrothermal vent systems are sorted into biogeographic provinces, with different regions supporting different communities. The iconic giant tube worms dominate the eastern Pacific, while the western Pacific (prominently featured in Deep Fried Sea) plays host to fist-sized snails, and the Atlantic features shrimp as its dominant species. There are several gaps in our understanding of how these qualitatively different communities are connected – the Southern Ocean, the south Atlantic, the Indian Ocean, and the Cayman Trough, among others. Filling in these gaps in our knowledge can help us understand the history and evolution of hydrothermal vent ecosystems.
Romans 8 Latin Workbook

The Romans 8 Latin & Greek Workbook is a Latin, Greek, and English parallel printable guide to studying Romans 8 verse by verse to gain a deeper understanding of the language. The Romans 8 Latin, Greek, and English Workbook PDF is a downloadable file with the Vulgate, SBL Greek New Testament, and KJV of the Bible side by side. Students can learn Latin as they observe the patterns and compare them to the English translation. Students walk through the passage with these Bible study strategies:
1. What do you see?
2. Highlight the nouns in one color.
3. Highlight the verbs in a different color.
4. What words in Latin remind you of words in English?
5. Look for repeating words.
6. Parse several words in the passage.
7. Don't despise basic observations!
Looking out at our Universe today, we not only see a huge variety of stars and galaxies both nearby and far away, we also see a curious relationship: the farther away a distant galaxy is, the faster it appears to move away from us. In cosmic terms, the Universe is expanding, with all the galaxies and clusters of galaxies getting more distant from one another over time. In the past, therefore, the Universe was hotter, denser, and everything in it was closer together. If we extrapolate back as far as possible, we'd come to a time before the first galaxies formed; before the first stars ignited; before neutral atoms or atomic nuclei or even stable matter could exist. The earliest moment at which we can describe our Universe as hot, dense, and uniformly full-of-stuff is known as the Big Bang. Here's how it first began.

Some of you are going to read that last sentence and be confused. You might ask, "isn't the Big Bang the birth of time and space?" Sure; that's how it was originally conceived. Take something that's expanding and of a certain size and age today, and you can go back to a time where it was arbitrarily small and dense. When you get down to a single point, you'll create a singularity: the birth of space and time. Only, there's a ton of evidence that points to a non-singular origin to our Universe. We never achieved those arbitrarily high temperatures; there's a cutoff. Instead, our Universe is best described by an inflationary period that occurred prior to the Big Bang, and the Big Bang is the aftermath of what occurred at the end of inflation. Let's walk through what that looked like.

During inflation, the Universe is completely empty. There are no particles, no matter, no photons; just empty space itself. That empty space has a huge amount of energy in it, with the exact amount of energy slightly fluctuating over time. Those fluctuations get stretched to larger scales, while new, small-scale fluctuations are created on top of that. (We described what the Universe looked like during inflation previously.) This continues as long as inflation goes on. But inflation will come to an end randomly, and not in all locations at once. In fact, if you lived in an inflating Universe, you'd likely see inflation come to an end in a nearby region while the space between you and it expanded exponentially. For a brief instant, you'd see what happens at the start of a Big Bang before that region disappeared from view.

In an initially, relatively small region, perhaps no bigger than a soccer ball but perhaps much larger, the energy inherent to space gets converted into matter and radiation. The conversion process is relatively fast, taking approximately 10^-33 seconds or so, but is not instantaneous. As the energy bound up in space itself gets converted into particles, antiparticles, photons and more, the temperature starts to rapidly rise. Because the amount of energy that gets converted is so large, everything will be moving close to the speed of light; it will all behave as radiation, whether the particles are massless or massive. This conversion process is known as reheating, and it signifies when inflation comes to an end and the stage known as the hot Big Bang begins. In terms of the expansion speed, you'll witness a tremendous change. In an inflationary Universe, space expands exponentially, with more distant regions accelerating away as time goes on.
But when inflation ends, the Universe reheats, and the hot Big Bang starts, more distant regions will recede from you more slowly as time goes on. From an outside perspective, the part of the Universe where inflation ends sees the expansion rate there drop, while the inflating regions surrounding it see no such drop.

Probability-wise, it's extremely likely that from the perspective of whatever region of inflating space you're in prior to the Big Bang, you'll see inflation end in nearby regions many times. These locations where inflation ends will quickly fill with matter, antimatter, and radiation, and expand more slowly than the still-inflating regions do. These regions will expand away from all the other locations where inflation still goes on exponentially, meaning they will very quickly recede from view. In the standard inflationary picture, because of this expansion rate change, there's virtually no chance that any two Universes, where separate hot Big Bangs occur, will ever collide or interact.

Finally, the region where we will come to live gets cosmically lucky, and inflation comes to an end for us. The energy that was inherent to space itself gets converted to a hot, dense, and almost uniform sea of particles. The only imperfections, and the only departures from uniformity, correspond to the quantum fluctuations that existed (and were stretched across the Universe) during inflation. The positive fluctuations correspond to initially overdense regions, while the negative fluctuations get converted into initially underdense regions.

We cannot observe these density fluctuations, today, as they were when the Universe first underwent the hot Big Bang. There are no visual signatures we can access from that early on; the first ones we've ever accessed come from 380,000 years later, after they've undergone countless interactions. Even at that, we can extrapolate back what the initial density fluctuations were, and we find something extremely consistent with the story of cosmic inflation. The temperature fluctuations that are imprinted on the first picture of the Universe — the cosmic microwave background — give us confirmation of how the Big Bang began.

What might be observable to us, however, are the gravitational waves left over from the end of inflation and the start of the hot Big Bang. The gravitational waves that inflation generates move at the speed of light in all directions, but unlike the visual signatures, no interactions can slow them down. They will arrive continuously, from all directions, passing through our bodies and our detectors. All we need to do, if we want to understand how our Universe got its start, is find a way to observe these waves either directly or indirectly. While many ideas and experiments abound, none have returned a successful detection so far.

Once inflation comes to an end, and all the energy that was inherent to space itself gets converted into particles, antiparticles, photons, etc., all the Universe can do is expand and cool. Everything smashes into one another, sometimes creating new particle/antiparticle pairs, sometimes annihilating pairs back into photons or other particles, but always dropping in energy as the Universe expands. The Universe never reaches infinitely high temperatures or densities, but still attains energies that are perhaps a trillion times greater than anything the LHC can ever produce. The tiny seed overdensities and underdensities will eventually grow into the cosmic web of stars and galaxies that exist today.
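A compact way to see this change in expansion behavior (a standard cosmology result, not specific to this article) is to compare how the scale factor a(t) grows in each era:

$$a_{\text{inflation}}(t) \propto e^{Ht} \qquad \text{vs.} \qquad a_{\text{radiation}}(t) \propto t^{1/2}$$

During inflation the Hubble rate H = ȧ/a stays constant, so expansion is exponential; after reheating, H falls off with time. A region where inflation has ended is therefore rapidly outrun, and causally cut off, by the still-inflating regions around it, which is why separate hot Big Bangs never interact in this picture.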
13.8 billion years ago, the Universe as-we-know-it had its beginning. The rest is our cosmic history.
Enabled by the JGI’s Community Science Program, a team led by University of British Columbia researchers tracked the impact of a large-scale heatwave event in the ocean known as “The Blob.” The microbial communities in the ocean drive the biological pump that takes carbon from the atmosphere and keeps it in the deep ocean. With genomic samples collected before, during and after The Blob, the researchers developed a preliminary model of how marine microbial communities are affected by warming events. They recently shared their findings in Communications Biology. The work underscores the value of large-scale research collaborations and of conducting even more time-series studies. Learn more here on the JGI website.
If you have difficulties distinguishing certain colors, you might have a form of color blindness. But does having this vision condition actually mean you can't see color at all? Here we learn more about how our eyes perceive color, and why some people see colors much better than others.

Seeing in Color

Our eyes work with our brains to translate light into color. Your eye has two important parts when it comes to color perception. At the front of your eye is the lens. Light enters through the lens, which sends it to the retina, a thin layer of tissue at the back of the eye covered with millions of light-sensitive nerve cells: rods and cones. Each of these types of nerve cells has its own important job when it comes to seeing. Rods are highly concentrated around the edge of the retina and transmit black-and-white information to our brains. Rods are responsible for helping us see in dim light and for our peripheral vision. Cones are more concentrated in the center of the retina, and because they contain pigments that send information about color to the brain, it's our cones that are in charge of what colors we see.

Problems seeing color occur when there's a problem with these pigments (also called photopigments). When the cones contain all the pigments, eyes see all colors, but if one or more pigments aren't present, you'll have trouble seeing certain colors. This is because those cones have a reduced sensitivity to the wavelengths produced by certain light "colors." Interestingly, the color of an object isn't in that object at all; instead, the object's surface reflects some colors and absorbs the others. We see the reflected colors. For example, a blade of grass isn't actually green; its surface reflects the wavelengths our brain perceives as green.

Three Types of Color Blindness: The Basics

Even though it's called "blindness," it's actually not blindness at all. A more accurate term is "color vision deficiency." In other words, individuals who are color blind tend to see colors as "washed out," or they confuse one color, such as green, with another color, such as red. Which colors they see depends upon which cones have faulty pigments.

Red-green color blindness

People with this type of color vision deficiency have reduced sensitivity to red light (called protanomaly), green light (deuteranomaly), or both. They may perceive a world of muddy greens dotted with brighter blues and yellows. They likely confuse browns, oranges, and shades of red and green. They also might have a hard time identifying pale shades of colors, and may confuse blues with purples. This is the most common inherited kind of color blindness, affecting 8% of men and 1% of women.

Blue-yellow color blindness

These individuals have reduced sensitivity to blue light (called tritanomaly), and so their world is punctuated by reds, pinks, blacks, whites, grays, and turquoises. They may confuse blues and yellows, violets and reds, and blues and greens, as well as light blues with grays, dark purples with black, greens with blues, and oranges with reds.

Complete color blindness

People who have this form of color blindness see no color at all, and instead see everything in different shades of gray. This black-and-white TV view of the world is also known as monochromatic vision or monochromacy, and is extremely rare.

Who Is At Risk of Being Color Blind?

Color blindness is often hereditary, so if the condition runs in your family, you're more likely to have it.
But there are other things that can put you at risk, including physical or chemical damage to your eye or the parts of the brain that process color information, certain eye diseases (such as glaucoma and age-related macular degeneration), or other health conditions (like diabetes and multiple sclerosis). Likewise, certain medications may increase your risk. Caucasian men are more at risk for color blindness than others.

Is There a Cure for Color Blindness?

While there isn't a cure, most people adjust and don't have trouble living with color blindness in daily life, especially if they are born with it. If color blindness does impact your job or cause issues with everyday tasks, your eye doctor can prescribe special glasses and contact lenses that may help you tell the difference between colors. Phone apps and other technology can also help those with color blindness improve their color awareness. If your color blindness is a result of a medication you're taking or an underlying health condition, your doctor can help you identify the issue and work with you to solve the root of the problem.
A time signature is made up of two numbers, one on top of the other, and looks a bit like a fraction. Moreover, the notes from a scale form melodies and harmonies. Extended chords create a richer, more harmonically complex sound than basic major and minor triads. This saves you time, allowing you to concentrate on teaching music. And all you need is an easy-to-understand intro to music theory. The simplest written form of music for the guitarist is a chord chart, like this: F C7 F, "Hey, Jude, don't make it bad, take a sad song and make it better." When we are talking only about the notes in a given scale, we are talking about diatonic intervals or harmony.

by Roy Maxwell | English | May 27, 2020 | ISBN: N/A | ASIN: B089CZYT3S | 149 pages | EPUB | 1.17 MB

The term "music theory" might scare you. The best interactive online course for beginners is Ricci Adams' MusicTheory.net. Because the key of a song determines the scale that the song uses for its melody, the key also determines the chords used for harmony. The chords and chord progressions in a piece of music support or complement the melody. The chromatic scale is simply the name given to all the notes used in Western music, showing how the notes are organized by pitch. Music theory is a practice musicians use to understand and communicate the language of music. The more notes a chord has, the more possible inversions. But it's also essential to remember that music theory is not hard rules. Chord inversions add variation, excitement, and smoother transitions in chord progressions. There are twelve possible natural major scales. Mentorships with industry professionals let you access real-world insights and help you personalize your music education. Transposing the bottom note in a chord to the next octave creates an inversion. Chords are built off a single starting note called the root. Guitar theory can be a daunting prospect for many guitarists. However, you can benefit from learning some aspects of music theory. If Western music has a structure, that structure is the major scale. For example, a C diminished triad has the notes C-E♭-G♭. When learning music theory for beginners, it's important that we talk about rhythm. Practice exercises: name all the notes going up the neck on one string; play a major scale up one string starting on the open string, then on the third fret; pick a note on one string and find an interval from that note on the next string. The major scale is the universe of notes that makes up at least 75 percent of the melody of a song and, as you'll see later, the majority of the chords or harmony of the song. However, the two main types are the major scale and the minor scale. What is music theory? Are you ready to start your musical journey? There are twelve key signatures, each derived from the twelve available notes. 16th March 2018. The advanced theory that students will want to pursue after mastering the basics will vary greatly. Music Theory for Beginners: Discover How to Read Music at Any Age and Start Having Fun With Your Guitar, Piano, or Any Other Instrument. Harmony is when multiple notes or voices play simultaneously to produce a new sound. You know that in any song, some notes are higher and some are lower. The multiple voices that make up a choir blend to make a harmonious sound.
Related reading: The Fundamentals of Music Theory (Music Theory for Dummies); Basic Music Theory: Learn the Circle of Fifths; Learn How to Improve Your Music with Music Modes; Learn How to Improve Progressions with Chord Inversions.

Try applying the concepts in this guide to your workflow. Music is a language. The Music Scale. What's the difference between sharps and flats? This booklet is a complete manual on music theory: a guide for beginners, intermediate students, and even advanced learners. Many music teachers and composers have put up beginner music theory lessons online. These melodies or voices work together to create pleasant-sounding harmonies. They have two or more notes in a sequence that sound musically pleasing. Chords are the harmonious building blocks of music. This section describes all the available notes and the specific relationships between them. Understanding Basic Music Theory is a comprehensive insight into the fundamental notions of music theory: music notation, rules of harmony, ear training, etc.
Sequential Spelling (revised edition by Wave 3 Learning)

This well-organized program that teaches spelling by word families has a very satisfying appeal. Imagine lists of words that are organized by word families but that go well beyond a simple vowel family (e.g., c-at, b-at, etc.) right from the start. Although the first day's word list includes only four words (in, pin, sin, spin), by the third day the "in" list has expanded to pinned, skins, twins (and other "in" words), started on the "e" family (e.g., b-e, sh-e), and even includes a crossover word (begin). By building from the easier words of a family to important power words, the program builds self-confidence. Traditional spelling programs introduce words "vocabularily" - in other words, when the child is likely to encounter the word in reading, or based on a chosen theme. As a result, word sequences are odd and incomplete. In Sequential Spelling (Revised), the phonics necessary for decoding is presented through the back door, so to speak.

Every learning channel is employed with this program. Here's the process: a word is given verbally and used in a sentence (audio); the student attempts to spell the word (kinesthetic); the correct spelling is given using colored markers on a white board to differentiate between family and other letters (oral interaction and visual); students correct their own spelling (kinesthetic). Eager learners and definitive results are produced by having students correct their own mistakes when they make them - not hours, days, or even weeks later - and by creating a positive learning environment that treats mistakes as opportunities to learn. Tests are used as learning devices, not as a method of evaluation. If you feel compelled to give grades, written tests (reproducible) are available after the 40th, 80th, 120th, 160th, and 180th days.

The Teacher Books (Revised) for each level hold introductory teaching information and an overview of the approach. Next follow the 180 word lists, which include sentences for the homophones (same pronunciation, different spelling, e.g., bare and bear) and heteronyms (same spelling, different word and different pronunciation, e.g., bow your head vs. bow and arrow). The first several days of lessons are laid out in detail - completely scripted. In addition to the teaching process, a positive can-do attitude is modeled in these lessons. After the eighth day, the process continues as established. In the lists, common words appear in bold type. Homophones, heteronyms, and words that do not follow the normal pattern (like "gyp") are all marked. Review and repetition are built in as you progress through the days (lists). Reproducible test forms are provided along with an answer key. The teacher book includes answers to the tests and student activities in levels 1-4, and only the test answers in levels 5-7.

Volume levels are progressive but do not really conform to grade levels. For instance, the ending lessons of Level 1 include words like breathless, hedging, horrifying, and basically, which would never be seen in a first- or second-grade spelling book. Because they are introduced as parts of word families, they become doable for the early grades. This also means that an older child starting at Level 1 doesn't feel like he's way behind. (By the way, the parent is given complete freedom to drop some words from the lists if they feel it will be preferable for their child.)
So, the bottom line is that you can start any grade-level child at Level 1 and proceed through the books in order. It is recommended that children be reading at a second-grade level before beginning Level 1, so children in first grade may or may not be ready to begin, depending on their reading skill. Older children may or may not need to start with Level 1, so you may want to check out the placement test available on our website.

The Student Workbooks (Revised) hold pages with blank spaces for each day's spelling lesson and an additional activity on the following page that uses words from the lesson. Activities vary, including using words from the lesson in a sentence, unscrambling words, filling in blanks, writing the definition of unfamiliar words, and listing words that contain a particular word family (like 'ake'). Student Workbooks are available for Levels 1-5. The Student Response Book provides the writing space for the daily word lists and can be used with any level of the program, although the Student Activity Books at the early levels provide more review and repetition.

At first glance, these books might make you scratch your head. The column for the 1st day's words is in the middle of page 3 (with the 61st day on the left and the 121st day on the right). We don't see the 2nd day's column until page 5. This peculiar arrangement is designed to prevent the child from copying words and/or word-family parts from one day to the next, a tendency which gets in the way of truly learning the pattern. Once you figure out the system, it makes perfect sense. Although a student will need one book per level, the response book is not level-specific. We offer Level Sets that include both Teacher Book and Student Book for each level, and also sets which include the Teacher Book and the Student Response Book. This is a well-organized program that offers a multi-sensory approach, making it perfect for many students. ~ Janice
The study, published June 5 in PLoS Computational Biology, explains how bats use echolocation for more than just spatial knowledge; it might also help explain how some bats travel at high speed, at night, in formation, without interfering with each other. The researchers first tested the ability of four greater mouse-eared bats (Myotis myotis) from Bulgaria (sorry, Dracula fans, not Romania) to distinguish between the echolocation calls of other bats. After observing that the bats learned to discriminate the voices of other bats within two to three weeks, they programmed a computer model that reproduces the recognition behavior of the bats. Analysis of the model suggests that the spectral energy distribution in the signals contains individual-specific information that allows one bat to recognize another. Animals must recognize each other in order to engage in social behaviour. Vocal communication signals are helpful for recognizing individuals, especially in nocturnal organisms such as bats. Little is known about how bats perform demanding social tasks, such as remaining in a group when flying at high speeds in darkness, or avoiding interference between echolocation calls. The finding that bats can recognize other bats within their own species based on their echolocation calls may therefore have some significant implications.

CITATION: Yovel Y, Melcon ML, Franz MO, Denzinger A, Schnitzler H-U (2009) The Voice of Bats: How Greater Mouse-eared Bats Recognize Individuals Based on Their Echolocation Calls. PLoS Comput Biol 5(6): e1000400. doi:10.1371/journal.pcbi.1000400
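To make the "spectral energy distribution" idea concrete, here is a toy Python sketch (emphatically not the authors' model): it summarizes each call by the fraction of its energy in a handful of frequency bands, then identifies a caller by nearest-centroid matching. All signals, names, and parameters here are invented for illustration.

```python
import numpy as np

def band_energies(signal, n_bands=8):
    """Fraction of the signal's energy falling in each frequency band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spectrum, n_bands)
    energies = np.array([band.sum() for band in bands])
    return energies / energies.sum()

def train_centroids(calls_by_bat):
    """Average band-energy profile per individual."""
    return {bat: np.mean([band_energies(c) for c in calls], axis=0)
            for bat, calls in calls_by_bat.items()}

def identify(call, centroids):
    """Return the individual whose centroid profile is closest to this call."""
    profile = band_energies(call)
    return min(centroids, key=lambda bat: np.linalg.norm(profile - centroids[bat]))

# Synthetic demo: two "bats" whose calls differ in dominant frequency.
rate, t = 250_000, np.linspace(0, 0.005, 1250)   # 5 ms calls sampled at 250 kHz
make_call = lambda f: np.sin(2 * np.pi * f * t) + 0.1 * np.random.randn(t.size)
training = {"bat_A": [make_call(30_000) for _ in range(5)],
            "bat_B": [make_call(60_000) for _ in range(5)]}
centroids = train_centroids(training)
print(identify(make_call(30_000), centroids))    # expect "bat_A"
```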
Given an array of N numbers, we wish to choose a contiguous sub-sequence (subarray) of the array so that the bitwise XOR of all chosen numbers is maximum. You have to print the maximum XOR possible for such a contiguous sub-sequence.

Input: The first line will contain an integer N, denoting the size of the array A. The next line will contain N space-separated integers denoting the elements of the array.

Constraints: 2 <= N <= 10^4. A contains 32-bit integers.

Output: Print the maximum XOR value possible for any contiguous sub-sequence.

Sample Test Case 1:
2 3 5
Subarrays of size 1: maximum is 5.
Subarrays of size 2: XOR(2, 3) = 1, XOR(3, 5) = 6.
Subarray of size 3: XOR(2, 3, 5) = 4.
Maximum XOR = 6
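A sketch of one standard approach (assuming the 32-bit integers are non-negative): the XOR of subarray arr[j+1..i] equals prefix[i] ^ prefix[j], so the problem reduces to finding, for each prefix XOR, its best partner among earlier prefixes. A binary trie answers that in O(bits) per query, giving O(N * 32) overall instead of the O(N^2) pairwise scan.

```python
class TrieNode:
    __slots__ = ("children",)
    def __init__(self):
        self.children = [None, None]  # one slot per bit value 0/1

def max_subarray_xor(arr, bits=32):
    root = TrieNode()

    def insert(value):
        node = root
        for i in range(bits - 1, -1, -1):
            b = (value >> i) & 1
            if node.children[b] is None:
                node.children[b] = TrieNode()
            node = node.children[b]

    def best_xor(value):
        # Walk the trie, greedily choosing the opposite bit when possible.
        node, result = root, 0
        for i in range(bits - 1, -1, -1):
            b = (value >> i) & 1
            if node.children[1 - b] is not None:
                result |= (1 << i)
                node = node.children[1 - b]
            else:
                node = node.children[b]
        return result

    insert(0)            # empty prefix, so subarrays starting at index 0 count
    prefix, answer = 0, 0
    for x in arr:
        prefix ^= x      # prefix XOR up to the current element
        answer = max(answer, best_xor(prefix))
        insert(prefix)
    return answer

print(max_subarray_xor([2, 3, 5]))  # 6, matching the sample test case
```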
Practical Strategies for Implementing the Pyramid Model

The Pyramid Model comprises practices that are implemented by teachers and families. Below are ideas, resources, and illustrations of strategies that might be used to implement Pyramid Model practices and promote young children's social and emotional competence.

Scripted Stories for Social Situations

Scripted Stories for Social Situations help children understand social interactions, situations, expectations, social cues, the script of unfamiliar activities, and/or social rules. As the title implies, they are brief descriptive stories that provide information regarding a social situation. When children are given information that helps them understand the expectations of a situation, their problem behavior within that situation is reduced or minimized.

Tools for Working on Building Relationships

These easy-to-use guides were created especially for teachers/caregivers and parents to provide hands-on ways to embed social-emotional skill-building activities into everyday routines. Each book nook comprises ideas and activities designed around popular children's books such as Big Al, Hands Are Not for Hitting, On Monday When It Rained and My Many Colored Days. Examples of suggested activities include using rhymes to talk about being friends, making emotion masks to help children identify and talk about different feelings, playing games about what to do with hands instead of hitting, and fun music and movement activities to express emotions.
Planet V (the "V" being the Roman numeral for "five") is the name assigned to a theoretical planet once thought to have existed in the early Solar System between Mars and the Asteroid Belt. It was proposed as a possible explanation for an event called the "Late Heavy Bombardment," and it no longer exists today as it eventually plunged into the Sun. Let's explore this idea in more detail...

Again, this event is hypothetical, too. It is thought to have happened some four billion years ago, very early in the life of the Solar System. At this time, Mercury, Venus, Earth, our Moon and Mars experienced far more collisions than usual from objects originating in the Asteroid Belt. This event was christened the Late Heavy Bombardment, or LHB. Some scientists also refer to it as the Lunar Cataclysm. This theory arose primarily from examination of rocks brought back from the Moon by the Apollo astronauts. Analysis suggested that the rocks were melted by collisions all within a relatively narrow time band. The fact that an event which affected the four inner planets was theorised based on such a tiny sample of rocks caused some scientists to be very sceptical that it ever happened. However, assuming it did, there were several theories put forward to explain its cause, and the subject of this page is one of them.

So called because, if it had existed, it would have been the fifth planet from the Sun, Planet V is proposed to have been positioned near the Asteroid Belt, with the potential to severely disrupt the orbits of some of the asteroids, sending them tumbling into the inner Solar System. When formed (at the same time as the other four inner planets) it would have been around 170,000,000 miles out from the Sun, following a fairly stable orbit. However, over the course of millions of years, gravitational effects from the other four inner planets would send this orbit into ever-increasing eccentricity, eventually sending it into the Asteroid Belt.

The theory was first proposed by NASA scientists John Chambers and Jack Lissauer. They conducted computer simulations of the early Solar System and concluded that, if their planet had a mass one-quarter that of Mars, its increasingly eccentric orbit's effect on the Asteroid Belt would produce results consistent with the LHB. The planet no longer exists today because its orbit would have become so unstable that it would have crashed into the Sun. To this day, the one-time existence of Planet V is still under debate.
Teaching your child how to resolve conflicts is a necessary part of developing their emotional intelligence and critical decision-making skills. Conflicts, whether simple or complicated, are a typical part of every life, and it is essential that we all are equipped with the knowledge of how to resolve them. Conflict resolution should be taught to children at a very young age. While it is your responsibility to teach this very important lesson to your child, you are likely to find that it is quite challenging. This is why I elected to create this guide for teaching your child how to resolve conflicts.

1. The first thing that you should teach your child when it comes to resolving conflicts is that we are all different. We do not always think the same thoughts, believe in the same things, or act in the same ways. It is important that your child understands that these are the things that make us individuals, and that it is ok to have different thoughts and beliefs, and even to act differently. We should teach them to appreciate the uniqueness in each person that we come in contact with. By teaching your child this, you are already taking the first step in conflict resolution before a problem even arises.

2. The next step in teaching your child how to resolve conflicts is to ensure that you express to them the importance of their safety, and the safety of others, when they become angry. It should be stressed that no one should ever hit, push, or engage in any other act of violence against another person. In the same respect, no one should ever take their anger out in such a way that they become physical in a negative fashion. If your child feels angry and needs some type of physical release, they should be encouraged to run or do some other type of exercise. It is not appropriate to hurt themselves or another person.

3. The next way to help your child when it comes to conflict resolution is to ensure that you teach them how to effectively communicate what they are feeling. Many times, children will demean the person that they are angry with, or they will work to accuse someone else. Instead of doing this, encourage your child to say things like "I feel as if ________". By talking to a person in this fashion, the other person is more likely to ease up and not become defensive. The conversation will turn productive, and the conflict will be on a productive path to resolution.

4. It is important to teach your child that it is not appropriate to yell, or get loud in any way, with the person that they are angry with. Let them know that talking is always much more productive than yelling and losing control. Maintaining control is always more effective than losing it. Inform your child that it takes a stronger, more mature individual to maintain control in the midst of a conflict.

5. Next, it is important to inform your child that when they are wrong, they should admit it. Admitting wrong in a situation is the first step to self-improvement and growth. By teaching them to accept that they are wrong and made a mistake, you will be teaching them to take responsibility for their actions. If the child is wrong, they should express an apology. When teaching them to admit wrong, you should also teach them how to make a sincere apology.

Teaching your child how to resolve conflicts is very important. No matter what walk of life, or who a person is, conflicts are evident.
By equipping your child with the knowledge to handle those conflicts, you are equipping them with an important life skill that they will hold for a lifetime!
The Chesapeake Bay is the largest estuary in the United States. It runs north-south from the mouth of the Susquehanna River to the Atlantic Ocean. It is one of the most productive estuaries in the world, supporting over 3,600 species of animals and plants. The bay provides vitally important habitat for wildlife, abundant recreational opportunities for people, and an important fishery upon which both people and wildlife depend.

The Chesapeake Bay watershed includes parts of six states, is home to some 17 million people, and contains the cities of Washington, D.C., and Baltimore, Maryland. The watershed's many rivers provide people not only with drinking water but also with places for fishing, boating, and birding. Its wetlands are also sites for boating, bird-watching, and waterfowl hunting. The bay itself is popular for boating and recreational fishing. The Chesapeake's commercial fishery is worth billions of dollars and includes blue crabs, rockfish, menhaden, and eastern oysters. The bay also includes two of the biggest East Coast commercial ports, Baltimore and Hampton Roads.

The Chesapeake Bay is a very large and complex ecosystem with many kinds of wildlife habitats, including forests, wetlands, rivers, and the bay estuary itself. The bay supports over 3,600 species of plant and animal life, including more than 300 fish species and 2,700 plant types. The waters of the bay are a mix of saltwater and freshwater: saltwater comes into the bay from the Atlantic Ocean, while freshwater enters through rivers and streams, as well as through underground flows called groundwater. Much of the bay's wildlife, including the blue crab and waterfowl, depends on underwater bay grasses that grow in the shallow waters.

Around 350 varieties of fish live in the bay, including some that prefer freshwater, such as the pumpkinseed; some that are migratory, such as the summer flounder; and some that move between freshwater and saltwater, such as the American shad. Four kinds of sea turtles come to the lower reaches of the bay: the loggerhead, Kemp's ridley, leatherback, and green sea turtle. Hundreds of invertebrates, like the blue crab and the oyster, and other less edible but important species such as the horseshoe crab, also call the bay home. Oysters, once extremely abundant in the bay, have greatly declined. Oysters filter and clean water, and their loss has affected the water quality of the bay and the health of other species.

Different birds inhabit the bay at different times of the year, from raptors such as bald eagles and ospreys, to waterfowl like swans and ducks, and migratory birds like sanderlings and ruby-throated hummingbirds. The region's beaches support some of the largest populations of shorebirds in the Western Hemisphere, such as the red knot and piping plover. For waterfowl, the Chesapeake is a major stopover site and wintering ground along the Atlantic Flyway: every year, one million waterfowl winter in the Chesapeake Bay region.

Land Use and Pollution

The millions of people living in the Chesapeake Bay watershed have made their imprint upon its lands and waters. About 55 percent of the watershed is forest, while much of the rest has been converted to agricultural (30 percent) and suburban and urban (9 percent) uses. These land use changes have impacts on the bay. One of the bay's biggest problems is too many nutrients in the water.
Excess nutrients come from many sources, including treated wastewater, runoff from agricultural areas, runoff from suburban areas (such as lawn and garden fertilizers and septic systems), and even air pollution. Although it may not sound like a bad thing, too many nutrients cause serious problems in the bay. Phosphorus and nitrogen are normally limiting nutrients for plant growth; with the extra nutrients delivered by runoff, algae have nothing to keep them in check, so they grow into giant blooms. Algae blooms block the sunlight that underwater bay grasses need to survive, and many bay species depend on those grasses for food and protection. When the blooms die and decompose, they also deplete the oxygen in the water that species like crabs and oysters need to survive.

Forests and wetlands can serve as a sink for excess nutrients, absorbing them before they reach the bay. But in urban and suburban areas around the Chesapeake Bay, many of the forests and wetlands have been removed; a hundred acres of forest habitat in the bay watershed are lost each day, primarily to development. The watershed is also covered with too much pavement and other hard surfaces that water cannot soak through, such as roads, rooftops, sidewalks, and parking lots (together called "impervious surfaces"). These hard surfaces make up 21 percent of all urban lands in the bay watershed. Not only do they contribute to the excess nutrients (by making it easier for nutrients to be picked up by rain), they also have their own set of problems: water that falls on these surfaces cannot be slowly absorbed into the ground to replenish area groundwater, but instead flows quickly into streams and rivers, causing erosion, or directly into storm sewers, causing flooding.

Climate change threatens not only to exacerbate many of the environmental threats already facing the Chesapeake Bay; it is also causing a rise in sea level that is eating away at the region's diverse estuaries and wildlife habitat. The Chesapeake Bay is our nation's largest estuary and sustains more than 3,600 species of plants, fish, and animals. If climate change continues unabated, projected rising sea levels will significantly reshape the region's coastal landscape, threatening waterfowl hunting and recreational saltwater fishing in Virginia and Maryland.

With its expansive coastline, low-lying topography, and growing coastal population, the Chesapeake Bay region is among the places in the nation most vulnerable to sea level rise. Average sea levels in the Chesapeake Bay have been rising. Many places along the bay saw a one-foot increase in relative sea level over the 20th century: six inches due to climate change and another six inches due to naturally subsiding coastal lands, a factor that places the Chesapeake Bay region at particular risk. Already, at least 13 islands in the bay have disappeared entirely, and many more are at risk of being lost soon. Sea level rise in the Chesapeake Bay region could reach 17 to 28 inches above 1990 levels by 2095.

Delaware Bay: As sea level rises, Delaware Bay's marshes, which provide valuable habitat for waterfowl and nursery grounds for fish, will be inundated with greater frequency. In Delaware Bay, a 27.2-inch rise in sea level would mean a 92 percent decline in brackish marsh, a three-fold increase in saltmarsh, and a 13 percent increase in open water. In some areas the marshes will be able to migrate inland, allowing the habitat to remain viable; however, that migration will contribute to a loss of nearly 41,000 acres of undeveloped dry land.
In total, 41 percent of marshes are predicted to be lost across the region by 2100. In addition to supporting many waterfowl species, these coastal marshes are important nursery and spawning grounds for multiple fish species, including Atlantic menhaden, bluefish, flounder, spot, mullet, croaker, and rockfish.

Maryland Shore: The considerable low-lying marshes and dry lands of the eastern Maryland Shore region are at risk from sea-level rise over the next century. Along the Maryland Shore, a 27.2-inch rise in sea level would mean a 47 percent decline in brackish marsh, a 38 percent decline in tidal swamp, and the loss of all but 32 acres of tidal flat. The brackish marsh habitat of Kent Island and the Eastern Neck Island National Wildlife Refuge is especially at risk of inundation. As soils saturate, freshwater swamps will expand by about 20 percent across the region.

Blackwater National Wildlife Refuge: This refuge is a crown jewel among the Chesapeake Bay's treasured places. Unfortunately, it could be largely underwater by 2100. Dramatic habitat losses are predicted for the refuge and surrounding areas, where global sea-level rise is compounded by high rates of land subsidence due to groundwater withdrawal for agriculture, and by relatively low rates of natural accretion in marshes. The site is predicted to lose over 90 percent of its tidal fresh marsh, tidal swamp and brackish marsh, which would be converted to saltmarsh and, ultimately, open water. The loss of brackish marsh could be particularly harmful to species that have adapted to these habitats, including rockfish and white perch, as well as anadromous species such as herring and shad, which use brackish marsh habitat as they transition between their freshwater and saltwater life cycles. Similarly, the loss of tidal fresh marshes could affect minnows, carp, sunfish, crappie and bass, which depend on these habitats for shelter, food, and spawning.

Virginia's Eastern Shore: Virginia's Eastern Shore faces a dramatic loss of estuarine and ocean beaches. By 2025, estuarine beach is projected to decline by 52 percent and ocean beach by 26 percent. By 2100, more than 80 percent of these beaches could disappear and be converted to open water. Like other regions around the bay, the peninsula is projected to lose more than half of its brackish marsh by 2050, with a nearly complete loss by 2100. The extremely rare sea-level fens are also at risk. Located upland of wide, ocean-side tidal marshes on the upper east side of the peninsula, these habitats are composed entirely of open, freshwater wetlands whose primary water source is groundwater. Only certain types of plants and animals can thrive in the fens, including the ten-angled pipewort, carnivorous sundew, bladderwort, elfin skimmer dragonfly, and eastern mud turtle.

Upper Tidewater Region: The extensive tidal swamp, brackish marsh, and tidal flat habitats of the Upper Tidewater Region could undergo major shifts due to climate change. If sea level rises 27.2 inches this century, the region would face a 30 percent decline in tidal swamp, an 85 percent decline in the area of brackish marsh, and a 76 percent decline in tidal flats. In addition, Plum Tree Island National Wildlife Refuge, home to many species of migratory waterfowl, would largely disappear. At the same time, that amount of sea level rise is projected to cause a 33 percent expansion of freshwater swamp area, which includes both forested and scrub-shrub habitat, with notable expansion into the undeveloped dry land along Mobjack Bay.
Overall, the area of undeveloped dry land across this site declines by 17 percent, or 45,611 acres.

Lower Tidewater Region: The Lower Tidewater Region, including the cities of Norfolk and Virginia Beach, has extensive urban development surrounded by agricultural and conservation lands. Nearly 20 percent of undeveloped dry land at this site is at risk of inundation, mostly as rivers widen and transitional marsh and saltmarsh expand. While these undeveloped lands provide opportunities for habitats to migrate inland, pressure to develop some of these lands will likely increase because the human population in this part of Virginia is projected to grow considerably in the coming decades. Proactive measures to identify and protect lands where habitats can migrate will be critically important. In addition, the region is projected to face a 79 percent loss of ocean beach by 2100, without extensive beach re-nourishment.

Tangier Sound: The sound is home to some of the bay's larger islands, including Smith, Deal, and Tangier, the majority of which could be gone by 2100. Towns on the mainland, like Crisfield, will also see surrounding wetlands disappear and undeveloped dry land inundated by rising seas. Thousands of acres of brackish marsh in this region will be converted to salt marsh and open water, possibly ravaging lucrative commercial and recreational fisheries that depend on healthy marshes. The critical seagrass beds in this area are also at significant risk from sea-level rise and increased deposition of sediments from the Blackwater area to the north.

The Mid-Atlantic region is defined by one unifying feature: water. In 2009, we helped launch the Choose Clean Water Coalition to advocate for restoring the thousands of streams and rivers flowing to the Chesapeake Bay. The coalition brings together more than 200 organizations from Pennsylvania, New York, Maryland, Delaware, Virginia, West Virginia, and the District of Columbia to advocate for clean water.

Further resources:
- Chesapeake Bay Program
- Chesapeake Mid-Atlantic Regional Center, National Wildlife Federation
- National Park Service Chesapeake Bay Office
- National Oceanic and Atmospheric Administration Chesapeake Bay Office
Drawing is at the heart of an artist's way of expressing themselves. As a communication tool, drawing is a creative way to convey the feelings and thoughts of an artist or designer. A drawing can be a sketch, a plan, a design, or a graphic representation made with the help of pens, pencils, or crayons; the final result depends upon its nature and purpose. Below you may find different kinds of drawings, with original illustrations by Yaren Eren.

The first kind comprises drawings created to represent the layout of a particular document. They include all the basic details of the project concerned, clearly stating its purpose, style, size, color, character, and effect.

Drawings that result from direct or real observation are life drawings. Life drawing, also known as still-life drawing or figure drawing, portrays all the expressions that are viewed by the artist and captured in the picture. The human figure is one of the most enduring themes in life drawing, applied in portraiture, sculpture, medical illustration, cartooning and comic book illustration, and other fields.

Similar to painting, emotive drawing emphasizes the exploration and expression of different emotions, feelings, and moods, generally depicted in the form of a personality.

Sketches created for the clear understanding and representation of an artist's observations are called analytic drawings. In simple words, analytic drawing breaks observations down into small parts for a better perspective.

Perspective drawing is used by artists to create three-dimensional images on a two-dimensional picture plane, such as paper. It represents space, distance, volume, light, surface planes, and scale, all viewed from a particular eye level.

When concepts and ideas are explored and investigated, they are documented on paper through diagrammatic drawing. Diagrams are created to depict adjacencies and events that are likely to take place in the immediate future; diagrammatic drawings thus serve as an active design process for capturing ideas as they are conceived.

Geometric drawing is used particularly in construction fields that demand specific dimensions. Measured scales, true sides, sections, and various other descriptive views are represented through geometric drawing.
A permanent magnet (ferromagnet) is a material that produces its own persistent magnetic field. Permanent magnets are made from ferromagnetic materials, such as iron, and are created when the material is magnetized inside an external magnetic field. When that field is removed, the object remains magnetized. Permanent magnets retain their magnetic field continuously and cannot be switched on and off the way electromagnets can. They are the oldest type of magnet and are still used for a wide variety of applications today.

How Permanent Magnets Work

Permanent magnetization of a material involves its electrons and how they behave around their nuclei. In most materials, the magnetic moments of the electrons point in random directions and largely cancel out. In a ferromagnetic material, however, the moments of many electrons align and point in the same direction, producing a small magnetic field. As more electrons align, the magnetic field becomes stronger. In a permanent magnet, the electrons tend to stay aligned unless the material is heated above its Curie temperature, which destroys the alignment.

Permanent magnets represent the majority of magnetic materials in everyday use and are applied in a variety of ways. For example, permanent magnets are often attached to gears or shafts and used in conjunction with an electromagnet inside a motor; this arrangement is the basis of many electromechanical devices. Permanent magnets are also used on refrigerators to hold paper and other light objects to the metal surface, and to keep frequently used cabinets and doors closed.

Permanent magnets have several advantages that electromagnets do not. For example, permanent magnets do not require any power source and usually produce a strong magnetic field for their size. Electromagnets, by contrast, must be continuously connected to a power source, which may be quite large depending on the magnetic field needed.

Although permanent magnets are advantageous, they do have several disadvantages. For example, permanent magnets constantly produce a magnetic field and cannot be turned off like electromagnets, and it is not easy to control the intensity of a permanent magnet's field. Furthermore, a permanent magnet's field weakens rapidly with distance, a property that makes permanent magnets difficult to use over long distances.
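To see why that last disadvantage matters, here is a minimal sketch of the standard on-axis magnetic dipole formula, a textbook approximation rather than anything from this article; the magnetic moment used is an assumed, fridge-magnet-scale value.

```python
# A minimal sketch of why a permanent magnet's field fades quickly with
# distance: on the axis of a magnetic dipole, the field falls off with the
# cube of the distance, B = (mu0 / 4*pi) * 2m / r^3.
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def dipole_field_on_axis(moment_a_m2: float, r_m: float) -> float:
    """On-axis field (tesla) of a magnetic dipole at distance r (meters)."""
    return MU0 / (4 * math.pi) * 2 * moment_a_m2 / r_m**3

# A small neodymium fridge magnet has a moment on the order of 0.1 A*m^2
# (an assumed, illustrative value).
for r in (0.01, 0.02, 0.05, 0.10):
    print(f"r = {r*100:3.0f} cm -> B = {dipole_field_on_axis(0.1, r):.6f} T")
# Doubling the distance cuts the field strength by a factor of eight.
```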
Steel is generally referred to as "carbon" steel because it is a combination of iron atoms interspersed with carbon atoms. The overall structure of steel is a crystalline lattice comprising both elements, which gives steel its combination of strength and ductility. Adding other alloying elements, such as chromium and aluminum, gives steel further properties, such as protection against rust, lighter weight, and greater durability.

Plain, or "carbon," steel is an alloy of iron and carbon. To produce steel, iron must first be smelted from ore in a furnace, and the impurities present in the iron ore must be extracted. The resulting iron generally still contains too much carbon for workable steel, so the metal must be refined further to reduce the carbon content to between 0.2 and 1.5 percent. Depending on how the steel will be used, the metal is then subjected to additional tempering.

Carbon steel owes its strength to its crystalline structure. Groups of iron and carbon atoms are arranged in a lattice, with the carbon atoms preventing the iron atoms from slipping over each other, in effect making the steel rigid. The addition of an alloying element such as titanium or manganese strengthens this structure by introducing atoms of different sizes into the lattice, further impeding molecular movement when the metal is subjected to stress.

Steel alloys are made by combining elements during the smelting process, while the iron is still molten. Other metals such as chromium, aluminum or titanium are added at this stage. Alloys have properties that make them more durable than simple carbon steel, owing to the way iron, carbon and the other elements interact structurally. Other metals are added to give carbon steel specific enhancements, such as extra strength, high-temperature tolerance, or greater malleability.

Plain carbon steel has a wide variety of applications, but it must be tempered under specific heat conditions to give it a combination of ductility and durability. Alloying steel has advantages, such as protection against corrosion when steel is mixed with chromium. Other elements, such as titanium, nickel and boron, further harden steel. Weldability can be increased by adding sulfur or lead, whereas plain carbon steel is more prone to cracking when welded.

"Galvanized" steel is produced by immersion in a tank of molten zinc. Zinc atoms diffuse into the top layers of the steel, forming a protective layer against corrosion. Galvanizing can be performed on various steel alloys as additional protection against rust, and it is a cheaper method of rust-proofing steel than alloying it with chromium.
Good Roots: Understanding the Differences Between Heirloom and Commercial Wheat and Barley Roots

The architecture of a plant's root system has a strong impact on crop performance, and understanding root structure is important for both plant breeders and farmers. Root architecture affects drought tolerance, nutrient and water uptake, and tolerance to mineral toxicity. For example, wheat and barley crops with more roots in the upper soil layer will absorb most of their nutrients from that area; however, a plant with this architecture will be less drought tolerant, because water stores are located deeper within the ground. Roots that are longer and thinner, meanwhile, reach a greater volume of soil and have a greater chance of finding the resources the plant needs.

In this study, the researchers assessed the roots of heirloom and commercial cultivars of wheat and barley to better understand small grain root architecture and to determine their potential for low-input sustainable agriculture. The study took place in the Plant Science Laboratory at the University of British Columbia in 2012. Five heirloom and four commercial wheat and barley cultivars were tested; the complete list of cultivars can be found via the research article link below. The seeds were sown on germination paper inside laboratory growth chambers. After 10 days, the roots were removed from the chambers and assessed for total root length, surface area, diameter, volume, number of tips, and branching angle.

Roots of heirloom wheat cultivars were longer and thinner, with greater surface area and a higher number of tips, compared with the roots of commercial cultivars. The cultivar 'Glenn' had the greatest amount of thin roots, while the commercial cultivars 'Scarlet' and 'Norwell' had the coarsest roots. Individual barley cultivars differed in root length, area, volume, and angle of branching; however, there were no differences between commercial and heirloom barley cultivars as groups. The heirloom cultivar 'Jet' had the longest and finest roots, with the greatest surface area and the highest branching angle, while the commercial cultivar 'Camus' had the coarsest roots.

About this research

This brief is based on a peer-reviewed journal article. In summary: heirloom wheat cultivars had deeper, longer and thinner roots, more surface area, a higher number of tips, and a greater branching angle compared to commercial cultivars; these traits are often linked with resistance to drought stress and improved phosphorus uptake. Commercial wheat cultivars had coarser roots. Barley cultivars showed no group-level difference between commercial and heirloom, though individual cultivars did vary. The root architectures of the heirloom wheat and barley cultivars indicate they may be better suited for low-phosphorus and/or drought conditions, typical of low-input or organic production, while the root architectures of the commercial cultivars are more suitable for high-input conditions. When heirloom wheat cultivars were grown under organic or low-input conditions, they tended to grow long roots with more surface area and yield potential. The longer and finer roots of the heirloom cultivars suggest breeding potential to increase a plant's nutrient uptake efficiency and drought tolerance.
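As an illustration of how group comparisons like these are typically tested, here is a minimal sketch using a two-sample t-test. The numbers below are invented for illustration only; they are not the study's data, and the study's actual statistical methods are described in the original article.

```python
# A minimal sketch of a root-trait comparison between cultivar groups.
# All values are hypothetical and for illustration only.
from scipy import stats

# Hypothetical total root lengths (cm) after 10 days of growth
heirloom_root_length = [112.0, 125.3, 118.7, 130.1, 121.4]
commercial_root_length = [98.2, 104.5, 101.9, 95.8]

t_stat, p_value = stats.ttest_ind(heirloom_root_length, commercial_root_length)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value would indicate a statistically significant difference
# between the heirloom and commercial group means for this trait.
```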
Jasper is an opaque rock of virtually any color, the color stemming from the mineral content of the original sediments or ash. Patterns arise during the consolidation process, forming flow and depositional patterns in the original silica-rich sediment or volcanic ash. Hydrothermal circulation is generally thought to be required in the formation of jasper.

Jasper can be modified by the diffusion of minerals along discontinuities, giving the appearance of vegetative growth, i.e., dendritic patterns. The original materials are often fractured and/or distorted after deposition into myriad beautiful patterns, which are later filled with other colorful minerals. Weathering, over time, creates intensely colored superficial rinds.

The classification and naming of jasper varieties presents a challenge. Terms attributed to various well-defined materials include the geographic locality where the material is found, sometimes quite restricted, such as "Bruneau" (a canyon) and "Lahontan" (a lake), or rivers and even individual mountains; many names are fanciful, such as "forest fire" or "rainbow", while others are descriptive, such as "autumn" or "porcelain". A few are designated by the place of origin, such as a brown Egyptian or red African jasper.

Picture jaspers exhibit combinations of patterns, such as banding from flow or deposition (by water or wind), dendritic growth, or color variations, resulting in what appear to be scenes or images on a cut section. Diffusion from a center produces a distinctive orbicular appearance, as in leopard skin jasper, while linear banding from a fracture is seen in Liesegang jasper. Healed, fragmented rock produces brecciated (broken) jasper. While these "picture jaspers" can be found all over the world, specific colors and patterns are unique to the geographic region from which they originate. Oregon's Biggs jasper, and Bruneau jasper from Bruneau Canyon near the Bruneau River in Idaho, are known as particularly fine examples. Other examples can be seen at Llanddwyn Island in Wales.

The term basanite has occasionally been used to refer to a variety of jasper: a black flinty or cherty jasper found in several New England states of the USA. Such varieties of jasper are also informally known as lydian stone or lydite and have been used as touchstones in testing the purity of precious metal alloys.
Accurate world maps were not completed until the middle of the 16th century. Back then, people were mapping coastlines from the decks of ships, and physical features such as mountains and rivers by traversing them on foot. Soon after the world was well mapped, people began to recognize that certain coastlines seemed to 'fit' together (most notably the east coast of South America and the west coast of Africa). As data on rock types and ages around the world were compiled, the relationships between the continents became clearer.

In 1912, a German scientist named Alfred Wegener proposed the theory of continental drift. Wegener provided evidence that all the earth's present-day landmasses were once united in a single supercontinent, which he dubbed 'Pangaea', about 225 million years ago. Wegener's ideas were built upon and modified as later scientific discoveries added to his theory. More was learned about the composition of the continental and oceanic crust, and as the bottoms of the oceans were thoroughly mapped, scientists began to recognize features in the ocean floor that pointed to a more complex theory of the movement of the continents. Ocean-bottom imaging revealed deep trenches around the rims of some continents, and linear ridges running down the middle of some ocean floors. These observations led to the theory of plate tectonics.

Figure 7.4.1: Plate Tectonics

The theory of plate tectonics provides us with a comprehensive model of the earth's inner workings. According to the model, the earth's rigid outer shell, the lithosphere, is broken into several individual pieces called plates. From Wegener's model we understood that these rigid plates are slowly and continually moving; the plate tectonics theory provides a mechanism for this motion: the circulation of molten rock in the earth's mantle. The circulation of material in the mantle is not unlike the circulation in the atmosphere: hot material from deep in the mantle rises because it is more buoyant, and this drives powerful thermal cycles that push the plates laterally. These thermal circulations in the earth's mantle provided the critical mechanism that brought the theory of plate tectonics together.

Ultimately, this movement of the earth's lithospheric plates generates earthquakes, volcanic activity and the deformation of large masses of rock into mountains. Because each plate moves as a distinct unit, interactions between plates occur at their boundaries. You may want to search for images that illustrate the fourteen plates that currently cover the earth's surface.

There are three distinct types of plate boundaries:
- Divergent boundaries are zones where plates move apart, leaving a gap between them. One example of a divergent boundary between ocean plates is the mid-ocean ridge in the middle of the Atlantic. An example of a divergent boundary involving continental plates is the East African Rift zone, part of a rift system that has opened the Red Sea.
- Convergent boundaries are zones where plates move together. When an ocean plate meets a continental plate, the denser ocean plate is pushed beneath the continental plate. When two continental plates collide, they smash into each other, forming a large mountain belt (like the Himalayas).
- Transform boundaries form where plates slide past each other, scraping and deforming as they pass.

Each plate is bounded by a combination of these zones. Look for more graphic examples of plate motion.
- You should check out more graphics of Pangaea at the Paleomap Project.

K. Allison Lenkeit-Meezan (Foothill College)
Unit overview | In this unit, students learn the purpose and use of definition arguments. Students examine a current issue or problem from different viewpoints, learning how writers make and develop definitional arguments, particularly in relation to the types of claims, the evidence, and the warrants. Students also consider the figurative and visual language used to define an issue. To understand the elements of definition and the ways in which those elements work, students compare and contrast two arguments about the same issue, identifying the definitions, figurative language, and visuals the writers use to define and frame an issue for an audience.
- Chapter 9: Arguments of Definition
- Chapter 13: Style in Arguments
- Chapter 14: Visual Arguments
- Handbook: Grammatical Sentences
Major Assignment | Compare and contrast how two different writers define a current issue (4 pages). Students may extend work done in the first unit, or the instructor may assign a text.
- The assignment sheet [PDF] [DOC]
- The rubric [PDF] [DOC]
A 71-year-old objects after NPR refers to her as "elderly". When did "elderly" become an objectionable term, and how should news organizations get around it? Watch the video for a great discussion about the definition of a term.
Humidity is the amount of water vapor, or water molecules, currently present in the air. So, what causes humidity in the air to begin with? It can all be traced back to the water cycle and wind patterns on the planet. When rain reaches the ground, the water eventually evaporates back into the atmosphere as it warms.

When you hear meteorologists talking about humidity, they are referring to "relative humidity". What does that mean? This term compares the current amount of moisture in the air to the total amount of moisture the air can hold. Warm air can hold more water than cold air, which is why you experience those unbearably hot and humid days during the summer months. So it's safe to say that you can usually expect higher levels of humidity in warm climates, especially near a large body of water or in places with a lot of rainfall. Humidity is often pushed in by wind currents originating from humid places around the world; this is part of what drives changes in weather.

Humidity outside is one thing to deal with; humidity indoors is another. The level of humidity in your home can affect the concentration of indoor air pollutants, and with it the overall air quality. Excess moisture in your house creates the perfect conditions for mold and mildew to form. Mold in the home can cause a variety of health disturbances, and it can affect the structural integrity of your home if it goes unchecked. Headaches, dizziness, respiratory issues, and asthma symptoms are some of the health dangers that household mold can bring. Mold caused by high humidity can take a toll on your health, emotional state, and bank account if you have to resort to remediation. This is why it is important to control the level of humidity in your home, especially in unfavorable weather conditions.

September is Mold Awareness Month! This month of awareness aims to educate people about the different effects that mold can have on your health and home. Exposure to mold and the mycotoxins it produces can seriously harm your family's health. The more knowledge you have, the less chance you have of suffering from mold in your home.

It's true that some types of mold are a lot more dangerous than others; this is why you don't need a hazmat suit to take care of moldy leftovers in your fridge! Some of the most common types of mold found in a household include Aspergillus, Stachybotrys atra (black mold), and Cladosporium. Aspergillus is most often found on food and within household systems like air ducts and HVAC equipment. Cladosporium is usually green or dark brown with a pepper-like appearance and is commonly found in bathrooms. Black mold is the most harmful type of mold to have in your home; it often appears after a household has experienced heavy flooding or water damage that was not taken care of in a timely manner.

Don't be alarmed! Be aware of the conditions that mold likes to grow in, and keep an eye out for any changes in the appearance and smell of your home, as well as the health of your family. If you find mold in your home, the best thing you can do is hire a professional mold remediation company as soon as possible. Attempting to clean the mold yourself could make matters worse by spreading the spores and exposing yourself to harmful mycotoxins. A professional company can take care of any mold issues you run into in the best way possible.
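Circling back to the relative-humidity definition at the top of this section, here is a minimal Python sketch of the comparison being described. It uses the Magnus approximation for saturation vapor pressure, a standard meteorological formula that is an assumption here rather than something from this article, and the moisture value is purely illustrative.

```python
# A minimal sketch of relative humidity: actual moisture in the air divided
# by the maximum moisture the air could hold at its current temperature.
import math

def saturation_vapor_pressure_hpa(temp_c: float) -> float:
    """Approximate saturation vapor pressure (hPa) over water (Magnus formula)."""
    return 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))

def relative_humidity(vapor_pressure_hpa: float, temp_c: float) -> float:
    """Relative humidity (%) for a given actual vapor pressure and temperature."""
    return 100.0 * vapor_pressure_hpa / saturation_vapor_pressure_hpa(temp_c)

# The same amount of moisture gives very different relative humidities:
e = 10.0  # actual vapor pressure in hPa (an illustrative value)
for t in (10.0, 20.0, 30.0):
    print(f"{t:4.1f} C -> RH = {relative_humidity(e, t):5.1f} %")
# Warm air can hold more water, so the same moisture yields a lower RH at 30 C.
```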
The longer mold isn't taken care of, the more potential it has to damage your house and health. The presence of mold in your home can have lasting negative effects, not only on the structural quality of your home but also on your health.

A lot of homeowners want to fix problems themselves to save time, money, and effort. If you're a motivated self-starter, you might have the instinct to clean the mold yourself. Before you pick up the cleaning products, do a bit of research: mold remediation should be left to professionals, or you can create more issues.

There are countless situations that can lead to the presence of mold in your home. You could have had a household leak or weather damage that let dampness accumulate. No matter the cause, it's important to remove the mold as soon as possible, and also to address what brought on the mold issue in the first place. Given the serious health risks that mold can trigger, this should not be a DIY job. Without the proper equipment, training, and experience, you could end up causing more problems in the long run.

If you don't know what you're doing when it comes to mold removal, you can exacerbate the issue at hand. Mold spores are microscopic, and you can easily spread the infestation to places where it wasn't present before. You might think that you're saving yourself money by tackling this project yourself, but that's rarely the case; if you aren't careful, you can double the costs by spreading spores to different areas, making the job more intensive when you eventually have to call a professional.

Mold remediation experts have up-to-date equipment and cleaning agents, and they know how to handle every stage of a mold situation. Sure, vinegar might take care of a mildew smell lurking in your laundry room, but it would never kill a large colony. Powerful chemicals are extremely dangerous in inexperienced hands. Professionals know how to shield themselves with respirators, goggles, and other protective clothing; a simple dust mask and long-sleeved clothing aren't enough to protect your health.

There are different types of mold, and some are more dangerous than others. The reality is that you don't have years of experience diagnosing and testing for mold issues, which means you can't distinguish between different types of mold by looking up pictures and information online. Without a mold removal expert, you won't know what you're dealing with. Not only can you misdiagnose the mold, you can miss hidden mold completely. And if you disturb black mold and release its spores, you could experience symptoms like vomiting, nausea, fatigue, and, in extreme cases, even bleeding.

There are countless reasons to hire a mold removal expert. While there are many DIY resources available nowadays, mold removal shouldn't be one of your DIY jobs. Leave the job to the experts who know how to contain and remediate the mold issues in your home. It's the smartest thing for your home, budget, and health.
Who doesn't love the Montessori hands-on, self-correcting style of learning? Most kids seem to learn best when they get to move pieces around with their hands to understand how things work, rather than have someone verbally explain a concept to them. I created this addition strip board and the associated addition table worksheets and addition chart using the book Teaching Montessori in the Home: The School Years by Elizabeth Hainstock. In the future, I plan to make subtraction, multiplication, and division printables as well. If you'd like to be notified when they're available, please consider signing up for my newsletter.

I recently decided to try homeschooling my oldest child for kindergarten rather than continuing to send him to the Montessori school he's been attending for the past couple of years. However, I still want him to have exposure to all the wonderful Montessori tools. Granted, we just started homeschooling a week ago, so he still has a lot of enthusiasm in general, but he loves using this board and the associated strips to add single-digit numbers together. We haven't attempted the full addition chart yet, but he's done many of the tables, and I'm confident that he understands the concept of addition. He's even getting better at adding numbers together in his head.

Full instructions for creating and using the board and associated worksheets are below.

Materials:
- Printed Addition Strip Board document
- Optional: Printed Addition Tables and Addition Tables – Mixed Up
- Optional: Printed Addition Chart document
- 6 laminating pouches
- 1 sheet of poster board (I used a 14″x22″ sheet, though it will be trimmed to roughly 13″x18″)

Supplies & Tools:
- glue stick

Assembly:
- Print out the Addition Strip Board document.
- Laminate all 6 sheets.
- Cut out the 18 addition strips (9 red and 9 blue) and set them aside in an envelope or Ziploc bag.
- Cut out the 4 sections of the addition strip board.

To use the board, start with Addition Table: Ones. Have your child find the blue "1" square and place it in the upper left corner. Next have them find the red "1" square and place it next to the blue "1" square. Looking at the top row of numbers, they will see that their one blue square and one red square have brought them to the number "2". They can then write the number 2 on the 1+1 line. Next they leave their red square where it is but move the blue "1" square down to the next row. Since the next problem on their worksheet is 1+2, they find the red "2" square and place it next to the blue "1" square to see that 1+2=3. They can keep moving down the board until their worksheet is complete.

After doing a few of these tables, they might realize that there is a pattern and that they can "cheat" by just adding 1 to their previous answer. For example, the answers on the Addition Table: Ones worksheet are 2, 3, 4, 5, 6, 7, 8, 9, and 10. Since my son realized this right away, I also made an optional Addition Tables: Mixed-Up version that requires them to actually prove that they understand how to add. This is not an actual Montessori tool, so your child won't be missing the full experience if they skip it. My son also does not like using all the rows on his addition strip board, so I let him use whatever row he wants when completing his worksheets; he frequently just uses the top row for every problem.

After a child is good at doing their addition tables, they might want to attempt the full Addition Chart.
To use the addition chart, the child writes in each box the sum of the corresponding row and column. For example, when filling out row 4, the first box is 4+1 since it is in row 4, column 1; the second box is 4+2 since it is in row 4, column 2; and so on. A completed chart is included so that the child can check their answers. A child may or may not need to use the addition strip board to fill out the chart, depending on their abilities.
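Since the chart is nothing more than row-plus-column sums, the completed version can also be generated mechanically. Here is a minimal Python sketch that prints one; the 9x9 size matches the 1 through 9 strips described above.

```python
# A minimal sketch of the completed addition chart described above:
# each cell holds the sum of its row number and column number.
SIZE = 9  # a 9x9 chart for the numbers 1 through 9

header = "   +" + "".join(f"{col:4d}" for col in range(1, SIZE + 1))
print(header)
for row in range(1, SIZE + 1):
    cells = "".join(f"{row + col:4d}" for col in range(1, SIZE + 1))
    print(f"{row:4d}{cells}")
# Row 4, column 2 prints 6, matching the 4+2 example in the text.
```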
An international team of astronomers has announced the discovery of over a hundred new exoplanet candidates, found using two decades' worth of data from the Keck Observatory in Hawaii. The results were recently published in a paper in the Astronomical Journal, and among the discoveries is a planet orbiting the fourth-closest star to our own, only 8 light-years away.

Finding exoplanets isn't easy. Planets beyond our solar system are tiny and dark compared to their host stars, so advanced techniques have to be used to pinpoint them. The Kepler space telescope, for instance, finds exoplanets by looking for stars that regularly dim slightly. This dimming is caused by an exoplanet blocking some of the star's light when it passes in front, and the change in brightness can tell us a lot about the size of the planet and how fast it orbits.

There are other ways to spot an exoplanet, however. The Keck Observatory uses a different approach, called the radial velocity method, which looks at how the star itself moves. When a planet orbits a star, the planet's gravity causes the star to wobble slightly. Our own planet, for instance, causes the sun to move at a few inches per second, while Jupiter causes the sun to move at about 40 feet per second. This wobbling is detectable by very sensitive instruments, like the HIRES spectrometer at the Keck Observatory.

Before Kepler, the radial velocity method was the best way to find new exoplanets. Scientists using this method have found hundreds of worlds over the past 25 years, and the first known exoplanet orbiting a Sun-like star was found this way. However, studying a star's radial velocity typically requires a long observation time in order to separate the planet's signal from other sources of noise. The Keck data span two decades, which is more than enough time to separate out the signal, and the data cover so many star systems that they could potentially contain evidence for thousands of new exoplanets.

In fact, the dataset is so massive that one group of people could never get through all of it. To solve this problem, the team is releasing the data to the public, in the hope that others will use it to find even more exoplanets. If you're interested in discovering your very own alien planet, you can find the data and instructions on the team's website.

Source: Carnegie Science
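For readers curious where those wobble figures come from, here is a minimal sketch based on momentum balance for a circular orbit, a standard textbook approximation rather than anything from the paper: the star's reflex speed is the planet's orbital speed scaled by the planet-to-star mass ratio.

```python
# A minimal sketch of the wobble behind the radial velocity method:
# v_star = v_planet * (m_planet / m_star) for a circular orbit.
M_SUN = 1.989e30      # kg
M_EARTH = 5.972e24    # kg
M_JUPITER = 1.898e27  # kg

def reflex_speed(m_planet_kg: float, v_planet_m_s: float,
                 m_star_kg: float = M_SUN) -> float:
    """Star's wobble speed (m/s) induced by one planet on a circular orbit."""
    return v_planet_m_s * m_planet_kg / m_star_kg

print(f"Earth's pull on the Sun:   {reflex_speed(M_EARTH, 29_780):.3f} m/s")
print(f"Jupiter's pull on the Sun: {reflex_speed(M_JUPITER, 13_070):.2f} m/s")
# Roughly 0.09 m/s (a few inches per second) and 12.5 m/s (about 40 feet
# per second), matching the figures quoted above.
```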
Arabic numerals are the 'normal' numbers used for arithmetic. They are also known as decimal numbers, since they are base 10, unlike binary and other number bases. Arabic numbers are also called Hindu-Arabic numbers. This is a better name, as it was the Hindus in India who invented the system, from about the 4th century BC onwards.

This number system spread to the Middle East in about the 9th century AD, where it was used by Arab mathematicians and astronomers. (Muslim scientists used the Babylonian number system, and merchants used the Abjad numerals, a system based on letters, rather like Greek numbers.) Arabic numbers then spread to Europe. Before this, Europeans were using Roman numbers, with abacuses for calculation. Fibonacci wrote a book about Arabic numbers in the thirteenth century AD. At first, these numbers were very unpopular in Europe, since people were used to abacuses, where you could watch the calculation taking place. But they soon realised how much easier it was to do calculations with Arabic numbers. Now Arabic numbers are generally used throughout the world for calculation, although systems such as Roman, Greek and Chinese numerals are still used for formal purposes.
Osteomyelitis is the term used to describe an infection of the bone or bone marrow. It is usually caused by bacterial infection, but it can also occur as the result of injury or another underlying condition. When osteomyelitis occurs suddenly, for example as a result of infection or injury, it is described as acute; when the condition keeps recurring, it is referred to as chronic (long-term) osteomyelitis.

The symptoms of osteomyelitis are described below:
- Fever (temperature of 38˚C or above), chills and shivering
- Pain in the affected bone, which can be intense
- Tenderness and swelling in the affected area
- General feeling of malaise
- Restricted movement of the affected area
- Swollen lymph nodes near the affected area
- Irritable mood
- Loss of appetite

The condition usually affects the long bones in the legs, but other bones, such as those in the arms, can also be affected.

The majority of osteomyelitis cases are caused by bacterial infection, although the condition may also be caused by fungi. Fungal infection is rare among people with a healthy immune system and tends to occur only in those with suppressed immunity, such as HIV patients or people taking immunosuppressive agents such as corticosteroids.

Diagnosis and treatment

Diagnosis is usually based on a physical examination, blood tests, imaging studies and sometimes biopsy. The condition is usually treated with antibiotics over a course of four to six weeks. In severe cases, surgery may also be required; surgery generally involves removal of the damaged bone and drainage of accumulated pus.

Reviewed by Sally Robertson, BSc
Last week's killer floods in Tennessee were unusual by almost any measure. (Except, perhaps, for flood-inured Texans, who would call 15 inches of rain in two days "a damp weekend".) But were they so unusual that they would not have happened without global warming?

It would be amazing if my answer were "yes". To date, there has not been a single weather event in the United States that I have blamed on global warming. This is for good reason, I think: temperatures have only warmed a bit more than a degree Fahrenheit on average across the continental United States, and it's hard for that small a temperature change (small compared to the 10F difference between glacial and interglacial periods) to have a major impact on individual weather events. Climate change is still mainly the province of statistics. The effects only start becoming obvious if you average over years and decades, or consider the overall behavior of a large number of weather events.

Let's start with the statistics of this event. The Army Corps of Engineers has called this flood a 1000-year event. That's a preliminary number, and I don't know exactly what it's based on. But here's what it means: the chance of a flood of this magnitude occurring in any given year is 0.1%, which, if you work it out, means that it should occur about once every 1000 years. In this case, 1000 years is the expected return period, but this does not mean that such floods are spaced 1000 years apart.

A moment's thought reveals that the concept of a 1000-year return period is a little bit strange. How would we know whether an event's odds were really 0.1% per year? Ideally, you'd like to observe several instances of the event and see how frequently it occurs, and then use that information to estimate the odds. In this case, you'd need about 10,000 years of data. Obviously we don't have rain gauges or streamflow gauges going back that far, though there can be other ways of detecting major prehistoric floods.

In the case of rainfall, the standard technique is to estimate the likelihood from existing events in the past 100 years or so. For example, if a 5″ rainstorm occurred about every year, a 7″ rainstorm occurred maybe once per decade, and a 9″ rainstorm occurred once in 100 years, one could infer that an 11″ rainstorm would occur on average once in 1000 years. It's also standard to combine observations from rain gauges throughout the area, so that, for example, if only one rain gauge out of ten recorded an 11″ rainstorm in the past 100 years, you've got more evidence that the true likelihood at a given location is only about once every 1000 years. (This description is greatly simplified from how it's actually done mathematically, but the principles are the same.)

The figure above shows the estimated probabilities for a 2-day rainfall in Nashville. According to the figure, a two-day rainfall total of 8.4″ should occur about once every 100 years. The red and green lines show the 90% confidence interval, based on the statistical information available. Reading horizontally, the true return period for an 8.4″ two-day rainstorm has a 90% chance of being somewhere between 75 and 200 years. Reading vertically, the true 100-year rainfall amount has a 90% chance of being between 7.8″ and 8.9″.

Now let's look at how much rain actually fell in the Nashville area during the first two days of May. The previous record in Nashville was 6.68″, from the remnants of Hurricane Frederick in 1979.
That previous record, based on the analysis above, is a bit lower than what would have been expected for a station in existence there for many decades. The new record, established May 1-2, 2010, is 13.57″. Going back to our diagram above, we're literally off the chart. Extrapolating the curves a bit, the expected recurrence interval for that much rain is somewhere around 3000 years!

It gets better. It turns out that 13.52″ of that actually fell in just a 36-hour period. By noon on May 2, Nashville had already broken its previous all-time record for the month of May. And the Nashville Airport did not receive anything close to the highest amount observed during this event. Numerous areas received over 16″ of rainfall. Antioch, a suburb of Nashville, received 16.21″ of rainfall, and Franklin, in the next county to the south, received 17.87″.

Now, the rainfall frequency diagram for Franklin doesn't look a whole lot different from the one for Nashville. (You can retrieve all the diagrams for Tennessee you want from this NWS web site.) If I extrapolate the curves for Franklin return periods, I estimate that 17.87″ of rainfall should occur slightly less than once every 10,000 years. This estimate is similar to the 5,000- to 8,000-year flood estimate that is now being mentioned. That makes it something close to a "once per interglacial" event!

In reality, it's absurd to draw that conclusion, because the climate has changed a lot over the past 10,000 years. The estimate is based only on what's been observed in the climate over the past 100 years or so; the rainfall statistics 6,000 years ago might have been (and probably were) considerably different. Better to stick to conventional probability language: based on the past 100 years or so, the odds of such an event occurring in any given year were 10,000 to 1.

Keep in mind that we're talking about the two-day rainfall total for Franklin, TN, here. If there are 10,000 long-term precipitation stations in the United States (not a bad guess), you should expect one of them, somewhere, to receive a 10,000-to-1 rain event on average once every year. [Update: a better way to say that would be that you should expect a 1-in-10,000 rain event to be recorded somewhere in the US about once per year.] [Update #2: see the comments for an analysis considering simultaneous events at neighboring stations, which concludes that "about once every two to three years" is a better estimate.] And you might expect a similar frequency of occurrence of unusual 1-day rainfall events, or 6-hour rainfall events, etc. So this sort of extremely unusual rainfall event is actually quite common, just not at any particular location.

I suspect the odds in Tennessee were not as long as the statistical analysis says. Look at the map of the 100-year 2-day rainfall event below. There's a local minimum around Nashville: the Nashville area has (or had) received relatively few massive 2-day rainfall totals compared to its surroundings, so its 100-year 2-day rainfall event had a lower value. Similarly, the 1000-year rainfall event value was smaller near Nashville than elsewhere. There might be a geographical explanation for this: Nashville is surrounded on almost all sides by higher terrain. But other similarly situated locations in the Ohio River Valley aren't rainfall minima to the same extent.
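Here is a quick numerical check of that back-of-the-envelope claim, assuming (as the text does) 10,000 stations recording independently; real neighboring stations are correlated, which is why the comment analysis cited above arrives at a lower frequency.

```python
# A minimal sketch of the return-period arithmetic: for an event with annual
# probability p, the chance of seeing it at least once in n independent
# trials is 1 - (1 - p)**n.
p = 1e-4          # a "10,000-year" event: 0.01% chance per year at one station
n_stations = 10_000

expected_per_year = p * n_stations
p_at_least_one = 1 - (1 - p) ** n_stations
print(f"Expected 1-in-10,000 events per year, somewhere: {expected_per_year:.2f}")
print(f"Chance at least one station records one this year: {p_at_least_one:.1%}")
# About one expected event per year, and a roughly 63% chance in any given
# year. Neighboring stations see correlated rainfall, so the true frequency
# of independent events is lower, as the "once every two to three years"
# estimate in the comments suggests.
```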
So I believe that the statistical analysis of rainfall frequency in the Nashville area was wrong, not because of any flaw in the methodology, but because the Nashville area had, by pure chance, managed to avoid the really heavy 2-day rainfall events that had befallen its neighbors. If I'm right about that, the proper estimate for the return period in Nashville would have been about 1,500 years, and in Franklin about 5,000 years. Still long odds, but not so long as before.

Now to the key question: how much has global warming changed the odds? The statistical analysis was based on the past 100 or so years, but our climate today is not like our average climate of the past 100 years. In particular, the moisture content of air encountering a stationary front in Tennessee is on average higher than it was 50 years ago, because in a heavy rainfall event in the central or eastern United States the air draws its moisture from the tropical Atlantic, and tropical Atlantic sea surface temperatures are higher than they were 50 years ago. In fact, in March 2010, the latest month for which data are available, the tropical Atlantic sea surface temperature anomaly was at an all-time record high value (the records go back to 1948). But little of the record warmth can be attributed to global warming; most of it is due to natural variability. March 2009, for example, was cooler than normal in the tropical Atlantic. The long-term temperature increase in the tropical Atlantic compared to the middle of the past century is probably only about 0.3C to 0.4C.

Now let's convert that to a change in precipitation. The typical relative humidity of tropical air is about 70%. A good rule of thumb is that a 10C difference in temperature implies a factor of two difference in saturation water vapor mixing ratio, so a change of 0.3C implies a change in saturation water vapor mixing ratio of about 0.5 g/kg. Reduce that to a typical relative humidity, and the difference is about 0.3 g/kg, compared to a likely actual mixing ratio of 17 g/kg. Since 0.3 g/kg is about 2% of 17 g/kg, I estimate that global warming was responsible for about a 2% increase in precipitation during the flood event.

Assuming this is the only difference between the meteorological setup now and the analogous meteorological setup 50 years ago, the global warming contribution to precipitation was 2%, or 0.28″ of Nashville's 13.62″. That's not a very big amount. According to the statistics, global warming was responsible for turning a 1-in-1200-year event into a 1-in-1500-year event. Not a big change in the odds, and not a big change in the impact either. I conclude that this flood event, while influenced by global warming, was mostly a non-global-warming event.

This estimate, while crude, is consistent with the middle-of-the-road projection for Tennessee precipitation change over the next century: an increase of 0%-5%, with some models drier or wetter. But the observed precipitation increase in Tennessee over the past century, according to our calculations, is actually 10%. In my mind, it's not appropriate to attribute this precipitation change to global warming, because a causal connection to temperature change has not been established. It could be due to natural variability, changes in atmospheric aerosol composition, or some other factor. Whatever the cause, it falls under the category of "climate change" rather than "global warming". Let's redo the calculation.
Climate change has produced a net 10% increase in precipitation in the Nashville area since the beginning of the last century. For Nashville, the contribution to the current event would be 1.36″. Subtracting that from the 13.62″ observed gives a value of 12.26″ for what might have happened under similar circumstances at the beginning of the last century. If the current event had a return period of 1,500 years, the hypothetical event in 1900 would have had a return period of about 500 years. Still a major, damaging flood, but not so catastrophic and without so much loss of life. So my guess is that climate change turned a 500-year rainfall event for Nashville into a 1,500-year rainfall event, and an 1,800-year rainfall event into a 5,000-year rainfall event for Franklin. In other words, an event of this severity was three times as likely to happen after a century's worth of climate change as before. This climate change was partly natural and partly man-made, partly greenhouse gases and partly other anthropogenic contributions. However you add it up, this event probably would not have happened with such severity, if not for climate change. And it illustrates the folly of assuming that the infrastructure of the future should be designed to withstand the climate of the past 100 years.
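For readers who want to retrace the arithmetic of the last few paragraphs, here is a small Python sketch. Every input is one of the post's own round-number estimates (0.3 C of tropical Atlantic warming, 70% relative humidity, a 17 g/kg mixing ratio, a 10% century-scale precipitation increase), so treat the output as a reproduction of back-of-the-envelope reasoning, not a measurement.

```python
# Inputs: the post's own estimates, not observations.
dT = 0.3      # long-term tropical Atlantic warming, deg C
rh = 0.70     # typical relative humidity of tropical air
q = 17.0      # typical actual water vapor mixing ratio, g/kg
rain = 13.62  # Nashville two-day rainfall total, inches

# Rule of thumb: +10 C doubles the saturation mixing ratio,
# so a warming of dT scales it by 2**(dT/10).
dq_sat = (q / rh) * (2 ** (dT / 10) - 1)  # ~0.5 g/kg
gw_fraction = (dq_sat * rh) / q           # ~2% attributable to global warming
print(f"Global warming share: {gw_fraction:.1%} -> {gw_fraction * rain:.2f} in")

# Redoing it with the observed 10% century-scale precipitation increase:
cc_fraction = 0.10
print(f"Climate change share: {cc_fraction:.0%} -> {cc_fraction * rain:.2f} in")
print(f"Hypothetical 1900 event: {rain * (1 - cc_fraction):.2f} in")  # 12.26
```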
Clear up the confusion of how all-encompassing terms like artificial intelligence, machine learning, and deep learning differ. Machine learning and artificial intelligence (AI) are all the rage these days — but with all the buzzwords swirling around them, it's easy to get lost and not see the difference between hype and reality. For example, just because an algorithm is used to calculate information doesn't mean the label "machine learning" or "artificial intelligence" should be applied. Before we can even define AI or machine learning, though, I want to take a step back and define a concept that is at the core of both AI and machine learning: the algorithm.
What Is an Algorithm?
An algorithm is a set of rules to be followed when solving problems. In machine learning, algorithms take in data and perform calculations to find an answer. The calculations can be very simple or quite complex. Algorithms should deliver the correct answer in the most efficient manner. What good is an algorithm if it takes longer than a human would to analyze the data? What good is it if it provides incorrect information? Algorithms need to be trained to learn how to classify and process information. The efficiency and accuracy of an algorithm depend on how well it was trained. Using an algorithm to calculate something does not automatically mean machine learning or AI was being used. All squares are rectangles, but not all rectangles are squares. Unfortunately, today, we often see the machine learning and AI buzzwords thrown around to indicate that an algorithm was used to analyze data and make a prediction. Using an algorithm to predict the outcome of an event is not machine learning. Using the outcome of your prediction to improve future predictions is.
AI vs. Machine Learning vs. Deep Learning
AI and machine learning are often used interchangeably, especially in the realm of big data. But they aren't the same thing, and it is important to understand how they can be applied differently. Artificial intelligence is the broader concept: the use of computers to mimic the cognitive functions of humans. When machines carry out tasks based on algorithms in an "intelligent" manner, that is AI. Machine learning is a subset of AI and focuses on the ability of machines to receive a set of data and learn for themselves, changing algorithms as they learn more about the information they are processing. Training computers to think like humans is achieved partly through the use of neural networks. Neural networks are a series of algorithms modeled after the human brain. Just as the brain can recognize patterns and help us categorize and classify information, neural networks do the same for computers. The brain is constantly trying to make sense of the information it is processing, and to do this, it labels and assigns items to categories. When we encounter something new, we try to compare it to a known item to help us understand and make sense of it. Neural networks do the same for computers. Benefits of neural networks:
- Extract meaning from complicated data
- Detect trends and identify patterns too complex for humans to notice
- Learn by example
Deep learning goes yet another level deeper and can be considered a subset of machine learning. The concept of deep learning is sometimes just referred to as "deep neural networks," referring to the many layers involved.
A neural network may only have a single layer of data, while a deep neural network has two or more. The layers can be seen as a nested hierarchy of related concepts or decision trees: the answer to one question leads to a set of deeper, related questions. Deep learning networks need to see large quantities of items in order to be trained. Instead of being programmed with the criteria that define items, the systems learn from exposure to millions of data points and identify those defining edges themselves. An early example of this is the Google Brain learning to recognize cats after being shown over ten million images.
Data Is at the Heart of the Matter
Whether you are using an algorithm, artificial intelligence, or machine learning, one thing is certain: if the data being used is flawed, then the insights and information extracted will be flawed. What is data cleansing? "The process of detecting and correcting (or removing) corrupt or inaccurate records from a record set, table, or database and refers to identifying incomplete, incorrect or irrelevant parts of the data and then replacing, modifying or deleting the dirty or coarse data." And according to the CrowdFlower Data Science report, data scientists spend the majority of their time cleansing data — and surprisingly this is also their least favorite part of their job. Despite this, it is also the most important part, as the output can't be trusted if the data hasn't been cleansed. For AI and machine learning to continue to advance, the data driving the algorithms and decisions needs to be high-quality. If the data can't be trusted, how can the insights from the data be trusted?
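The "prediction is not learning; updating from outcomes is" distinction drawn above can be shown in a few lines of Python. This is an invented toy example (the threshold rule and update step are made up for illustration, not taken from any library): the first function is a plain algorithm, while the class improves its rule from feedback.

```python
def fixed_prediction(temperature_c: float) -> bool:
    """Plain algorithm: a hard-coded rule that never changes."""
    return temperature_c > 25.0  # predict "high ice-cream sales" above 25 C

class LearningPredictor:
    """Learns its threshold from feedback instead of keeping a fixed rule."""
    def __init__(self, threshold: float = 25.0, rate: float = 0.1):
        self.threshold = threshold
        self.rate = rate

    def predict(self, temperature_c: float) -> bool:
        return temperature_c > self.threshold

    def update(self, temperature_c: float, actual_high_sales: bool) -> None:
        # Nudge the threshold toward the evidence whenever we were wrong.
        if self.predict(temperature_c) != actual_high_sales:
            direction = -1.0 if actual_high_sales else 1.0
            self.threshold += direction * self.rate * abs(temperature_c - self.threshold)

print(fixed_prediction(23.0))  # always False below 25 C, no matter the evidence

model = LearningPredictor()
for temp, outcome in [(22.0, True), (23.0, True), (24.0, True)]:
    model.update(temp, outcome)
print(f"Learned threshold: {model.threshold:.1f} C")  # drifts below 25
```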
Kyrgyzstan is a democratic, secular republic. Its first post-Soviet constitution was ratified in 1993 after a great deal of public debate. Major constitutional amendments were approved by referendum in 1994 and 1996. Under the constitution, all citizens age 18 and older are eligible to vote. The president of Kyrgyzstan acts as head of state. The president is directly elected for a five-year term and may serve no more than two consecutive terms. The president appoints the prime minister, with the approval of the legislature, to head the government. The president also appoints the cabinet of ministers, on the recommendations of the prime minister. The constitutional amendments of 1994 gave the president the right to call for referendums without the approval of the legislature and approved referendums as a means of amending the constitution. The 1996 amendments broadened presidential powers at the expense of the legislature, giving the president the authority to veto legislation and appoint cabinet ministers (except the prime minister) without legislative approval. Kyrgyzstan has a bicameral legislature called the Jogorku Kenesh (Supreme Council). It consists of a 35-member Legislative Assembly (lower chamber), which is a standing body that represents the population as a whole, and a 70-member Assembly of People's Representatives (upper chamber), which meets twice yearly to debate matters of regional interest. Members of both chambers are directly elected for five-year terms, although members of the Assembly of People's Representatives are elected on a regional basis. The judicial system consists of the Constitutional Court, the Supreme Court, the Higher Court of Arbitration (which decides legal disputes between businesses), and regional and local courts. The president nominates all judges, and the Jogorku Kenesh confirms them. The president may remove regional and local judges on the basis of a poor performance review. The Constitutional Court holds supreme authority in constitutional matters and comprises seven judges, in addition to a chairperson and his or her deputies; its judges are appointed to serve for 15 years. The Supreme Court is the country's highest court in matters of civil, criminal, and administrative justice; its judges are appointed to serve for ten years. For purposes of local government, Kyrgyzstan is divided into six regions and the municipality of Bishkek. Each region is in turn divided into districts. The most important official in each region is the governor, or akim, who is appointed by the president. Each region also has a popularly elected legislature, but these bodies have little political power. Bishkek is administered independently of regional authority, and its local government reports directly to the central government. Kyrgyzstan has a multiparty system, and 11 parties contested the most recent national parliamentary elections in 1995. In addition, approximately 500 officially recognized private associations—ranging from labor unions and women's organizations to sporting clubs—also play a role in politics. Until 1990 the Kirgiz Communist Party was the only legal party in the republic. It was disbanded in 1991 and then reestablished in 1992 as the Kyrgyz Communist Party, but by then it had lost its monopoly of power. Only 3 of its candidates were elected to the 105-member Jogorku Kenesh in the 1995 elections.
The Social Democratic Party, a party of local governors formed in 1994, captured 14 seats, the largest single bloc in parliament. The next largest groups, with 4 seats each, are the Asaba Party of National Revival, advocating the revival of Kyrgyz language and culture, and the National Unity Democratic Movement, working for unity among Kyrgyzstan's different ethnic groups. A number of nationalist parties hold seats in parliament, including the Ata-Meken (Motherland) Party and the Erkin (Free) Kyrgyzstan Democratic Party. Other parties that won seats in the 1995 elections represent specific interests. The Agrarian Party of Kyrgyzstan, for example, represents the interests of farmers, while other parties advocate the interests of ethnic Russians and Germans. Until Kyrgyzstan became independent, its armed forces were part of the Soviet security system. In 1992 Kyrgyzstan began to form a national defense force, and by 2001 it had an army of 9,000 troops. All 18-year-old males must perform military service for a period of 12 to 18 months. Since 1991 Kyrgyzstan has been a member of the Commonwealth of Independent States (CIS), a loose alliance of 12 former Soviet republics. Kyrgyzstan became a full member of the United Nations (UN) in 1992. Also that year, the republic joined the Economic Cooperation Organization (ECO), an organization that promotes economic and cultural cooperation between Islamic states, and the Organization for Security and Cooperation in Europe (OSCE; until 1994 named the Conference on Security and Cooperation in Europe). In 1994 Kyrgyzstan became a participant in the Partnership for Peace program of the North Atlantic Treaty Organization (NATO), a program that allows for limited military cooperation between NATO and non-NATO states.
Scientists have discovered that there is a 1 in 300 chance that a huge asteroid, called 1950 DA, will collide with Earth in the year 2880. If the asteroid does hit Earth, it is expected to strike the planet at a speed of 33,800 miles per hour, which scientists say would create an explosive force equal to around 44,800 megatons of TNT. The impact would probably destroy human life on the planet as we know it. However, researchers are looking into ways to possibly destroy or divert the asteroid before it hits. Luckily for them, they have a lot of time, as the giant rock is not expected to reach Earth until the year 2880. It was recently discovered that some asteroids, like the massive 1950 DA, are not held together by gravity. The asteroid 1950 DA is actually spinning too quickly to be held together by gravity, meaning that it "defies gravity." "We found that 1950 DA is rotating faster than the breakup limit for its density," said Ben Rozitis, a postdoctoral researcher at the University of Tennessee. "So if just gravity were holding this rubble pile together, as is generally assumed, it would fly apart. Therefore, interparticle cohesive forces must be holding it together." Asteroid 1950 DA, which has a diameter of one kilometer, is currently rotating once every two hours and six minutes as it travels through space at the amazingly fast speed of nine miles per second. It is held together by cohesive forces, called van der Waals forces. By learning more about these cohesive forces and how they keep asteroids like 1950 DA together, scientists think that they may be able to stop the huge asteroid from ever hitting the planet. "Following the February 2013 asteroid impact in Chelyabinsk, Russia, there is renewed interest in figuring out how to deal with the potential hazard of an asteroid impact," explained Rozitis. "Understanding what holds these asteroids together can inform strategies to guard against future impacts." A study of the asteroid 1950 DA was published in the journal Nature.
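A rough sanity check of the "faster than the breakup limit" claim can be done with introductory physics. The density below is an assumed round number (the study derives its own value from observations), so this Python sketch only illustrates the idea that a ~2.1-hour spin outpaces what gravity alone can hold together.

```python
import math

G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
density = 2000.0                 # assumed bulk density, kg/m^3 (illustrative)
spin_period = 2 * 3600 + 6 * 60  # 2 h 6 min rotation, in seconds

# For a spherical rubble pile, loose material stays put on the equator
# only if the spin period exceeds sqrt(3*pi / (G * rho)).
breakup_period = math.sqrt(3 * math.pi / (G * density))

print(f"Gravity-only breakup limit: {breakup_period / 3600:.2f} h")  # ~2.3 h
print(f"Actual spin period:         {spin_period / 3600:.2f} h")     # 2.10 h
if spin_period < breakup_period:
    print("Spinning faster than the limit -> cohesive forces required.")
```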
The phlebotomine sand flies, Phlebotomus spp (Old World sand flies) and Lutzomyia spp (New World sand flies), are members of the family Psychodidae. These flies are confined primarily to the tropical and subtropical regions of the world. Members of these genera are tiny, moth-like flies, ~1.5–4 mm long. The legs are about as long as the antennae, which comprise 16 segments and often have a beaded, hairy appearance. They are commonly known as sand flies, moth flies, or owl midges. The key morphologic feature for identification is that the body of the sand fly is covered with fine hairs. The females have piercing mouthparts and feed on the blood of a variety of warm-blooded animals, including people. Many species feed on reptiles. Male sand flies suck moisture from any available source and are even said to suck perspiration from people. Sand flies tend to be active only at night and are weak fliers; their flying is deterred by air currents, even slight ones. During the day, sand flies seek protection in crevices and caves, among vegetation, and within dark buildings. They often seek protection within rodent and armadillo burrows; these mammals can serve as reservoir hosts for Leishmania spp. Sand flies breed in dark, humid environments that have a supply of organic matter that serves as food for the larvae. They do not breed in aquatic environments. These tiny flies serve as an intermediate host for Leishmania spp, a protozoan parasite that infects the reticuloendothelial cells of capillaries, the spleen, and other organs but may be seen in monocytes, polymorphonuclear leukocytes, and macrophages of people, dogs, cats, horses, and sheep (see Leishmaniosis). Like black flies, sand flies can most often be collected in the field and are not found on animals. They can be identified by their small size and hairy wings and bodies. Identification of genus and species is probably best left to an entomologist.
Treatment and Control
Insecticide spraying of larval habitat is usually not possible because of the difficulty of accessing breeding sites. Removal of dense vegetation discourages breeding. Spraying of residual insecticides on surfaces in the home is the main way to control sand flies; however, this is ineffective for species that bite away from the home. Generally speaking, populations of sand flies have been reduced as a result of intense mosquito control programs. Deltamethrin-impregnated collars may be recommended to dog owners to protect their pets from sand fly bites. Last full review/revision August 2013 by Charles M. Hendrix, DVM, PhD
Value, nurture, protect and challenge. Take a lesson with Professor Von Hans! Written multiplication! Through our mathematics curriculum we enable children to:
- become fluent in the fundamentals of mathematics, for example knowing number facts and times tables off by heart.
- reason mathematically. This involves our pupils explaining their thinking and testing out theories.
- solve problems. Our pupils participate in regular problem-solving activity to enable them to apply mathematics in a range of situations.
These documents show the core teaching for each year group over the course of a year.
The Purchase of Louisiana established American independence in practice, as the Declaration of Independence had established it in principle. Although the treaty was signed on April 30, 1803, the news only reached Washington on July 3. Fittingly, President Thomas Jefferson had it announced publicly on July 4, 1803. The Purchase pushed the French from North America, brought Spanish claims there to the brink of extinction, and denied the British control or even influence in the Mississippi River valley. It added an immense domain of fertile land to the republic, created America's western destiny (Meriwether Lewis headed west from Washington the day the treaty was announced), and put America on the course of national greatness. It was also, according to Jefferson, an exercise of power not expressly granted by the Constitution of the United States. Nowhere among the Constitution's enumerated powers could Jefferson find one allowing the federal government to acquire territory. These documents tell the story of the purchase and provide Jefferson's commentary on why he—now famous as a strict constructionist—acted beyond the Constitution's powers to bring about the purchase. The letter to Livingston lays out the necessity for the United States getting control of the Louisiana Territory from the French. The letter to Nicholas, written a few months after the Purchase was announced, shows Jefferson rejecting a way to gloss over the unconstitutionality of the Purchase. Jefferson did not want to create a precedent for finding implied powers in the Constitution. He argued instead for a constitutional amendment to give the government the power he had assumed in authorizing the purchase. The letter to Colvin, written in Jefferson's retirement, offers Jefferson's justification for assuming that power. He argues that "circumstances… sometimes occur, which make it a duty in officers of high trust, to assume authorities beyond the law." Recognizing the danger in such a principle, Jefferson concludes the letter by explaining that it may be rightly applied only in a few special circumstances. David Tucker, Associate Professor of Defense Analysis, Naval Postgraduate School Letter to Robert R. Livingston, US Minister to France, Thomas Jefferson, April 18, 1802 Letter to Wilson Cary Nicholas, Thomas Jefferson, September 7, 1803 Letter to John Colvin, Thomas Jefferson, September 20, 1810 "Since the ancient Greek city states, democracy has been recognized as a form of government that could easily devolve into tyranny," says David Tucker, who with Christopher Flannery will offer a new topics course in the Master of Arts in American History and Government program: "Democracy and Tyranny." To examine this hazard, the course will consider Robert Penn Warren's great American political novel, All the King's Men, bringing to bear on it the political thought of classical authors and American statesmen. The course is offered in the first on-campus summer session. Warren's novel is often summarized as a fictional account of the career of Huey Long. This Depression-era Governor of Louisiana used a charismatic leadership style and populist policies not only to dominate his own state but also to contest for a national role. However, serious readers of the novel find its portrait of Willie Stark more complex and compelling than the historical figure on whom he is loosely based, and they recognize in Warren's novel a penetrating analysis of American mass politics.
We talked with Professors Flannery and Tucker about the themes of the novel and the questions their upcoming course will consider. What circumstances or realities make democracy vulnerable to sliding into tyranny? Flannery: Democracy is always as vulnerable to tyranny as human beings are. American democracy began with the wonderful and ambitious determination to base our politics on the consent of the governed. If the governed become strongly inclined to consent to or demand tyranny of some form, that's what we'll get. So, whether our democracy becomes a tyranny or not depends ultimately on whether the people become tyrannical. The most important reality that makes democracy vulnerable to tyranny, James Madison thought, was the fact that men are not angels. Human beings are fallible and culpable. We tend to want to do things we shouldn't do, and our reason tends to get carried away by our passions. Has the descent to tyranny always been a significant hazard for American democracy, or has this danger increased as we move further from the Founding? Flannery: It always has been. Madison was concerned with the local tyrannies developing in the independent American states after the Revolution. That was one big motive for seeking a stronger national constitution. The Antifederalists, on the other hand, became concerned that Madison's Constitution would create a consolidated government that would become a tyranny. Lincoln, however, when he was still an obscure young man in the 1830s, argued in his Lyceum Address that America was becoming more vulnerable to mob rule and to tyranny as it moved away from the Founding. Tucker: Slavery, of course, was present in American life from the beginning, and it injected a tyrannical tendency. Jefferson in Notes on Virginia says that when the children of a slave owner watch the way their father treats his slaves, they are being taught to be tyrants. Jefferson thought the institution of slavery clearly incompatible with a republican form of government, where we rule and are ruled in turn. In fact, in the antebellum period, there was a criticism of southern slaveholders' behavior in Congress: that they could not tolerate being overruled by others in the course of democratic deliberations. In the middle of Warren's novel, set in the 1920s and 1930s, there is what appears to be a digression where Warren tells a story about the antebellum and Civil War South. Why does he do this? I think Warren wants us to see a connection between the character of Willie Stark and what happened during that time. Does tyranny sometimes wear a democratic face? Flannery: Oh, yes. Democratic tyranny, or majority tyranny, was what Madison (again!) thought was the greatest threat to a republic based on majority rule. Madison's most well-known contributions to political theory are his efforts to figure out how to protect against this ever-present danger. Where a monarch is the sovereign, you have to bend your efforts toward protecting your liberty from him; when the people are sovereign, you have to bend your efforts toward protecting your liberty from them. You don't include in your course pack any material on Huey Long. Do you think the story of Huey Long is relevant to the concerns of the novel? Tucker: There are clearly parallels between Long's biography and the fictional life of Willie Stark but I don't think the particular biographical details are that important. It is relevant in this sense: People tend to think a tyranny could never happen in the US.
But Long subverted democratic government in Louisiana and was gaining support nationally. After being governor, he was a U.S. Senator and was planning on running for President. FDR is reputed to have called Huey Long one of the two most dangerous men in America. He was a very popular politician who was building a national reputation. This was the 1930s; there were lots of populist, fascist movements arising in Europe. There was an argument that democratic institutions could not survive in modern industrial states with mass populations; and then there was the economic depression and the fear that capitalism was collapsing. It’s hard for us to imagine now how volatile the situation was in the thirties. But Warren’s portrait of Willie Stark is much more interesting than what I know of the historical Huey Long. Warren saw through Long and the political danger he posed to the abiding political and moral problems we face. In doing that he made Stark more interesting than Long. But Warren also makes Stark seem a recognizable American political type. In what ways? Tucker: There are ways in which Warren portrays Stark that recall much of what we know about Lincoln. He grows up poor, is very smart, is self-taught, becomes a lawyer. He is extraordinarily ambitious. He has a great natural touch with people, a sense of humor and a knowledge of what makes people tick. Some historians hostile to Lincoln have argued that he was a tyrant. Well, Lincoln himself talked in the Lyceum speech about the danger of a tyrant arising among the American people, and one can see in this Lincoln pondering his own potential. I think among other things Warren wants us to think about the same question: Why does one extraordinary leader become the savior of the republic, while another sets out to destroy it? Do you have any advice for students as they tackle the reading?—that is, would you suggest they read the novel first, or read the selections from political philosophy and American statesmen first? Flannery: Definitely read the novel first; and again. The other readings are meant to help us become better readers of Warren’s book. Tyranny is a prominent theme in the great writings of political philosophy from the beginning. And it is typically related to democracy. The tyrannical soul does whatever it wants and has power to do, regardless of others. And democracy (literally “people power”) is traditionally described as a form of government in which the people do whatever they want. Robert Penn Warren wrote within and added to this tradition.
Healthy Food Environment
Good nutrition is vital to good health, disease prevention, and essential for healthy growth and development of children and adolescents. Evidence suggests that a diet of nutritious foods and a routine of increased physical activity could help reduce the incidence of heart disease, cancer, and diabetes—the leading causes of death and disability in the United States. Just hearing about the benefits of a balanced diet persuades some people to change their eating habits and lifestyles. For others, eating a healthy diet may be more difficult because healthy food options are not readily available, easily accessible, or affordable in their communities. In fact, scientific studies have found that low-income and underserved communities often have limited access to stores that sell healthy food, especially high-quality fruits and vegetables. And rural communities often have a higher number of convenience stores, where healthy foods are less available than in larger, retail food markets. Planning for improvement in overall community health should include access to affordable and healthy food. Planners, local government officials, food retailers, and food policy councils are among those who can help ensure a healthy food environment in their community. For more information, please refer to the following resources.
Strategies for Creating and Maintaining a Healthy Food Environment:
- Land Use Planning and Urban/Peri-Urban Agriculture
- Farmland Protection
- Food Policy Councils
- Retail Food Stores: Grocery Stores and Supermarkets and Small Retail Locations
- Community Gardens
- Farmers Markets, Community Supported Agriculture, and Local Food Distribution
- Transportation and Food Access
- Farm-To-Institution and Food Services
Tools for Assessing Areas with Limited Healthy and Fresh Food Access and Examining Community Food Systems:
References Used To Develop This Page:
Story M, Kaphingst KM, Robinson-O'Brien R, Glanz K. Creating healthy food and eating environments: policy and environmental approaches. Annual Review of Public Health 2008;29:253–72.
Powell L. Food store availability and neighborhood characteristics in the United States. Preventive Medicine 2007;44:189–95.
Liese AD, Weis KE, Pluto D, Smith E, Lawson A. Food store types, availability and cost of foods in a rural environment. J Am Dietetic Assoc 2007;107(11):1916–23.
Zenk SN, Schulz AJ, Israel BA, James SA, Bao SM, Wilson ML. Fruit and vegetable access differs by community racial composition and socioeconomic position in Detroit, Michigan. Ethn Dis 2006;16(1):275–80.
Glanz K, et al. Healthy nutrition environments: concepts and measures. Am J Health Promotion 2005;19(5):330–3.
Horowitz CR, Colson KA, Hebert PL, Lancaster K. Barriers to buying healthy foods for people with diabetes: evidence of environmental disparities. Am J Pub Health 2004;94:1549–54.
Chapter 10 - DC Network Analysis In Millman's Theorem, the circuit is re-drawn as a parallel network of branches, each branch containing a resistor or series battery/resistor combination. Millman's Theorem is applicable only to those circuits which can be re-drawn accordingly. Here again is our example circuit used for the last two analysis methods: And here is that same circuit, re-drawn for the sake of applying Millman's Theorem: By considering the supply voltage within each branch and the resistance within each branch, Millman's Theorem will tell us the voltage across all branches. Please note that I've labeled the battery in the rightmost branch as "B3" to clearly denote it as being in the third branch, even though there is no "B2" in the circuit! Millman's Theorem is nothing more than a long equation, applied to any circuit drawn as a set of parallel-connected branches, each branch with its own voltage source and series resistance: Substituting actual voltage and resistance figures from our example circuit for the variable terms of this equation, we get the following expression: The final answer of 8 volts is the voltage seen across all parallel branches, like this: The polarities of all voltages in Millman's Theorem are referenced to the same point. In the example circuit above, I used the bottom wire of the parallel circuit as my reference point, and so the voltages within each branch (28 for the R1 branch, 0 for the R2 branch, and 7 for the R3 branch) were inserted into the equation as positive numbers. Likewise, when the answer came out to 8 volts (positive), this meant that the top wire of the circuit was positive with respect to the bottom wire (the original point of reference). If both batteries had been connected backwards (negative ends up and positive ends down), the voltage for branch 1 would have been entered into the equation as -28 volts, the voltage for branch 3 as -7 volts, and the resulting answer of -8 volts would have told us that the top wire was negative with respect to the bottom wire (our initial point of reference). To solve for resistor voltage drops, the Millman voltage (across the parallel network) must be compared against the voltage source within each branch, using the principle of voltages adding in series to determine the magnitude and polarity of voltage across each resistor: To solve for branch currents, each resistor voltage drop can be divided by its respective resistance (I=E/R): The direction of current through each resistor is determined by the polarity across each resistor, not by the polarity across each battery, as current can be forced backwards through a battery, as is the case with B3 in the example circuit. This is important to keep in mind, since Millman's Theorem doesn't provide as direct an indication of "wrong" current direction as do the Branch Current or Mesh Current methods. You must pay close attention to the polarities of resistor voltage drops as given by Kirchhoff's Voltage Law, determining direction of currents from that. Millman's Theorem is very convenient for determining the voltage across a set of parallel branches, where there are enough voltage sources present to preclude solution via the regular series-parallel reduction method. It also is easy in the sense that it doesn't require the use of simultaneous equations. However, it is limited in that it only applies to circuits which can be re-drawn to fit this form. It cannot be used, for example, to solve an unbalanced bridge circuit.
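Since the equation figures from the original lesson are not reproduced here, the following Python sketch restates Millman's Theorem in executable form. The branch voltages (28, 0, and 7 volts) come from the text above; the resistor values are assumed example figures, chosen to be consistent with the 8-volt answer.

```python
def millman(voltages, resistances):
    """Voltage across parallel branches, each a source in series with R."""
    num = sum(v / r for v, r in zip(voltages, resistances))
    den = sum(1 / r for r in resistances)
    return num / den

v_branches = [28.0, 0.0, 7.0]  # B1, (no source), B3 -- volts, from the text
r_branches = [4.0, 2.0, 1.0]   # R1, R2, R3 -- ohms (assumed example values)

v_parallel = millman(v_branches, r_branches)
print(f"Voltage across all branches: {v_parallel:.1f} V")  # 8.0 V

# Resistor drops follow by series subtraction, branch currents by Ohm's law:
for v_src, r in zip(v_branches, r_branches):
    drop = v_src - v_parallel
    print(f"  branch drop = {drop:+.1f} V, current = {drop / r:+.2f} A")
```

Note that the three computed branch currents sum to zero, as Kirchhoff's Current Law requires, and the negative signs flag the branches where current is forced backwards through a source, matching the B3 discussion above.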
And, even in cases where Millman's Theorem can be applied, solving for individual resistor voltage drops can be a bit daunting to some, since the Millman's Theorem equation provides only a single figure for branch voltage. As you will see, each network analysis method has its own advantages and disadvantages. Each method is a tool, and there is no tool that is perfect for all jobs. The skilled technician, however, carries these methods in his or her mind like a mechanic carries a set of tools in his or her tool box. The more tools you have equipped yourself with, the better prepared you will be for any eventuality.
- Millman's Theorem treats circuits as a parallel set of series-component branches.
- All voltages entered and solved for in Millman's Theorem are polarity-referenced at the same point in the circuit (typically the bottom wire of the parallel network).
Published under the terms and conditions of the Design Science License
Contextual Factors of the Classroom This paper discusses the contextual factors within the school community and how they can affect the learning and teaching process. The paper looks at different factors such as community, school district, classroom, and student characteristics. Within these factors an explanation is given for how each one can affect student learning and achievement. Also, implications are discussed and strategies are given for how teachers can incorporate contextual factors and still reach student achievement. Contextual Factors of the Classroom, School, and Community and How They Affect the Teaching and Learning Process Many people think that there aren't many contextual factors within the teaching profession. They think that the teacher teaches the lesson, the students listen quietly, and then they complete their assignments. While that may be a "dream" classroom, it is far from reality. There are many implications that go along with the profession. The surrounding community, as well as the school and school district, contributes many factors that affect the teaching and learning process. Classroom dynamics and student characteristics are also important factors when it comes to teacher planning and student learning. Teachers need to take all of these factors into account to ensure that all the needs of our students are met. The community plays a big part in the learning process and school achievement. Some communities are highly transient areas. Many people move around depending on where jobs are located, which leads to students coming and going throughout the academic school year. This instability causes a disruption in teaching. Achievement gaps are created because student instruction is not consistent, which leads to poor motivation within student learning. Also, some states may have a lot of English Language Learners (ELL) depending on where they are located geographically. States close to national borders may notice an increase in ELL students, which can also make providing instruction a challenge. Teachers will have to adapt their instruction to provide strategies for ELL students and make sure that they enrich their vocabulary knowledge. Different economic statuses may also be a contributing factor within the teaching and learning process. Schools that are located in low socio-economic areas may not get the support or resources from the outside community. Parents are not able to provide supplies for their children or the classroom, which may hinder instructional opportunities for students. In contrast, schools that are located in higher-income areas have a lot of community support and local donations to help provide students with the adequate resources they need for learning. School districts have an immense influence when it comes to the learning and teaching process. They are the ones that pave the way for academic success. Lately, many districts have been going through a budget shortfall. They are being forced to lay off teachers and fill the positions with long-term substitutes. Many long-term substitutes do not have the same educational background and training that licensed teachers have, which may result in academic failure for our students. Districts are also being forced to cut many programs and resources. The most frequent programs to be cut are extracurricular activities. Many students gain motivation from these extracurricular activities.
Students are required to hold a certain grade point average, which forces students to try harder in school and promotes academic achievement. In Douglas Reeves' (2008) article, a study was conducted to measure the relevance between extracurricular activities and student achievement. Woodstock High School, in Woodstock, Ill., found that students who took part in three or four extracurricular activities during the year had dramatically better grades than those who participated in no extracurricular activities. The classroom is the most important factor when it comes to student learning and teacher instruction. It is a place where students should feel safe and learning should be promoted; therefore, it should be clean and in superior condition. All student desks and chairs should be in good condition. If classroom furniture is uncomfortable, students may lose focus. Students should also be facing the direction where instruction is being taught; if they have to turn around to see, their focus will be lost. The materials in the classroom should be organized and available for easy access. This allows little time to be taken away from instruction. Technology resources are another contributing factor for student learning and the teaching process. Resources such as computers, SMART boards, and Elmos provide a more hands-on learning experience for students. These resources will allow teachers to prepare our students for the technology-savvy professional community. Another factor within the classroom is a strong sense of rules and routines. Students need to know and abide by classroom routines and rules. In the classroom, students often spend a lot of time waiting for a new activity to begin. This can lead to a lot of wasted instructional time. It is important for teachers to have effective routines in place so that the maximum amount of instructional time can be utilized. Proper routines and rules also lead to minimal disruptions and behavior problems, thereby promoting the learning process. Cooperative learning is another important factor for the learning process. Students are able to work together and build a classroom community. During group work, students are typically required to use problem-solving strategies to come up with solutions, which enhances critical thinking skills. Cooperative groups are also typically heterogeneous so that varied levels are incorporated into each group; everyone has something different to bring to the group. Students often come into our classroom with a whole lot of "baggage." There are many factors that students have to deal with which can affect their learning process. Many classrooms today are multicultural. It is important that teachers understand the cultural differences within their classroom, and get to know their students. Students may come from a background where education is not well respected and higher education is not an option. This may have an effect on those individual students' achievement. Teachers will need to modify and engage learning to help motivate these students. In today's classrooms, many students have special needs. Teachers need to realize that not all students are on the same level, and that instruction should be differentiated to meet the needs of each student.
Most schools have adopted the inclusive model, where children with special needs spend at least half of the day in a general education classroom with special assistants. Inclusive classrooms not only benefit students with special needs, but the general education students as well. Teachers are provided an assistant to help during instruction, which allows more attention to be given to all students within the classroom. Special needs students are also introduced to more grade-level content, helping them reach IEP goals. In turn, this is beneficial to the learning process for all students. Students also have different learning modalities. Some students may be auditory learners; they need to hear directions or complete oral assignments. Other students may be visual learners; visual images are a big part of their instruction preference. Students who need to create things and move around may be kinesthetic learners. In a typical classroom, there will be a wide variety of these modalities. Teachers should provide an assortment of instructional techniques to meet the learning needs of all students. Not all students come into your classroom with the same knowledge or skills. Many teachers have to adapt their instruction to re-teach or build background on the upcoming content. Depending on the surrounding community, students may not have the assumed social experiences. Teachers often need to spend extra time introducing students to the content, building background, and bringing in realia to help students connect with the topic being presented. Also, students coming from the previous year may not have learned important concepts used in the next grade level. Many teachers have to spend the beginning of the school year teaching concepts that should have been mastered in the previous year. With the lack of skills or prior learning, it can take twice as long to achieve the learning goals. When planning instruction, teachers should keep many implications in mind. Teachers should become conscious of where their students come from. They need to remember that not all students come from the same culture and socio-economic background. Some students require more patience and understanding, which they may not receive at home. Not all students have the same support system at home. Many parents may work, or there may be only one parent who works two jobs. We, as teachers, need to be more understanding of our students' emotional needs. These students may need extra instructional time to help achieve learning goals. Student learning styles are another implication that teachers need to keep in mind when planning instruction. All students learn in different ways. To help with achievement, teachers should offer various activities from each modality. In my classroom I give students an assignment menu. Each menu consists of different assignments assessing the same standard. The assignments are geared to all the different learning modalities, allowing students to choose which activities they want to do. Giving students a menu of assignment choices will not only increase student achievement, but will also enhance student motivation and engagement. When planning instruction, teachers need to take all of these contextual factors into account. As teachers, we have to come to the realization that each student is different. Whether it's the community, school district, classroom, or the student characteristics, each student comes with a "bag" of who they are.
We need to embrace their "bag" and help them achieve academically, socially, and emotionally.
Strongly supported by western mining interests and farmers, the Bland-Allison Act—which provided for a return to the minting of silver coins—becomes the law of the land. The strife and controversy surrounding the coinage of silver are difficult for most modern Americans to understand, but in the late 19th century it was a topic of keen political and economic interest. Today, the value of American money is essentially secured by faith in the stability of the government, but during the 19th century, money was generally backed by actual deposits of silver and gold, the so-called "bimetallic standard." The U.S. also minted both gold and silver coins. In 1873, Congress decided to follow the lead of many European nations and cease buying silver and minting silver coins, because silver was relatively scarce and to simplify the monetary system. Exacerbated by a variety of other factors, this led to a financial panic. When the government stopped buying silver, prices naturally dropped, and many owners of primarily western silver mines were hurt. Likewise, farmers and others who carried substantial debt loads attacked the so-called "Crime of '73." They believed, somewhat simplistically, that it caused a tighter supply of money, which in turn made it more difficult for them to pay off their debts. A nationwide drive to return to the bimetallic standard gripped the nation, and many Americans came to place a near mystical faith in the ability of silver to solve their economic difficulties. The leader of the fight to remonetize silver was the Missouri Congressman Richard Bland. Having worked in mining and having witnessed the struggles of small farmers, Bland became a fervent believer in the silver cause, earning him the nickname "Silver Dick." With the backing of powerful western mining interests, Bland secured passage of the Bland-Allison Act, which became law on this day in 1878. Although the act did not provide for a return to the old policy of unlimited silver coinage, it did require the U.S. Treasury to resume purchasing silver and minting silver dollars as legal tender. Americans could once again use silver coins as legal tender, and this helped some struggling western mining operations. However, the act had little economic impact, and it failed to satisfy the more radical desires and dreams of the silver backers. The battle over the use of silver and gold continued to occupy Americans well into the 20th century.
Four forces are understood to govern the universe: the strong and weak nuclear forces, the electromagnetic — or electrical — force, and gravity. The latter two, the electrical force and gravity, are the only ones that extend to a macro range and therefore interact with matter on a large scale. Electromagnetism is responsible for chemical reactions, light, vision and virtually all interplay of matter. Almost all technology requires electricity to function, and there are several vital aspects and measurements of the electrical force. The basis of this force is the movement of electrons and the workings of positive and negative electrical charges. Particles of matter can have positive or negative electrical charges. Protons, which form the nucleus of an atom, have a positive charge, whereas the electrons that orbit the nucleus have a negative charge. Opposite charges attract one another in an effort to neutralize charge, and like charges repel, so putting the opposite poles of two magnets together causes the ends of the magnets to pull toward one another. Electricity, at its most basic, is the movement of electrons from one location to another in a static discharge or in an electronic circuit; electricity can only flow where there is an available conductive path. The electromagnetic force is so named because an electric current and a magnetic field can create each other. Passing a magnet through a coil of wire induces a current in the wire, a phenomenon known as electromagnetic induction. Similarly, running an electric current through a coiled wire produces a magnetic field around the coil. Two main measurements of electrical force govern most of the behavior that electricity exhibits when interacting with objects: voltage and resistance, from which the measurement for current derives. Voltage is the amount of electrical potential that exists from one point to another, similar to the pressure built up inside an activated water hose. The higher the voltage between two points is, the greater the electrical pressure and the more easily current will flow. The concept of resistance describes an object's propensity to resist electrical flow. The electrical current in amperes that flows from one point to another can be expressed as the voltage divided by the resistance in ohms. Electrical current is either alternating current or direct current. The difference is the direction of flow: alternating current reverses direction many times per second, while direct current maintains polarity and therefore flows in only one direction, such as through a battery.
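As a minimal illustration of the voltage-resistance-current relationship described above (current equals voltage divided by resistance), here is a short Python sketch; the numbers are arbitrary example values.

```python
def current_amperes(voltage_volts: float, resistance_ohms: float) -> float:
    """Ohm's law: I = E / R."""
    return voltage_volts / resistance_ohms

print(current_amperes(12.0, 4.0))   # 3.0 A through a 4-ohm load at 12 V
print(current_amperes(12.0, 24.0))  # 0.5 A: more resistance, less current
```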
What Did Earth's Ancient Magnetic Field Look Like? New work from Peter Driscoll, a staff scientist at DTM, suggests Earth’s ancient magnetic field was significantly different than the present day field, originating from several poles rather than the familiar two. It is published in Geophysical Research Letters. Earth generates a strong magnetic field extending from the core out into space that shields the atmosphere and deflects harmful high-energy particles from the Sun and the cosmos. Without it, our planet would be bombarded by cosmic radiation, and life on Earth’s surface might not exist. The motion of liquid iron in Earth’s outer core drives a phenomenon called the geodynamo, which creates Earth’s magnetic field. This motion is driven by the loss of heat from the core and the solidification of the inner core. An illustration of ancient Earth's magnetic field compared to the modern magnetic field, courtesy of Peter Driscoll. But the planet’s inner core was not always solid. What effect did the initial solidification of the inner core have on the magnetic field? Figuring out when it happened and how the field responded has created a particularly vexing and elusive problem for those trying to understand our planet’s geologic evolution, a problem that Driscoll set out to resolve. Here’s the issue: Scientists are able to reconstruct the planet’s magnetic record through analysis of ancient rocks that still bear a signature of the magnetic polarity of the era in which they were formed. This record suggests that the field has been active and dipolar—having two poles—through much of our planet’s history. The geological record also doesn’t show much evidence for major changes in the intensity of the ancient magnetic field over the past 4 billion years. A critical exception is in the Neoproterozoic Era, 0.5 to 1 billion years ago, where gaps in the intensity record and anomalous directions exist. Could this exception be explained by a major event like the solidification of the planet’s inner core? In order to address this question, Driscoll modeled the planet’s thermal history going back 4.5 billion years. His models indicate that the inner core should have begun to solidify around 650 million years ago. Using further 3-D dynamo simulations, which model the generation of magnetic field by turbulent fluid motions, Driscoll looked more carefully at the expected changes in the magnetic field over this period. “What I found was a surprising amount of variability,” Driscoll said. “These new models do not support the assumption of a stable dipole field at all times, contrary to what we’d previously believed.” His results showed that around 1 billion years ago, Earth could have transitioned from a modern-looking field, having a “strong” magnetic field with two opposite poles in the north and south of the planet, to having a “weak” magnetic field that fluctuated wildly in terms of intensity and direction and originated from several poles. Then, shortly after the predicted timing of the core solidification event, Driscoll’s dynamo simulations predict that Earth’s magnetic field transitioned back to a “strong,” two-pole one. “These findings could offer an explanation for the bizarre fluctuations in magnetic field direction seen in the geologic record around 600 to 700 million years ago,” Driscoll added. 
“And there are widespread implications for such dramatic field changes.” Overall, the findings have major implications for Earth’s thermal and magnetic history, particularly when it comes to how magnetic measurements are used to reconstruct continental motions and ancient climates. Driscoll’s modeling and simulations will have to be compared with future data gleaned from high quality magnetized rocks to assess the viability of the new hypothesis. Carnegie Science Press Release, 24 June 2016
I look at him, confused, and he laughs. "Because planes fly." Got it. I don't like puns, but I laugh anyway. We'll work on the ironic, observational humor at another time. The truth is that I'm glad my son can joke around. True, it's first-grade humor built on homophones, but I'm convinced that it's a powerful skill. Moreover, I'm convinced that the best classrooms are the ones where humor is present. It can be subtle. It can look very orderly. It can be sprinkled throughout an intense debate or a deep discussion. However, it needs to be present. The following are some of what I consider to be the educational benefits of humor:
- Critical Thinking: Humor often requires analytical thinking followed by a sophisticated level of synthesis. Even something as lame as my son's pun required him to analyze the language, make sense of it and create something new: a flying piece of toast.
- Social Awareness: Humor is a powerful tool in social change. It strips away the fear of those who are committing injustices. Whether it's Charlie Chaplin mocking Hitler or South Park lambasting Kim Jong Il (oh, he's dead now? I didn't even realize he was Il.), satire can be a powerful method of bringing the absurdity of an idea to life. Last year, I played clips from The Onion. Students read "A Modest Proposal." I wanted them to see how humor can be used to make sense out of the world.
- Language Arts: Humor is a chance to play around with words, make sense out of tone and learn the art of timing. Humor can also be a place where students learn to tell stories, make sense out of irony and develop deep satire.
- Empathy: Humor is a chance to display empathy toward others. It's a chance to read the group and venture out into new territory. But it's also a place to stumble into sarcasm and learn to avoid using laughter to isolate, mock and marginalize others.
- Motivation: For all the talk of tech integration, art integration or music and movement integration, I've never seen anything about the intentional integration of humor. However, I think it's necessary. Why not use puns to teach multiple meanings of words? Why not use satire to reach higher-level thinking on social issues? Last year, students created goofy comic strips to illustrate idioms (a man goes into surgery after saying, "I gave you my heart.")
- Life Skills: Whether it's in a social context or in the workforce, humor can be a powerful method of connecting with others, defusing tension and providing leadership to a group.
- Creativity: When students develop their own jokes, they learn the craft of spontaneous creativity. I'm not sure if it's something that has to be modeled and observed or something teachers should simply encourage and allow. However, I have noticed that the students with the strongest command of humor are often very creative.
- Language Development: I can tell when an ELL student is truly grasping English, because he or she becomes comfortable in telling jokes. Humor combines the colloquial with the academic, infusing idioms with texture and tone.
- Risk-Taking: Every joke is an act of vulnerability. There's a risk involved. I'm never sure if the group will laugh or simply roll their eyes and sigh.
- Classroom Community: There is an intimacy and a happiness that occurs when a group laughs together. It's why we relate to the dysfunctional team members in The Office. They laugh together.
The A to Z of Artificial Intelligence
Artificial Intelligence (AI) is a field of computer science that focuses on developing machines capable of performing tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and natural language processing. AI has the potential to revolutionize the way we live and work, and its impact is already being felt across many industries, from healthcare and finance to manufacturing and transportation. To help understand the complex and rapidly evolving field of AI, we’ve put together “The A to Z of Artificial Intelligence.” This comprehensive list of key concepts, terms, and technologies related to AI is presented in alphabetical order, providing a brief description of each term or concept.
The A to Z of Artificial Intelligence:
A – AI: Short for “Artificial Intelligence,” the field of computer science that focuses on developing machines that can perform tasks that typically require human intelligence.
B – Big Data: The massive amounts of data that are generated every day, which can be analyzed and used to train AI systems.
C – Computer Vision: The field of AI that focuses on enabling machines to interpret and understand visual information from the world around them.
D – Deep Learning: A type of machine learning that uses neural networks with multiple layers to learn increasingly complex patterns and representations.
E – Expert Systems: AI systems that mimic the decision-making abilities of a human expert in a particular field.
F – Fuzzy Logic: A type of logic that allows for imprecise or uncertain reasoning, often used in AI systems that deal with uncertain or incomplete information.
G – Genetic Algorithms: A type of AI algorithm inspired by natural selection, which can be used to optimize complex systems.
H – Heuristics: A problem-solving approach used in AI systems that involves using rules of thumb or educated guesses to make decisions.
I – Intelligent Agents: AI systems that are capable of autonomous decision-making based on their environment.
J – Jupyter Notebook: A popular tool used in the development and training of AI models, which allows for interactive coding and data analysis.
K – Knowledge Representation: The process of encoding knowledge in a format that can be used by AI systems.
L – Logic Programming: A type of programming that uses formal logic to represent knowledge and solve problems.
M – Machine Learning: A type of AI that involves training machines to learn from data, rather than being explicitly programmed.
N – Natural Language Processing: The field of AI that focuses on enabling machines to understand and generate human language.
O – Ontology: A formal representation of knowledge that can be used to enable intelligent reasoning and decision-making.
P – Planning and Scheduling: AI techniques used to generate plans or schedules that can be executed by machines.
Q – Quality Control: The use of AI systems to improve the quality of products or services.
R – Robotics: The field of AI that focuses on the development of robots and other physical machines that can interact with the world around them.
S – Speech Recognition: AI systems that can interpret and understand human speech.
T – Turing Test: A test designed to evaluate a machine’s ability to exhibit intelligent behavior that is indistinguishable from that of a human.
U – Uncertainty: The inherent uncertainty and ambiguity in many real-world problems, which can make them challenging for AI systems to solve.
V – Virtual Assistants: AI systems that can interact with humans to answer questions, perform tasks, or provide other forms of assistance.
W – Weak AI: AI systems that are designed to perform specific tasks or solve specific problems, rather than exhibiting general intelligence.
X – Explainability: The ability of an AI system to explain its decision-making process in a way that is understandable to humans.
Y – Yield Management: The use of AI systems to optimize pricing and inventory management in industries such as transportation and hospitality.
Z – Zero-shot Learning: A type of machine learning in which a model is trained to recognize new categories of objects without being explicitly trained on them.
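The "M – Machine Learning" entry is the pivot of much of this glossary: a rule is estimated from data rather than hard-coded. As a minimal, hypothetical illustration (not part of the original list), here is a tiny Python sketch in which the slope and intercept of a line are "learned" from example points by ordinary least squares:

```python
# A minimal sketch of "learning from data": instead of hard-coding the rule
# y = 2x + 1, we estimate a line y = w*x + b from noisy sample points.
# This is an illustration of the idea, not a real AI system.
def fit_line(points):
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    w = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope estimated from data
    b = (sy - w * sx) / n                          # intercept estimated from data
    return w, b

data = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8)]   # noisy samples of y = 2x + 1
w, b = fit_line(data)
print(f"learned rule: y = {w:.2f}x + {b:.2f}")     # roughly y = 1.94x + 1.09
```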
Conversely, for the southern hemisphere, today is your summer solstice. Not only does it mark the first day of summer and the longest day of the year, but it is the embodiment of "things will only get worse from here" (doubly so for those of you that live in Australia). Soon, the hot summer days, the tank tops and sunscreen are going to fade into a distant memory until temperate fall arrives. And before you get too comfortable with the moderate temperatures, winter will catch you by surprise, clutching your region in its sadistic grasp. Now that we've discussed what the winter solstice means here on Earth, let's discuss what the solstice means for Earth as a whole. Regardless of your location, the December solstice is the time of year when the Sun is at its lowest point relative to Earth's equator (a declination of about -23.5 degrees). The animation accompanying the original article helps to demonstrate this (the equator has a value of 0, moving north is positive and moving south is negative). Because of these mechanics, from the perspective of the Northern Hemisphere, the Sun will be at its lowest in the sky. Likewise, the southern hemisphere will see the Sun at its highest point in the sky. Contrary to popular belief (in the northern hemisphere, at least), this is nearly the time of year when the Earth is at its closest to the Sun: perihelion falls in early January. The hot/cold weather is not caused by the Earth's distance from the Sun; rather, it is caused by Earth's tilt. In essence, the temperatures are created based on the angle at which sunlight hits the regions in question (the length of day helps, too). The solstice has held a cultural significance for our species since the dawn of recorded history. Holidays, celebrations, and festivals usually happen around the solstice. The solstices (and the equinoxes) have been the centerpiece of our earliest architecture. Today, ancient monuments from all over the world will pay tribute to the solstice as a "once a year" event happens. When the Sun rises and sets, it casts a shadow, makes a pattern, or lines up in a specific way with these ancient tributes to the heavens.
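For readers who want to reproduce the declination figure quoted above, here is a rough sketch using a common textbook approximation for the Sun's declination; the formula and the 23.44° tilt constant are standard values brought in for illustration, not something given in the article:

```python
import math

# Approximate solar declination (degrees) on day N of the year.
# The constant 23.44 is Earth's axial tilt; the +10 shifts day 1 (Jan 1)
# relative to the December solstice. An approximation, good to ~1 degree.
def solar_declination(day_of_year):
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

print(solar_declination(355))  # around Dec 21: roughly -23.4 degrees
print(solar_declination(172))  # around Jun 21: roughly +23.4 degrees
```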
Tsunami waves are very different from storm waves: when trying to understand the impact of this phenomenon, the most common mistake is to compare two very different phenomena. Unlike wind waves, tsunami waves move the entire volume of the water column from the bottom to the surface, not just the surface layers; they have wavelengths of tens or hundreds of kilometres and carry a great deal of energy. When it reaches the coast, the speed of a tsunami decreases compared with the open sea, but it can still exceed 10 metres per second, almost forty kilometres per hour. Analyses of recent events, as well as simulations in specially equipped tanks, have shown that a tsunami wave of a few tens of centimetres can drag an adult into the sea, while higher waves can easily carry entire ships for hundreds of metres inland from the affected area, as happened, for example, in Sumatra (2004) and Japan (2011). A wave between half a metre and a metre high can unhinge doors and windows and break through heavy iron gates. Larger waves can easily break down masonry walls, tear a house from its foundations, and drag along any type of material they encounter in their path. Even the most solid buildings can be knocked down by a tsunami with particularly high waves: the waves of the Aleutian Islands tsunami on 1 April 1946 were over thirty metres high and completely destroyed the Scotch Cap lighthouse in Alaska. Five hours later, the same waves reached Hilo, Hawaii, over three thousand kilometres away, causing at least 156 deaths. Tsunami waves have enormous force: that is why, if you feel a strong or prolonged earthquake near the coast, you must immediately evacuate the area, escaping as far inland and as high as possible. As video made in a purpose-equipped testing tank shows, a wave of just over thirty centimetres moves faster than a person running to escape, and just thirty centimetres of water are more than enough to knock an adult man to the ground and drag him away. In addition to the immediate effects of tsunamis, linked to the kinetic energy carried by moving water, there are also long-term effects on the environment and on agriculture. After a tsunami, large quantities of sea salt are deposited on the ground; the salt then tends to accumulate and concentrate, killing existing plants - including trees - and preventing new ones from growing, to the point of making any cultivation impossible and thus eliminating an important source of livelihood for the affected population, adding more damage to that caused by the impact of the waves. The phenomenon of sedimentation of deposits during tsunami ingressions is also particularly important for identifying traces of past events for which there is no historical documentation. For example, researchers have been able to date a major tsunami which hit the coast of Japan at the beginning of the year 1700 by studying the hundreds of dead trees in the Copalis Ghost Forest in Washington State, on the northwest coast of the United States. The trees were submerged by a flood of at least one metre caused by the tsunami travelling upstream along the river.
By analysing the growth rings it was possible to determine the exact year of their death, which occurred precisely because of the salt deposited in the ground. The discovery is very important, as it made it possible to identify the seismic source of a major tsunami which hit the eastern coasts of Japan, called an "orphan tsunami" because the Japanese catalogues recorded no local earthquake which could have generated it. The areas flooded by tsunamis also contain large quantities of sediments of various kinds from the seabed, including mud, sand, stones and, very frequently, shells. The analysis of sections of these deposits, called tsunamites, allows researchers to collect more precise data on this type of phenomenon, improving our knowledge of tsunamis which occurred in history and making it possible to study events from ancient times, in order to understand the frequency of tsunamis and the danger they pose to certain areas.
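The slow-down described above (open-ocean speeds dropping to roughly 10 m/s at the coast) follows from the standard shallow-water relation c = sqrt(g·d): the wave speed depends on water depth d, not on wave height. A minimal sketch, with illustrative depths that are assumptions rather than figures from the text:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

# Shallow-water wave speed: valid for tsunamis because their wavelength
# (tens to hundreds of km) is far greater than the ocean depth.
def tsunami_speed(depth_m):
    return math.sqrt(g * depth_m)

for depth in (4000, 200, 10):  # open ocean, continental shelf, near shore
    c = tsunami_speed(depth)
    print(f"depth {depth:>5} m -> {c:6.1f} m/s ({c * 3.6:6.1f} km/h)")
# depth 4000 m -> ~198 m/s (~713 km/h); depth 10 m -> ~9.9 m/s (~36 km/h),
# matching the "10 metres per second, almost forty kilometres per hour" above
```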
Do you find your mood and energy levels slump as autumn sets in, and continues on through the winter months? Perhaps you just put it down to the winter blues and suffer through it. You’re not alone. This type of depression is known as Seasonal Affective Disorder (SAD), and is not at all uncommon. Let’s take a look at why we feel this way, and a few things we can do to reduce the SAD symptoms. But why does SAD happen? - The change in seasons can mess with your biological clock (circadian rhythm). The decrease in sunlight may cause your body’s internal clock to be out of sync with ‘external clocks’ and disrupt the hormones that regulate your appetite and metabolism. - Your serotonin levels fall. Reduced sunlight can cause a drop in serotonin, a ‘feel-good’ brain chemical (neurotransmitter). - Your melatonin levels increase. When it’s dark, the brain produces the hormone melatonin, which makes us sleep. When it becomes light again, it stops producing melatonin and we wake up. When the days are shorter and darker the production of this hormone increases. It has been found that people with SAD produce much higher levels of melatonin in winter than other people. (This is also what happens to animals when they hibernate). - Living far from the equator. SAD appears to be more common among people who live far north or south of the equator. This may be due to less sunlight during the winter and longer days during the summer months. Do I have SAD? We all experience days where we feel down and unmotivated, but how do you identify if it is something more than simply a run of bad days? Symptoms of SAD can include: - Feeling depressed most of the day, nearly every day - Losing interest in activities you once enjoyed - Tiredness or low energy - Having problems with sleeping or oversleeping - Experiencing changes in your appetite or weight, especially craving foods high in carbohydrates - Feeling sluggish or agitated - Having difficulty concentrating - Feeling hopeless, worthless or guilty Before turning to your GP for an antidepressant prescription, try implementing some of these alternative mood boosters. Make your home sunnier Pull back the curtains, raise the blinds, and if you work from home move your desk near to the window. Lots of natural light in the home can elevate those ‘happy’ hormones. Vitamin D supplementation Vitamin D deficiency is extremely common and has long been linked with SAD. Try taking a Vitamin D3 supplement to get an extra boost of this vitamin. Since vitamin D is fat soluble, taking some form of healthy fat with it will help optimise absorption. Invest in a light box In light therapy, also called phototherapy, you sit a few feet from a special light box so that you’re exposed to bright light within the first hour of waking up each day. Light therapy simulates natural sunshine and appears to cause a change in brain chemicals linked to mood. Here’s a two-for-one. Regular physical activity has been found to work even better than antidepressant drugs by boosting that glorious serotonin. Combine your exercise with getting out in nature in the sunshine and feel your mood lift! Even on cloudy days outdoor light can help. Try and get outside for some light exercise every day if you can. It can be the last thing you feel like doing when you’re not feeling 100%, but if you can manage it try and connect with people you enjoy being around. They can often offer support, a shoulder to cry on or a good laugh, which can leave you feeling miles better. 
A morning meditation is a great way to set you up for a good day and can allow you to step outside of those depressive/anxious thoughts. Similarly other mind-body techniques such as tai chi, yoga, music or art therapy can have a really positive impact on your mood and help counteract feelings associated with SAD. How do I know when to call a doctor? If your symptoms are beyond feeling a bit down in the dumps, and are causing disruptions in your life, then never hesitate to reach out to a professional. If symptoms occur for days at a time, you notice major shifts in sleeping or eating, you are withdrawing from friends and family, or the activities that usually boost your mood don’t work, then seek help immediately. Written by Ché Miller About the author Ché has always had a passion for hospitality having completed a conjoint Bachelors Degree in International Business and Hospitality Management. She has spent the last 15 years working in the hospitality industry. When this passion led her to working in a premier health retreat in Australia in her twenties, she found the knowledge she gained there inspired her to start living a healthier life. Now Ché loves to combine her two favourite things, hospitality and wellbeing, by scouring the island for the best nourishing restaurants, products and services. She has been living in Mallorca since early 2017, having moved from her home in New Zealand. She absolutely loves the energy of the island and everything it has to offer. Ché’s other interests include ashtanga yoga, boxing, reading, writing, and really good coffee.
In California and the western U.S. states, earthquake faults can be hundreds of miles long and visible on the surface of the Earth. In the central U.S., however, faults are buried deep underground and are generally categorized as "seismic zones", or areas where many smaller faults are clustered together to produce seismic activity. While some zones, such as the New Madrid Seismic Zone, may be more widely known, others such as the East Tennessee and Central Virginia Seismic Zones can also produce damaging earthquakes at any time. Use the links below to find out more about these zones and earthquakes that affect the central and eastern U.S.
Central Virginia Seismic Zone
Charleston/South Carolina Seismic Zones
E. Tennessee Seismic Zone
November 16, 2005: Is Earth in a vortex of space-time? We'll soon know the answer: a NASA/Stanford physics experiment called Gravity Probe B (GP-B) recently finished a year of gathering science data in Earth orbit. The results, which will take another year to analyze, should reveal the shape of space-time around Earth--and, possibly, the vortex. According to Einstein's theory of relativity, space and time are woven into a fabric called space-time, and Earth's mass makes a dimple in that fabric. If Earth were stationary, that would be the end of the story. But Earth is not stationary. Our planet spins, and the spin should twist the dimple, slightly, pulling it around into a 4-dimensional swirl. This is what GP-B went to space to check.
Above: An artist's concept of twisted space-time around Earth.
The idea behind the experiment is simple: Put a spinning gyroscope into orbit around the Earth, with the spin axis pointed toward some distant star as a fixed reference point. Free from external forces, the gyroscope's axis should continue pointing at the star--forever. But if space is twisted, the direction of the gyroscope's axis should drift over time. By noting this change in direction relative to the star, the twists of space-time could be measured.
In practice, the experiment is tremendously difficult. The four gyroscopes in GP-B are the most perfect spheres ever made by humans. These ping-pong-ball-sized spheres of fused quartz and silicon are 1.5 inches across and never vary from a perfect sphere by more than 40 atomic layers. If the gyroscopes weren't so spherical, their spin axes would wobble even without the effects of relativity.
According to calculations, the twisted space-time around Earth should cause the axes of the gyros to drift merely 0.041 arcseconds over a year. An arcsecond is 1/3600th of a degree. To measure this angle reasonably well, GP-B needed a fantastic precision of 0.0005 arcseconds. It's like measuring the thickness of a sheet of paper held edge-on 100 miles away. GP-B researchers invented whole new technologies to make this possible. They developed a "drag free" satellite that could brush against the outer layers of Earth's atmosphere without disturbing the gyros. They figured out how to keep Earth's penetrating magnetic field out of the spacecraft. And they concocted a device to measure the spin of a gyro--without touching the gyro.
Pulling off the experiment was an exceptional challenge. A lot of time and money was on the line, but the GP-B scientists appear to have done it. "There were not any major surprises" in the experiment's performance, says physics professor Francis Everitt, the Principal Investigator for GP-B at Stanford University. Now that data-taking is complete, he says the mood among the GP-B scientists is "a lot of enthusiasm, and a realization also that a lot of grinding hard work is ahead of us."
A careful, thorough analysis of the data is underway. The scientists will do it in three stages, Everitt explains. First, they will look at the data from each day of the year-long experiment, checking for irregularities. Next they'll break the data into roughly month-long chunks, and finally they'll look at the whole year. By doing it this way, the scientists should be able to find any problems that a simpler analysis might miss. Eventually scientists around the world will scrutinize the data. Says Everitt, "we want our sternest critics to be us."
The stakes are high. If they detect the vortex, precisely as expected, it simply means that Einstein was right, again. But what if they don't?
There might be a flaw in Einstein's theory, a tiny discrepancy that heralds a revolution in physics. First, though, there are a lot of data to analyze. Stay tuned. Authors: Patrick L. Barry and Dr. Tony Phillips | Editor: Dr. Tony Phillips | Credit: Science@NASA Gravity Probe B -- (Stanford University) the mission's home page A Pocket of Near Perfection -- (Science@NASA) Now orbiting Earth, Gravity Probe B is a technological tour de force. Was Einstein a Space Alien? -- (Science@NASA) One hundred years ago, Albert Einstein revolutionized physics. In Search of Gravitomagnetism -- (Science@NASA) Gravity Probe B has left Earth to measure a subtle yet long-sought force of Nature. A Pop Quiz for Einstein-- (Science@NASA) The Gravity Probe B mission tests two important aspects of Einstein's theory of General Relativity.
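The "sheet of paper at 100 miles" analogy quoted above is easy to sanity-check. A quick sketch, assuming a typical paper thickness of 0.1 mm (a value chosen for illustration, not one given in the article):

```python
import math

# Angle subtended by a sheet of paper seen edge-on from 100 miles away,
# compared with GP-B's quoted 0.0005-arcsecond precision.
ARCSEC_PER_RAD = 206265            # arcseconds in one radian
paper_m = 0.0001                   # assumed paper thickness: ~0.1 mm
distance_m = 100 * 1609.34         # 100 miles in metres

angle_arcsec = paper_m / distance_m * ARCSEC_PER_RAD
print(f"{angle_arcsec:.5f} arcseconds")
# ~0.00013 arcseconds -- the same order of magnitude as 0.0005, so the
# analogy holds as a rough comparison
```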
01010111 01100101 01101100 01100011 01101111 01101101 01100101 00100000 01110100 01101111 00100000 01110100 01101000 01101001 01110011 00100000 01110111 01100101 01100101 01101011 00100111 01110011 00100000 01001101 01100001 01110100 01101000 00100000 01001101 01110101 01101110 01100011 01101000 00100001 Or, if you don’t speak binary, welcome to this week’s Math Munch! Looking at that really, really long string of 0s and 1s, you might think that binary is a really inefficient way to encode letters, numbers, and symbols. I mean, the single line of text, “Welcome to this week’s Math Munch!” turns into six lines of digits that make you dizzy to look at. But, suppose you were a computer. You wouldn’t be able to talk, listen, or write. But you would be made up of lots of little electric signals that can be either on or off. To communicate, you’d want to use the power of being able to turn signals on and off. So, the best way to communicate would be to use a code that associated patterns of on and off signals with important pieces of information– like letters, numbers, and other symbols. That’s how binary works to encode information. Computer scientists have developed a code called ASCII, which stands for American Standard Code for Information Interchange, that matches important symbols and typing communication commands (like tab and backspace) with numbers. To use in computing, those numbers are converted into binary. How do you do that? Well, as you probably already know, the numbers we regularly use are written using place-value in base 10. That means that each digit in a number has a different value based on its spot in the number, and the places get 10 times larger as you move to the left in the number. In binary, however, the places have different values. Instead of growing 10 times larger, each place in a binary number is twice as large as the one to its right. The only digits you can use in binary are 0 and 1– which correspond to turning a signal on or leaving it off. But if you want to write in binary, you don’t have to do all the conversions yourself. Just use this handy translator, and you’ll be writing in binary 01101001 01101110 00100000 01101110 01101111 00100000 01110100 01101001 01101101 01100101 00101110 Next up, check out this video about a classic number problem: the Infinite Hotel Paradox. If you find infinity baffling, as many mathematicians do, this video may help you understand it a little better. (Or add to the bafflingness– which is just how infinity works, I guess.) I especially like how despite how many more people get rooms at the hotel (so long as the number of people is countable!), the hotel manager doesn’t make more money… Speaking of videos, how about a math video contest? MATHCOUNTS is hosting a video contest for 6th-8th grade students. To participate, teams of four students and their teacher coach choose a problem from the MATHCOUNTS School Handbook and write a screenplay based on that problem. Then, make a video and post it to the contest website. The winning video is selected by a combination of students and adult judges– and each member of the winning team receives a college scholarship! Here’s last year’s first place video. 01000010 01101111 01101110 00100000 01100001 01110000 01110000 01100101 01110100 01101001 01110100 00100001 (That means, Bon appetit!)
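If you'd rather not rely on the linked translator, the encoding described above takes only a few lines of Python. A minimal sketch of text-to-binary and back, using 8 bits per character as in the strings in this post:

```python
# Encode each character's ASCII/Unicode code point as an 8-digit binary
# number, separated by spaces; decoding reverses the process.
def to_binary(text):
    return " ".join(format(ord(ch), "08b") for ch in text)

def from_binary(bits):
    return "".join(chr(int(byte, 2)) for byte in bits.split())

msg = "Math Munch!"
encoded = to_binary(msg)
print(encoded)                # 01001101 01100001 01110100 ...
print(from_binary(encoded))   # Math Munch!
```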
Surface of cone in the plane is a circular arc with central angle of 126° and area 415 cm2. Calculate the volume of the cone. (A worked sketch of the solution follows the list of similar problems below.)
Next similar math problems:
- Triangular prism The base of the perpendicular triangular prism is a right triangle with a hypotenuse of 10 cm and one leg of 8 cm. The prism height is 75% of the perimeter of the base. Calculate the volume and surface of the prism.
- Cone side Calculate the volume and area of the cone whose height is 10 cm and the axial section of the cone has an angle of 30 degrees between height and the cone side.
- Axial section of the cone The axial section of the cone is an isosceles triangle in which the ratio of cone diameter to cone side is 2:3. Calculate its volume if you know its area is 314 cm square.
- Four sided prism Calculate the volume and surface area of a regular quadrangular prism whose height is 28.6 cm and the body diagonal forms a 50 degree angle with the base plane.
- Angle of deviation The surface of the rotating cone is 30 cm2 (with circle base), its surface area is 20 cm2. Calculate the deviation of the side of this cone from the plane of the base.
- Lateral surface area The ratio of the area of the base of the rotary cone to its lateral surface area is 3:5. Calculate the surface and volume of the cone, if its height v = 4 cm.
- The diagram 2 The diagram shows a cone with slant height 10.5 cm. If the curved surface area of the cone is 115.5 cm2, calculate, correct to 3 significant figures: base radius, height, volume of the cone.
- 9-gon pyramid Calculate the volume and the surface of a nine-sided pyramid, the base of which can be inscribed with a circle with radius ρ = 7.2 cm and whose side edge s = 10.9 cm.
A circular cone of height 15 cm and volume 5699 cm3 is at one-third of the height (measured from the bottom) cut by a plane parallel to the base. Calculate the radius and circumference of the circular cut.
- The tent The tent shape of a regular quadrilateral pyramid has a base edge length a = 2 m and a height v = 1.8 m. How many m2 of cloth do we need to make the tent if we have to add 7% for the seams? How much m3 of air will be in the tent?
- Castle tower The castle tower has a cone-shaped roof with a diameter of 10 meters and a height of 8 meters. Calculate how many m² of covering are needed to cover it if we must add one-third for the overlap.
- Sphere parts, segment A sphere with a diameter of 20.6 cm is cut; the cut is a circle with a diameter of 16.2 cm. What are the volume of the segment and the surface of the segment?
- Surface of the cone Calculate the surface of the cone if its height is 8 cm and the volume is 301.44 cm3.
- Pyramid 8 Calculate the volume and the surface area of a regular quadrangular pyramid with the base side 9 cm and side wall with the base has an angle 75°.
- Pyramid in cube In a cube with edge 12 dm long, a pyramid is inscribed with its apex at the center of the upper wall of the cube. Calculate the volume and surface area of the pyramid.
- Tetrahedral pyramid Determine the surface of a regular tetrahedral pyramid when its volume is V = 120 and the angle of the sidewall with the base plane is α = 42° 30´.
- Sphere in cone A sphere is inscribed in the cone (the intersection of their boundaries consists of a circle and one point). The ratio of the surface of the ball and the contents of the base is 4: 3. A plane passing through the axis of a cone cuts the cone in an isoscele
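Returning to the problem at the top of this list: the unrolled lateral surface is a sector whose radius equals the cone's slant height s and whose arc length equals the base circumference 2πr. A worked sketch in Python (the result is computed here as a check, not taken from an official answer key):

```python
import math

theta = 126.0          # central angle of the sector, degrees
area = 415.0           # sector area = lateral surface of the cone, cm^2

# From area = (theta/360) * pi * s^2, solve for the slant height s.
s = math.sqrt(area * 360.0 / (theta * math.pi))
# Arc length (theta/360) * 2*pi*s equals base circumference 2*pi*r.
r = s * theta / 360.0
# Height from the right triangle formed by r, h and s.
h = math.sqrt(s * s - r * r)
V = math.pi * r * r * h / 3.0

print(f"slant s = {s:.2f} cm, radius r = {r:.2f} cm, height h = {h:.2f} cm")
print(f"volume V = {V:.0f} cm^3")   # roughly 881 cm^3
```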
In geometry, an inscribed planar shape or solid is one that is enclosed by and "fits snugly" inside another geometric shape or solid. To say that "figure F is inscribed in figure G" means precisely the same thing as "figure G is circumscribed about figure F". A circle or ellipse inscribed in a convex polygon (or a sphere or ellipsoid inscribed in a convex polyhedron) is tangent to every side or face of the outer figure (but see Inscribed sphere for semantic variants). A polygon inscribed in a circle, ellipse, or polygon (or a polyhedron inscribed in a sphere, ellipsoid, or polyhedron) has each vertex on the outer figure; if the outer figure is a polygon or polyhedron, there must be a vertex of the inscribed polygon or polyhedron on each side of the outer figure. An inscribed figure is not necessarily unique in orientation; this can easily be seen, for example, when the given outer figure is a circle, in which case a rotation of an inscribed figure gives another inscribed figure that is congruent to the original one. Familiar examples of inscribed figures include circles inscribed in triangles or regular polygons, and triangles or regular polygons inscribed in circles. A circle inscribed in any polygon is called its incircle, in which case the polygon is said to be a tangential polygon. A polygon inscribed in a circle is said to be a cyclic polygon, and the circle is said to be its circumscribed circle or circumcircle. For an alternative usage of the term "inscribed", see the inscribed square problem, in which a square is considered to be inscribed in another figure (even a non-convex one) if all four of its vertices are on that figure. - Every circle has an inscribed triangle with any three given angle measures (summing of course to 180°), and every triangle can be inscribed in some circle (which is called its circumscribed circle or circumcircle). - Every triangle has an inscribed circle, called the incircle. - Every circle has an inscribed regular polygon of n sides, for any n≥3, and every regular polygon can be inscribed in some circle (called its circumcircle). - Every regular polygon has an inscribed circle (called its incircle), and every circle can be inscribed in some regular polygon of n sides, for any n≥3. - Not every polygon with more than three sides has an inscribed circle; those polygons that do are called tangential polygons. Not every polygon with more than three sides is an inscribed polygon of a circle; those polygons that are so inscribed are called cyclic polygons. - Every triangle can be inscribed in an ellipse, called its Steiner circumellipse or simply its Steiner ellipse, whose center is the triangle's centroid. - Every triangle has an infinitude of inscribed ellipses. One of them is a circle, and one of them is the Steiner inellipse which is tangent to the triangle at the midpoints of the sides. - Every acute triangle has three inscribed squares. In a right triangle two of them are merged and coincide with each other, so there are only two distinct inscribed squares. An obtuse triangle has a single inscribed square, with one side coinciding with part of the triangle's longest side. - A Reuleaux triangle, or more generally any curve of constant width, can be inscribed with any orientation inside a square of the appropriate size.
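Two of the facts above (every triangle has an incircle, and every triangle can be inscribed in a circle) come with simple radius formulas: the inradius is r = A/s and the circumradius is R = abc/(4A), where a, b, c are the side lengths, s is the semiperimeter, and A is the area. A short sketch:

```python
import math

# Inradius and circumradius of a triangle from its side lengths.
def incircle_circumcircle(a, b, c):
    s = (a + b + c) / 2.0
    A = math.sqrt(s * (s - a) * (s - b) * (s - c))  # Heron's formula
    return A / s, a * b * c / (4.0 * A)

r, R = incircle_circumcircle(3, 4, 5)
print(f"inradius r = {r}, circumradius R = {R}")
# For a 3-4-5 right triangle: r = 1.0 and R = 2.5 (half the hypotenuse,
# since the hypotenuse of a right triangle is a diameter of its circumcircle)
```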
Mixing is a fundamental part of music production, as it is the stage at which all the sounds come together to be blended as one. A great mix can be defined as having a perfect balance of the following characteristics:
Balance of instruments
Achieving the correct balance of instruments can be key to the success of a song. If there is an excessive number of instruments playing at the same time, they will fight with one another for space within the mix; a phenomenon known as masking. On the other hand, if there are not enough instruments playing at the same time, the track may sound empty. Mixing involves making adjustments to eliminate masking conflicts by changing the dynamics and development of the song. This can be achieved by:
- Muting offending instruments when the mix is overly busy
- Lowering the level of certain instruments to let others shine through
- Using equalization to separate instrument frequencies in the mix
- Using the pan controls to give instruments their own space within the stereo field
- Removing tracks completely from the mix
As a general rule, the more elements playing together in a song at the same time, the greater the chance they will fight with one another and muddy up the mix.
Tone
It is the tone of music that makes it sound bright, deep and full; or dull, empty and plain. Having a perfect balance of tone means achieving a mix that has all tonal frequencies properly represented throughout the audio spectrum. That is, enough sub-bass, bass, midrange, treble, and presence to produce a clear and powerful sound. If certain instruments occupy a similar frequency band they will end up masking each other within that particular area of the audio spectrum, often resulting in a muddy sound. A good example of a muddy mix can be heard when unprocessed vocals and electric guitar mix together, as they both occupy similar frequency ranges. The frequencies of these conflicting instruments have to be adjusted to allow them to sit together better in the mix. The process of adjusting a sound's tonal balance is carried out with an equalizer.
Space
Space refers to the stereo field. It is important to make use of the stereo field by finding a perfect balance of space within a mix. Achieving a perfect balance of space means that the stereo sound field sounds full and distinct. Panning adds definition to instruments by giving them their own space in the stereo field. Panning can be used to eliminate masking by moving sounds out of the way of other sounds. Leaving everything centered in the stereo field can cause the sounds to become muddy and lifeless. In contrast, spreading instruments far and wide will cause the sound to become unbalanced, loose and thin.
Interest
Balance of interest means having several elements that make the mix exciting. This could come from a solid arrangement or a single standout instrument. Using effects, processors, and equalization all helps add interest to a song.
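As one concrete illustration of the panning discussion above, here is a constant-power pan law, one common way mixers keep perceived loudness steady as a mono source moves across the stereo field; the specific law is an assumption for illustration, since the text does not name one:

```python
import math

# Constant-power pan: left/right gains follow cos/sin of an angle in
# [0, pi/2], so L^2 + R^2 (the acoustic power) stays constant.
def pan_gains(pan):
    """pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right."""
    angle = (pan + 1.0) * math.pi / 4.0
    return math.cos(angle), math.sin(angle)  # (left gain, right gain)

for p in (-1.0, 0.0, 1.0):
    l, r = pan_gains(p)
    print(f"pan {p:+.1f}: L={l:.3f} R={r:.3f} (power {l*l + r*r:.3f})")
# power stays 1.000 at every position -- the "constant power" in the name
```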
What is Renovascular Disease?
Renovascular disease affects the blood vessels of your kidneys, called the renal arteries and veins. When your kidney blood vessels narrow (stenosis) or have a clot (thrombosis), your kidney is less able to function properly.
What Causes Renovascular Disease?
Atherosclerosis is a disease in which plaque builds up inside your arteries and slows the amount of blood flowing through the arteries. In some cases, enough plaque may eventually build up to interfere with blood flow in your renal arteries.
Risk Factors for Renovascular Disease
Factors that may increase your chance of getting renovascular disease include:
- High blood pressure
- High cholesterol
- A family history of heart or vascular disease
Symptoms of Renovascular Disease
Signs and symptoms of renovascular disease may include:
- Pain in the sides of the abdomen, legs or thighs
- Blood in urine
- Protein in urine
- Enlarged kidney
- High blood pressure
- Fever, nausea or vomiting
- Sudden, severe swelling in the leg
- Difficulty breathing
Diagnosing & Treatments for Renovascular Disease
Diagnosing Renovascular Disease
To determine if someone has renovascular disease, a health care provider will ask questions about general health, medical history and symptoms. Then they will perform a physical exam. If your health care provider suspects renovascular disease, further diagnostic testing will be recommended.
Treatments for Renovascular Disease
The following procedures are offered at St. Elizabeth's Interventional Peripheral Vascular Lab for the treatment of renovascular disease, depending on a patient's diagnosis:
- Catheter-based Revascularization. These procedures involve a thin tube called a catheter, which is inserted into an artery. These procedures include:
- Balloon angioplasty. A balloon-tipped catheter is used to press plaque against the wall of the artery. This increases the amount of space for the blood to flow.
- Stenting. Usually done after angioplasty. A wire mesh tube is placed in a damaged artery. It will support the wall of the artery and keep it open.
- Atherectomy. Instruments are inserted via catheter. They are used to cut away and remove plaque so that blood can flow more easily.
How to Prevent Renovascular Disease
You can reduce some of your risk factors for developing atherosclerosis by following these recommendations:
The epidermis is the outer layer of the skin; it is composed of stratified squamous epithelium but lacks blood vessels.
LAYERS OF THE EPIDERMIS
There are five main layers of the epidermis: the stratum basale, stratum spinosum, stratum granulosum, stratum lucidum, and stratum corneum.
- STRATUM BASALE This is also called the stratum germinativum; it is the deepest layer of the epidermis. It is a single row of cuboidal keratinocytes, whose cytoskeleton includes keratin intermediate filaments. New keratinocytes are produced in the stratum basale; melanocytes and Merkel cells are also found in this layer. This layer is close to the dermis and is nourished by dermal blood vessels. As the cells in the stratum basale divide and grow, the older epidermal cells are pushed away from the dermis towards the skin surface. As these cells move away from the dermis, they receive fewer nutrients and in time harden and die (becoming keratinized).
- STRATUM SPINOSUM This is composed of 8-10 layers of keratinocytes. The keratinocytes begin to join together as keratin intermediate filaments insert into desmosomes. The cells found in this layer include Langerhans cells and melanocyte projections; here the melanocytes transport their pigment into the keratinocytes. The dead cells of the stratum spinosum are eventually shed.
- STRATUM GRANULOSUM The stratum granulosum is composed of layers of flattened keratinocytes undergoing apoptosis. Keratohyalin (a protein) is produced in the cells of this layer; it assembles keratin intermediate filaments into keratin protein. Also in this layer, lamellar granules release a lipid-rich secretion that acts as a water-repellent sealant for the skin.
- STRATUM LUCIDUM This is found in the thickened skin of the palms and soles; the layer may be missing where the epidermis is thin, over the rest of the body. The stratum lucidum is composed of 4-6 layers of flat dead cells.
- STRATUM CORNEUM This is the outermost layer of the epidermis. It is formed by the accumulation of dead cells (keratinocytes) in the outermost epidermis, and these dead cells are eventually shed. In the stratum corneum, plasma-membrane-enclosed packets of keratin are called corneocytes. In healthy skin, production of epidermal cells is closely balanced with loss of dead cells from the stratum corneum, so that the skin does not wear away completely. The rate of cell division increases where the skin is rubbed or where pressure is applied regularly, causing growth of thickened areas called calluses on the palms and soles, and keratinized conical masses on the toes called corns.
CELLS OF THE EPIDERMIS
The epidermis is made up of four cell types:
(1) Keratinocytes
(2) Melanocytes
(3) Langerhans cells
(4) Merkel cells
These cells are arranged in layers within the epidermis.
(1) KERATINOCYTES As the keratinocyte gets closer to the surface of the skin, it produces keratin. It also produces lamellar granules, a water-repellent sealant that keeps water out.
(2) MELANOCYTES These are the cells that produce a dark pigment called melanin, which gives the skin its color. The melanocyte transfers the dark pigment to the keratinocyte. Melanin absorbs ultraviolet radiation in sunlight, preventing mutations in the DNA of skin cells and other damaging effects. That is to say, melanin protects us from damage by ultraviolet light. The melanocytes lie in the deepest portion of the epidermis; even though they are the only cells that produce melanin, the pigment may also be present in other epidermal cells nearby.
This is because of the long pigment-containing cellular extensions that pass upward between epidermal cells. These extensions transfer melanin granules into the other cells by a process called cytocrine secretion. The number of melanocytes is about the same in all people; differences in skin color result from differences in the amount of melanin that the melanocytes produce and in the distribution and size of the pigment granules. Skin color is mostly genetically determined: if the genes instruct the melanocytes to produce abundant melanin, the skin is dark; if they instruct the melanocytes to produce less melanin, the skin is lighter.
(3) LANGERHANS CELLS These cells participate in immune responses against microbes.
(4) MERKEL CELLS Also known as type I cutaneous mechanoreceptors, these cells detect the sensation of touch; they contact sensory neurons along tactile discs.
PHYSIOLOGICAL EFFECTS CAUSING CHANGES IN SKIN COLOR
(1) Cyanosis: this occurs when the blood oxygen concentration is low, leading to a bluish color in the skin.
(2) Erythema: this is redness of the skin due to injury, exposure to heat, inflammation, or allergic reactions.
(3) Jaundice: this is a yellowish color of the skin and whites of the eyes, usually due to liver disease.
(4) Pallor: this is paleness of the skin caused by shock or anemia.
Smoking can harm your digestive system in a number of ways. Smokers tend to get heartburn and peptic ulcers more often than nonsmokers, and smoking makes those conditions harder to treat. Smoking increases the risk for Crohn's disease and gallstones. It also increases the risk for more damage in liver disease. Smoking can also make pancreatitis worse. In addition, smoking is linked to cancer of the digestive organs, including the head and neck, stomach, pancreas, and colon. Researchers don't know if vaping (electronic cigarettes) harms the digestive system.
The stomach makes acidic juices that help you digest food. If these juices flow backward into your esophagus, or food pipe, they can cause heartburn. They can also cause a condition called GERD (gastroesophageal reflux disease). The esophagus is protected from these acids by the esophageal sphincter, a muscular valve that keeps fluids in your stomach. But smoking weakens the sphincter, allowing stomach acid to flow backward into the esophagus.
Smokers are more likely to develop peptic ulcers. Ulcers are painful sores in the lining of the stomach or the beginning of the small intestine. Ulcers are more likely to heal if you stop smoking. Smoking also raises the risk for infection from Helicobacter pylori, a bacterium commonly found in ulcers.
The liver normally filters alcohol and other toxins out of your blood. But smoking limits your liver's ability to remove these toxins from your body. If the liver isn't working as it should, it may not be able to process medicines. Studies have shown that when smoking is combined with drinking too much alcohol, it makes liver disease worse.
Crohn's disease is a chronic inflammatory bowel disease, an autoimmune disorder of the digestive tract. For reasons that are not clear, it's more common among smokers than nonsmokers. Although there are many ways to help keep Crohn's flares under control, it has no cure. Smoking can also make it harder to control Crohn's disease and its symptoms.
Smoking is one of the major risk factors for colon cancer. Colon cancer is the second leading cause of cancer deaths. Routine screenings such as a colonoscopy can find small, precancerous growths called polyps in the lining of the colon.
Some research suggests that smoking increases the risk of developing gallstones. Gallstones form when liquid stored in the gallbladder turns into material that resembles stones. These can range in size from a grain of sand to a pebble.
Smoking is a risk factor for mouth, lip, and voice box cancer. It also raises the risk for cancer of the esophagus, stomach, pancreas, liver, colon, and rectum.
If you smoke, try to quit. Seek medical help to stop smoking if you need it. Giving up smoking will lower your risk for lung cancer and heart disease. It will also reduce your risk for other digestive disorders.
Riot control agents, often referred to as "tear gas," are chemicals that cause skin, respiratory, and eye irritation. Some of the most common chemicals used are chloroacetophenone (CN)—which is a toxic air pollutant, chlorobenzylidenemalononitrile (CS), chloropicrin (PS), bromobenzylcyanide (CA) and dibenzoxazepine (CR). While tear gas is typically perceived as causing mostly short-term health impacts, there is evidence of permanent disability in some cases. In general, exposure to tear gas can cause chest tightness, coughing, a choking sensation, wheezing and shortness of breath, in addition to a burning sensation in the eyes, mouth and nose; blurred vision and difficulty swallowing. Tear gas can also cause chemical burns, allergic reactions and respiratory distress. People with preexisting respiratory conditions, such as asthma and chronic obstructive pulmonary disease (COPD), have a higher risk of developing severe symptoms of their disease that could lead to respiratory failure. Long-term health effects from tear gas are more likely if exposed for a prolonged period or to a high dose while in an enclosed area. In these instances, it can lead to respiratory failure and death. If exposed to tear gas, the American Lung Association advises you to immediately distance yourself from the source and seek higher ground, if possible. Flush your eyes with water and use a gentle soap, such as baby shampoo, to wash your face. If breathing trouble persists, seek medical attention immediately.
[Coen] Elemans, the study’s first author, now is a postdoctoral researcher in biology at the University of Southern Denmark. He conducted the study with Franz Goller, a University of Utah associate professor of biology; and two University of Pennsylvania scientists: Andrew Mead, a doctoral student, and Lawrence Rome, a professor of biology. To conduct the study, the biologists measured vocal muscle activity in freely singing birds and made laboratory measurements of isolated muscles. They found the zebra finch and European starling can contract and relax their vocal muscles in 3 to 4 milliseconds, or three-thousandths to four-thousandths of a second, which is 100 times faster than the 300 milliseconds to 400 milliseconds (three-tenths to four-tenths of a second) it takes for humans to blink an eye, Elemans says. The birds’ vocal muscles move structures analogous to “vocal folds” in humans. The muscles change the position and stiffness of these folds to alter the volume and frequency of the sound. Superfast muscles can produce mechanical work or power at more than 100 hertz (times per second), and these superfast vocal muscles at up to 250 hertz, which means the birds can turn elements of their song on and off 250 times per second, Elemans says.
We could not grasp this fact about muscles were it not for concepts and methods developed by a line of scientists going back to the 1600s and beyond: Newton’s definition of force; Watt’s definition of power; scientists’ ideas and discoveries on repetitive motion, “work,” sound, and more. At one point, people had to develop the idea that sound was a disturbance in air (and other media), then prove it. Such knowledge was a prerequisite to applying the concepts of force, power, and work to sound. What’s more, people had to discover that air was substantial, that it had mass, weight and density. (No mass, no force.) As always, the hierarchy of knowledge is at work.
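The timescales quoted in the first paragraph above are easy to verify: a muscle working at 250 Hz completes one on/off cycle in 4 ms, and a 400 ms eye blink is indeed about 100 times slower than a 4 ms contraction. A two-line check:

```python
# Period of one work cycle at 250 Hz, and the blink comparison in the article.
print(1000.0 / 250.0)   # 4.0 ms per cycle
print(400 / 4)          # a ~400 ms blink is ~100x slower than a 4 ms twitch
```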
Sometimes students wish academic classes were more like performance-based sports, but how would that work? How do students respond differently to coaching for a physical skill versus teaching for a cognitive skill?
In athletics, students spend time warming up with exercise routines before hitting the field to actually perform the sport. Why? To stretch their muscles and slowly raise the heart rate. These routines loosen joints, which helps to decrease injuries, and increase blood flow, because the stress of sports requires more oxygen. The warm-up prepares the body to be pushed beyond normal physical activity. In a classroom, teachers typically use warm-ups as a classroom-management technique to get students quiet and focused. Sometimes warm-ups also serve as a daily assessment to identify students who are falling behind. In an upper-level academic class, there are typically no warm-ups. Why? Learning in a class is not beyond normal mental activity, so warm-ups are often dismissed as a waste of time.
Practice versus initial learning
With coaching, the emphasis is on practice. After the warm-up, the coach watches the students practice most of the time. In class, the emphasis is on instruction to master new skills. Although there is time to practice in class, students are expected to perform newly learned skills on their own at home without the teacher present. The teacher requires more unsupervised effort than the coach does.
Effort versus attentiveness
Coaches watch students to assess skill level, to make sure they are exerting effort and to ensure they are not goofing off. They often depend on other students to help them with this, as athletic classes are typically larger. Making more effort at running increases a student's ability in track. Making more effort in math may allow a student to solve problems faster, but will not by itself result in learning how to find derivatives of an equation; after all, this is a procedure uncovered by someone older and more educated than the students. Teachers most often watch students to make sure they are behaving properly. There are some visible cues that indicate comprehension, but students do not always show these. The student must concentrate to master academic skills, and attentiveness is not always observable. However, disruptive students almost always detract from instruction. Teachers may set up a system in which the students who are advancing through class faster help struggling students. However, they usually let students seek help and give it as they see fit. Teachers cannot tell precisely what skills students have mastered until they see the assigned work, so a lot of assignments have to be made.
Obvious versus out-of-sight accomplishments
Instruction on coaching indicates feedback is given to show that the coach cares, rather than to let students know how well they are performing, because athletes already know this. They constantly watch what others are doing, comparing themselves to others. They tend to copy those who perform the best. In a classroom, copying the work of the best performer in class won't help any. Students must master the thinking processes themselves. The ability to learn is basically an invisible skill, which each student must develop on their own. Finally, students are not always sure if they have mastered new skills. Sometimes they transpose two different procedures – especially in math and foreign languages.
So the teacher must constantly be asking questions and quizzing students in order to provide feedback that shows students how well they are doing. If you are considering teaching academics like athletics, think again. There are a few commonalities between the two, but it will work no better than having students sit in a circle and learn the strategy for a game without ever practicing it.
Coaching Principles (2012), http://www.asep.com/courses/ASEP_Previews
Available at: https://br.pinterest.com/pin/351421577159865308/ Accessed 10 Sept. 2020
Let's read the texts and do some exercises!
1- Read the following text and answer the questions in English.
The use of the English language in most current and former member countries of the Commonwealth of Nations was inherited from British colonisation. Mozambique is an exception – although English is widely spoken there, it is a former Portuguese colony, which joined the Commonwealth in 1996. English is spoken as a first or second language in most of the Commonwealth. In a few countries, such as Cyprus and Malaysia, it does not have official status, but is widely used as a lingua franca.
Available at: https://en.wikipedia.org/wiki/English_in_the_Commonwealth_of_Nations Accessed 10 Sept. 2020.
a) What was inherited from British colonisation?
b) Which country is a former Portuguese colony?
c) When did Mozambique join the Commonwealth?
d) Is English spoken as a first or second language in most of the Commonwealth?
2- Choose the correct answer to complete the sentences.
2.1. If I _____ the question, I would answer.
a) ( ) understood
b) ( ) understand
c) ( ) will understand
d) ( ) is understanding
2.2. If it _____ tomorrow, I will go for a swim.
a) ( ) do not rain
b) ( ) did not rain
c) ( ) were not rain
d) ( ) does not rain
2.3. If you eat fast food, you _____________ weight.
a) ( ) gained
b) ( ) had gained
c) ( ) would gain
d) ( ) gain
Available at: https://en.islcollective.com/english-esl-worksheets/grammar/first-conditional-1/if-i-ruled-world-1st-and-2nd-conditional/85709 Accessed 11 Sept. 2020
3- Mark with an X for TRUE or for FALSE according to the text. Then, correct all the FALSE sentences.
4- Based on Bob's text, write a small text about what you would do if you ruled the world to protect the environment.
You can print this lesson if you want!
What is Artful Learning?
Artful Learning is an instructional model that stimulates and deepens academic learning through the Arts. It allows students to use the Arts as windows into every content area. Students use arts-based strategies as tools for exploring interdisciplinary content and expressing what they have learned. The Artful Learning model is defined by four main quadrants: EXPERIENCE, INQUIRE, CREATE and REFLECT. These four areas encourage and support best teaching practice while improving student and teacher learning. Hillcrest classrooms systematically employ the four quadrants to strengthen understanding, retention, and application.
Experience
Students experience and respond to a large Concept using a Masterwork (art, music, drama, dance, scientific innovation, architecture, literature, mathematical formula, etc.) and respond through sight, sound and movement. Serving as a catalyst for immediate student engagement, the Masterwork awakens ideas, emotions and new understandings through visual, auditory and kinesthetic modalities. Students leave this phase curious and wanting to know more.
Inquire
Students begin a substantive investigation triggered by questions and observations generated by the Masterwork experiences. A Significant Question guides the inquiry. Students employ a variety of research techniques and hands-on explorations, using the interdisciplinary content to investigate the subject matter even more deeply.
Create
Students design and complete an Original Creation – a tangible, artistic manifestation that synthesizes and demonstrates their understanding of their newly acquired knowledge. By focusing on how best to represent the academic content from their unit of study, students’ thinking moves from divergent to convergent. They first construct a prototype, then continue to evaluate and revise their work until their final product, the Original Creation, is ready for presentation.
Reflect
Students ponder the journey they have taken through the Unit of Study and ask Deepening Questions about what and how they learned. They document this process through detailed narratives, maps, and metaphors. Students discover new ideas and connections while also considering practical applications of their new knowledge. Going through the process in the Unit of Study, students acquire valuable new skills which will help them become more self-directed as learners.
How it Works
Trainers from the Leonard Bernstein Center work with the Hillcrest staff to implement this art instruction model based on Bernstein’s belief that the process of experiencing the arts provides a fundamental way to instill a lifelong love of learning in children. The experience, inquire, create and reflect model starts with a Masterwork experience. Students learn about a painting, photograph, song or other human achievement - not limited to the arts - that has exerted influence over time. This immediate engagement leaves students curious to learn more. The Significant Question guides the inquiry and students use a variety of art and research strategies to delve into the content. As students inquire, new understandings develop and connections are made. The students then design an Original Creation - a tangible, artistic manifestation that demonstrates their understanding of the new knowledge - and share it with the community. Finally, students reflect on their learning. This process is documented through narratives, maps and metaphors that enable students to make connections to their lives and the world around them.
Hillcrest teachers have completed the full course of Artful Learning training and mapped their standards to develop interdisciplinary units and create inquiry centers for learning. Teachers have been introduced to more than 30 arts-based strategies that are infused across the curriculum for increased student understanding and cognitive development. The training transforms instruction; it is rigorous and renewing, intense and inspiring. Students at Hillcrest enjoy Artful Learning units in the fall, winter, and spring.

Artful Learning Units of Study

Artful Learning schools are joyful, active, engaged learning communities where parents, teachers, and students come together to explore the great achievements, big ideas, and perplexities of human thought. State or district-mandated standards form the core of the teacher-designed Artful Learning Units of Study and are brought to life through arts-infused, inquiry-based learning. Students consider universal concepts, actively explore masterworks, and investigate, research and create using a wide variety of materials and strategies. Reflection focused on deepening the learning encourages students to make connections across disciplines, understand their own learning process and set goals for future learning. Artful Learning classrooms cover the mandated curriculum and exceed it.

The success of the program hinges on the development of partnerships to support our arts integration efforts. If you are interested in partnering with Hillcrest, please contact Principal Don Gramenz at [email protected].
Cadences in Music

A cadence in music is a chord progression of at least 2 chords that ends a phrase or section of a piece of music. There are 4 main types of cadences:

- Perfect (Authentic)
- Plagal
- Imperfect (Half)
- Interrupted (Deceptive)

Why do we have Cadences in Music?

Music is similar to spoken word in that it is divided up into phrases. Take the following spoken rhyme: Notice how there are different pauses at the end of each line. The 2nd and 4th lines have a full stop (period) at the end – this is because the rhyme could end there and still make sense – it is a definite pausing/stopping point. On the other hand, there is a comma at the end of the 3rd line – the rhyme pauses, but is clearly going to continue because it wouldn't make sense if it stopped there.

Similarly, when you listen to the end of a phrase in music it either sounds like it is finished or unfinished. Whether it sounds finished or unfinished depends on which cadence is used.

Types of Cadences

There are 4 main types of cadences in music you will come across – 2 of them sound finished, whilst the other 2 sound unfinished. Both of the finished cadences sound finished because they end on chord I. For example, in C major a finished cadence would end on the chord C. In G major, it would finish on a G chord, etc.

Perfect or Authentic Cadence

The perfect cadence (also known as the authentic cadence) moves from chord V to chord I (this is written V-I). It is the cadence that sounds the "most finished". Here is an example of a perfect cadence in C major. Notice how the chords at the end of the phrase go from V (G) – I (C) and it sounds finished.

Plagal Cadence

A plagal cadence moves from chord IV to chord I (IV-I). It is sometimes called the "Amen Cadence" because the word "Amen" is set to it at the end of many traditional hymns. Have a look at and listen to this example of a plagal cadence in C major. Here is an example of a plagal cadence in C minor.

Both the perfect and plagal cadences sound finished because they end on chord I, but they each have their own characteristic sound. The perfect cadence has a very definite finish to it, whilst the plagal cadence is a softer finish. Now let's have a look at the 2 unfinished cadences. Unfinished cadences sound "unfinished" because they don't end on chord I. When you hear an unfinished cadence at the end of a phrase it sounds like the music should not stop there – it sounds like it should continue onto the next section.

Imperfect Cadence or Half Cadence

An imperfect cadence or half cadence ends on chord V. It can start on chord I, II or IV. Have a listen to this example of an imperfect cadence in C major. Notice how the last 2 chords are I (C) followed by V (G). The music clearly sounds like it should continue. Here is an imperfect cadence in C minor.

Interrupted Cadence or Deceptive Cadence

An interrupted cadence or deceptive cadence ends on an unexpected chord – the music literally does sound like it has been "interrupted". The most common chord progression you will come across is from chord V to chord VI (V-VI). So, in this example of an interrupted cadence in C major below, the last 2 chords are V (G) and VI (A minor). Listen to how frustrating it sounds that the music doesn't continue: the music very much sounds as though it has been "interrupted". Here is an interrupted cadence in C minor.

Composing Using Cadences

Cadences are a crucial aspect of composing. You should use cadences at the end of your phrases.
It is helpful to think about the following question when choosing a cadence: do you want the phrase to sound finished? If the answer to this question is "yes" then you should use either a perfect or plagal cadence (you will usually use a perfect cadence). If the answer to the question is "no" then you can choose either the imperfect cadence or the interrupted cadence (if you want to bring an element of surprise into your piece).

Summary of Music Cadences

I hope this lesson on cadences has helped. Here is a brief summary of the 4 music cadences – Perfect, Imperfect, Plagal, Interrupted:

- Perfect (Authentic): V-I – sounds finished, the most definite ending
- Plagal ("Amen"): IV-I – sounds finished, a softer ending
- Imperfect (Half): ends on V (from I, II or IV) – sounds unfinished
- Interrupted (Deceptive): V-VI – sounds unfinished, an unexpected ending

Also, I have put together a wall chart showing the chords from the cadences in all major and minor keys. Feel free to click on it and download it as a PDF and print it off so you can refer to it:
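To make the summary concrete, here is a small illustrative sketch (in Java; the class name and label wording are my own, but the progressions are exactly the four described above) that names the cadence formed by the last two chords of a phrase, written as Roman numerals:

import java.util.Map;

public class CadenceNamer {

    // The four cadences described above, keyed by their closing two-chord progression.
    private static final Map<String, String> CADENCES = Map.of(
            "V-I",  "Perfect (Authentic) - sounds the most finished",
            "IV-I", "Plagal ('Amen') - a softer finish",
            "I-V",  "Imperfect (Half) - sounds unfinished",
            "V-VI", "Interrupted (Deceptive) - the unexpected ending");

    // Name the cadence formed by the last two chords of a phrase.
    static String name(String penultimate, String last) {
        String key = penultimate + "-" + last;
        if (CADENCES.containsKey(key)) {
            return CADENCES.get(key);
        }
        // An imperfect cadence can also approach chord V from II or IV.
        if (last.equals("V")) {
            return "Imperfect (Half) - sounds unfinished";
        }
        return "not one of the four standard cadences";
    }

    public static void main(String[] args) {
        // In C major, G -> C is V -> I: a perfect cadence.
        System.out.println(name("V", "I"));
        System.out.println(name("IV", "V")); // imperfect: ends on chord V
        System.out.println(name("V", "VI")); // interrupted: the "surprise" ending
    }
}

Feeding in the closing chords of the C major example above (G then C, i.e. V then I) prints the perfect cadence label.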
The Kamakura government consisted of the emperor, the shogun, and the shogun's housemen. With the empire more or less stable, particularly after the conquest of the indigenous Ainu in the 9th century, Japan's emperors began to devote more time to leisure and scholarly pursuits and less time to government. Important court posts were dominated by the noble but corrupt Fujiwara family. Eventually, Kiyomori died and the Taira clan declined shortly after him. During these battles warriors ran amok, pillaging Japan's countryside. In the Heian era, aristocratic status was sought by many because of its social and cultural prestige. With the rise of the warrior in the Kamakura age, the social standing of aristocrat and warrior changed dramatically. The feudal centuries can be clunkily split into five main periods. The Kamakura Period (1185-1333) saw repeated invasions by Kublai Khan's Mongol armies. To end the chaos and violence the imperial court turned to Yoritomo. Yoritomo in return raised an army of samurai, took complete control of the government and transformed it into a military government. This volume represents a major advance over previous publications... Students will find this volume especially useful as an introduction to the primary sources, terminology, and dominant themes in the history of chanoyu. It is the quality of seeing the familiar and not so familiar elements of tea emerge as a dynamic saga of human invention and cultural intervention that makes this book exhilarating, and the details that the authors provide make these essays fascinating. Japan's earliest settlers were fishers, hunters and gatherers who slogged over the land bridges from Korea to the west and Siberia to the north. Kiyomori murdered all the adults of the Minamoto clan and forced the children into exile. The emperor rewarded Kiyomori's victory by giving him an advisory position in the government.
Raising smart kids isn't about teaching to the tests; it's about building brainpower. Kids who can seek information, connect ideas and apply what they've learned aren't just book- or school-smart – they are life-smart. Cultivate your student's success with these essential skills, including growth mindset, an investigative approach, emotional intelligence and self-expression.

What it is: Smart kids define intelligence in terms of learning – not as a fixed trait. Decades of research by developmental psychologist Carol Dweck, Ph.D. and her colleagues show kids who think in terms of ability give up quickly when challenged. They see failure as proof that they don't have what it takes, not as a signal to invest more effort or try another approach. A growth mindset is healthier and more productive. Kids who believe intelligence is developed are not discouraged by failure. They don't even think they are failing; they think they're learning, Dweck explains. These learners seek challenges, think creatively and thrive despite setbacks.

How to build it: Reinforce the belief that talents are developed, not a matter of biological inheritance, Dweck counsels. Praise your child for his or her efforts and persistence, rather than for intelligence. Say, "I'm proud of you for playing such a difficult song, you really stretched your skills" instead of "You're a talented musician." Share stories of scientists, athletes and artists who model passion for learning and dedication to development. Smart kids need hard-working role models.

What it is: Smart kids can define a problem, formulate options, test potential solutions and decide on a course of action. These are important life skills. "Classroom teachers struggle with how to make science, technology, engineering and math learning more hands-on," says Dave Hespe, former co-acting executive director of Liberty Science Center in Jersey City, NJ – and commissioner of education for the New Jersey Department of Education. Zoos, aquaria, parks and science centers are fantastic learning laboratories.

How to build it: Teach investigative concepts and skills at each stage of your child's development. Double your cookie recipe and let kids determine how much butter and flour you need, Hespe says. Study bridges you cross to understand their design. Ask kids how they could get over the river without a bridge. Look up cloud types online and formulate a hypothesis about tomorrow's weather. Engage your kids' curiosity outside the classroom and model problem-solving strategies. Real-world research makes smart kids smarter.

What it is: Smart kids recognize and regulate their own emotions and empathize with others. "Kids who develop these skills early in life get better grades, are less susceptible to anxiety and depression and have healthier, more fulfilling relationships," says Linda Lantieri, director of The Inner Resilience Program and co-founder of the Resolving Conflict Creatively Program, a social and emotional learning program implemented in more than 400 schools. Kids who can self-soothe when stressed are ready and able to learn from their experiences, without distracting emotional drama.

How to build it: Regulating emotions doesn't mean stifling them, says Lantieri. Don't diminish your child's feelings; help him understand what's causing them. Increase his emotion-related vocabulary by introducing him to words like angry, frustrated, jealous, excited and elated.
Kids should be able to describe their feelings with some specificity, Lantieri says, rather than saying they feel good or bad. Be an emotion coach: Encourage your child to explore his feelings and to take others' emotional perspective. Empathy isn't automatic; it is learned. Use your own upsets as teaching opportunities – explain step-by-step how you stop, calm down, refocus and then act.

What it is: Smart kids develop a strong sense of self. They know their own strengths and challenges and make wise decisions. Kids crave a sense of uniqueness and they may feel pressured by intense demands to get good grades, fit in socially and grow up before they're ready, says Brandie Oliver, assistant professor of counselor education at Butler University in Indianapolis, Ind. Finding their own identity can be awkward and frustrating. It also subjects kids to risk, Oliver says. Peer pressure about alcohol, drugs and sex forces kids to make tough decisions. Bullying – in person or online – is a threat as well. Kids need skills to stand up for themselves.

How to build it: "It is common for parents to think they are in the loop when they don't know as much as they think," Oliver says. "Kids share information with parents through a filter." Sometimes they embellish or omit key details. Listen deeply and encourage sharing. Validate your child's perspective even when you don't agree, Oliver says. Model the use of "I" messages, such as "I think" or "I feel." Self-expression is crucial for personal wellbeing and social success. Kids who can voice their opinions respectfully become productive members of the community.

This post was originally published in 2012 and is updated regularly.
Many animal species consist of members that will only mate once before dying. This reproductive strategy, often seen in fish and insects, can make evolutionary sense when the species is able to produce a lot of offspring from that single mating. Given that salmon can release thousands of eggs when they spawn, a single mating can produce a lifetime's worth of offspring. That's not true for mammals, though. Raising young internally limits the number you can produce from a single mating, while the extensive post-natal care required by mammalian young ensures that the female has to stick around for a while after giving birth. But males don't always participate in postnatal care, so it probably shouldn't be a surprise to learn that there are mammals out there that engage in what researchers are terming "suicidal reproduction." The problem is that the behavior only occurs in a small number of marsupial species, and researchers have been arguing for 30 years about why that is the case. Now, some Australian researchers have come up with an answer: a combination of sperm competition and promiscuous females. The marsupials that engage in this "one strike and you're out" approach to mating all die off because of a general immune failure that happens shortly after mating. This has nothing to do with the process of mating itself; in fact, it starts well in advance of mating, as the males build up a store of sperm and then permanently shut their gonads down. The question wasn't so much how the males' death takes place, but why. What sort of evolutionary advantage could this provide? One hint came from the fact that the species that have this life cycle (technically called semelparity) are all small insect eaters. The possibility that they were all related species led to the suggestion that there might be something that genetically predisposed them to this lifestyle. Another idea was that the species might live in an environment where the availability of food was severely limited; thus, the males dying would free up resources to make their progeny more successful. But a bit of biogeography suggested something else might be at play. The suicidal reproduction was mostly found in species at the far southern end of the range of marsupials. In fact, the new paper notes that male survival after mating drops the further you get from the equator. In these cooler environments, most insect populations experience a boom in the summer and are pretty sparse the rest of the year. This in turn causes the females to time their reproduction to the availability of the calories needed to support it. The net result is that the entire female population in these species is ready to mate within a narrow window of time. With mate availability at a premium, they mate with pretty much any male available (the authors refer to the females as "mating promiscuously"). The authors hypothesized that the males' evolutionary response to this situation is to engage in what's called sperm competition. To confirm this, they measured the relative testes size of these and several related species. As male survival went down, the testes size relative to their body went up. The males that engaged in suicidal reproduction also mated for twice as long (an average of 9.4 hours!) as their less-competitive relatives. These are signs, researchers conclude, that the males are trying to ensure that their sperm is what does the fertilizing, even if the female goes on to mate with others. 
The huge amount of resources expended on mating, combined with the fact that survival in small mammals is pretty low to begin with, means that the males are unlikely to survive to mate a second time anyway. So instead, they simply tune their bodies to make the most out of the one chance they get. And at an average of 9.4 hours, it's hard to say that they don't accomplish this.
'Fly-bys', or 'gravity assist' manoeuvres, are now a standard part of spaceflight and are used by almost all ESA interplanetary missions. Imagine if every time you drove by a city, your car mysteriously picked up speed or slowed down. Substitute a spacecraft and a planet for the car and the city, and this is called a 'gravity assist'. These manoeuvres take advantage of the fact that the gravitational attraction of the planets can be used to change the trajectories, or the speed and direction, of our spacecraft on long interplanetary journeys.

As a spacecraft sets off towards its target, it first follows an orbit around the Sun. When the spacecraft approaches another planet, the gravity of that planet takes over, pulling the spacecraft in and altering its speed. The amount by which the spacecraft speeds up or slows down is determined by the direction of approach, whether passing behind or in front of the planet. When the spacecraft leaves the influence of the planet, it once again follows an orbit around the Sun, but a different one from before, either on course for the original target or heading for another fly-by.

'Slingshot' effect

The first spacecraft to experience a gravity assist was NASA's Pioneer 10. In December 1973, it approached a rendezvous with Jupiter, the largest planet in the Solar System, travelling at 9.8 kilometres per second. Following its passage through Jupiter's gravitational field, it sped off into deep space at 22.4 kilometres a second – like when you let go of a spinning merry-go-round and fly off in one direction. This kind of acceleration is also called the 'slingshot effect'.

Mission: Impossible?

Even before this encounter, Italian astronomer Giuseppe 'Bepi' Colombo had realised the potential of such manoeuvres and had used them to design a 'Mission: Impossible' to Mercury, the innermost planet of our Solar System. To reach Mercury, a spacecraft launched from Earth needed to lose more energy than a conventional rocket would allow. Colombo's brilliant idea was to realise that gravity assists could also be used to slow a spacecraft. On 10 March 1974, the NASA Mariner 10 spacecraft flew past Venus, lost speed and fell into its rendezvous orbit with Mercury.

Extraordinary manoeuvre

The ESA/NASA Ulysses mission used one of the most extraordinary gravity assists to allow it to see the polar regions of the Sun, places that are forever hidden from any observing location on Earth. In October 1990, the Ulysses spacecraft left Earth to voyage towards Jupiter. There, it used a gravity assist to throw it out of the plane of the planets into a gigantic loop that passed over the south pole of the Sun in 1994, and then the north pole 13 months later.

More manoeuvres coming up

In 2004, ESA's Huygens probe will arrive at Saturn's moon Titan. It is carried on the NASA spacecraft Cassini, which used four gravity assists (one with Earth, two with Venus and one with Jupiter) to accelerate it towards Saturn. ESA's comet-chaser Rosetta will use a similar number of gravity assists to speed it to Comet Churyumov-Gerasimenko. Over the next eighteen months ESA's lunar scout SMART-1 will become the first spacecraft to use gravity assists in conjunction with a revolutionary propulsion system, the solar-electric ion engine. This will pave the way for ESA's Mercury mapper, appropriately called BepiColombo, which will use the same technique to orbit the inner planet early in the next decade.
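The Pioneer 10 numbers can be sanity-checked with a back-of-the-envelope model. In the idealized, head-on ('tail-chase') limit, a fly-by behaves like an elastic bounce off a moving wall: the spacecraft keeps its speed relative to the planet but reverses direction, so in the Sun's frame it can leave with up to its incoming speed plus twice the planet's orbital speed. The short sketch below (Java, purely illustrative; Jupiter's orbital speed is an assumed round figure, not mission data) computes that upper bound:

public class SlingshotSketch {

    // Upper bound on heliocentric exit speed for an idealized head-on
    // gravity assist: the spacecraft keeps its speed relative to the
    // planet but reverses direction, picking up twice the planet's
    // orbital speed in the Sun's frame.
    static double maxExitSpeed(double inboundSpeed, double planetOrbitalSpeed) {
        return inboundSpeed + 2.0 * planetOrbitalSpeed;
    }

    public static void main(String[] args) {
        double vIn = 9.8;       // km/s - Pioneer 10's approach speed, from the text
        double uJupiter = 13.1; // km/s - Jupiter's approximate orbital speed (assumed)
        System.out.printf("Idealized ceiling: %.1f km/s%n", maxExitSpeed(vIn, uJupiter));
        // Pioneer 10's actual 22.4 km/s sits well below this roughly 36 km/s
        // ceiling, as expected for a real, angled fly-by rather than a
        // perfectly head-on bounce.
    }
}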
As well as affecting spacecraft, the gravitational influence of planets also affects the distribution of asteroids and comets. There are families of small bodies, for example the Apollo and the Plutino asteroids, which converge on a particular shape and size of orbit because their members have been repeatedly subjected to small gravitational attractions from the planets. There are also individual, one-off gravitational effects that can send objects such as comets either plummeting into the inner Solar System or hurtling out beyond the planets. Watching for these 'wild cards' is a prime area of study for ESA, as the geological record on Earth shows that asteroids have occasionally collided with our planet in the past.
For our second meeting, on Tuesday 19th July at 19:30, we'll be discussing the Beyond 2000 report edited by Robin Millar and Jonathan Osborne (Download .PDF 142kB). The report set out to answer four questions:

- What are the successes and failures of science education to date?
- What science education is needed by young people today?
- What might be the content and structure of a suitable model for a science curriculum for all young people?
- What problems and issues would be raised by the implementation of such a curriculum, and how might these be addressed?

The report also made ten recommendations which were hugely influential in shaping the current National Curriculum for Science and the way we teach science in the UK — many of the recommendations listed below were adopted in the changes to the National Curriculum for Science made in 2006 and in the GCSE courses that were developed following these changes.

- The science curriculum from 5 to 16 should be seen primarily as a course to enhance general 'scientific literacy'.
- At Key Stage 4, the structure of the science curriculum needs to differentiate more explicitly between those elements designed to enhance 'scientific literacy', and those designed as the early stages of a specialist training in science, so that the requirement for the latter does not come to distort the former.
- The curriculum needs to be presented clearly and simply, and its content needs to be seen to follow from the statement of aims (above). Scientific knowledge can best be presented in the curriculum as a number of key 'explanatory stories'. In addition, the curriculum should introduce young people to a number of important ideas-about-science.
- The science curriculum needs to contain a clear statement of its aims – making clear why we consider it valuable for all our young people to study science, and what we would wish them to gain from the experience. These aims need to be clear, and easily understood by teachers, pupils and parents. They also need to be realistic and achievable.
- Work should be undertaken to explore how aspects of technology and the applications of science currently omitted could be incorporated within a science curriculum designed to enhance 'scientific literacy'.
- The science curriculum should provide young people with an understanding of some key ideas-about-science, that is, ideas about the ways in which reliable knowledge of the natural world has been, and is being, obtained.
- The science curriculum should encourage the use of a wide variety of teaching methods and approaches. There should be variation in the pace at which new ideas are introduced. In particular, case-studies of historical and current issues should be used to consolidate understanding of the 'explanatory stories', and of key ideas-about-science, and to make it easier for teachers to match work to the needs and interests of learners.
- The assessment approaches used to report on pupils' performance should encourage teachers to focus on pupils' ability to understand and interpret scientific information, and to discuss controversial issues, as well as on their knowledge and understanding of scientific ideas.
- In the short term: The aims of the existing science National Curriculum should be clearly stated, with an indication of how the proposed content is seen as appropriate for achieving those aims.
Those aspects of the general requirements which deal with the nature of science and with systematic inquiry in science should be incorporated into the first Attainment Target 'Experimental and Investigative Science' to give more stress to the teaching of ideas-about-science; and new forms of assessment need to be developed to reflect such an emphasis.

- In the medium to long term: A formal procedure should be established whereby innovative approaches in science education are trialled on a restricted scale in a representative range of schools for a fixed period. Such innovations are then evaluated and the outcomes used to inform subsequent changes at national level. No significant changes should be made to the National Curriculum or its assessment unless they have been previously piloted in this way.

- What bits of the report do you agree / disagree with most strongly?
- Why do you think this report was so influential?
- Are the arguments still valid today? Could the same critique be mounted again?
- What does the report fail to say? (What would you include in the report if you were writing it today?)
THE MAPPING OF THE GREAT LAKES, 1670–2007
by Christopher Baruth

The mapping of the Great Lakes region began in the early seventeenth century, when the first indications of the lakes appeared on maps made by European cartographers. By the mid-1600s, the maps of French Royal Geographer Nicolas Sanson had recognizable depictions of all five Great Lakes. His map is imprecise—Lake Superior lacks its distinctive shape and is unbounded on the west—but Lakes Ontario, Erie, Huron and Michigan can be discerned without difficulty. The lack of any reference to the Mississippi River in Sanson's map reflects how little cartographers really knew about the region at the time.

Until the late eighteenth century, maps were made with information acquired in an irregular and imprecise manner. They were not based on formal surveys, but on written records supplemented with sketches by explorers, missionaries and trappers traveling the Upper Midwest. European cartographers had the task of fitting together this often contradictory information and putting the results into the framework of a geographic map. Instead of being mapped in terms of latitude and longitude, prominent places were usually located in relation to other places, which were, of course, similarly positioned. Distances could not be measured with any accuracy at this time, so these maps were liable to gross errors.

The early maps in the "Making Maps, Mapping History" exhibit provide a capsulized view of the growth of geographical knowledge of the Great Lakes region. As noted above, Sanson's map was the first to display all five of the Great Lakes. Vincenzo Maria Coronelli, the cosmographer of the Republic of Venice, used information supplied by Jesuit missionaries in a 1688 map that was the first accurate depiction of the Great Lakes and the Mississippi River. French cartographer Guillaume De L'Isle further refined the image and provided an outline that was not substantially improved until surveyors entered the region in the nineteenth century.

The first official government surveys of the Great Lakes were hydrographic surveys conducted by the British Admiralty under the direction of Capt. Henry W. Bayfield. Bayfield spent his entire career surveying the St. Lawrence River and the Great Lakes, beginning with Lake Superior in 1816. The lake-shore city of Bayfield, Wis., was named in honor of this pioneer surveyor.

One of the first acts of the new government of the United States was to establish a system for the orderly settlement of its western lands. Under the Ordinance of 1785, land surveyors went into the western territories in advance of settlement to divide the land into townships of 36 square-mile sections. Though mapping was not the government's primary aim, the surveys provided ample grist for the mapmaker's mill, and the regions were, for the first time, mapped with considerable accuracy. The federal government, however, was not yet in the business of making maps for the public. That was left to enterprising individuals, such as Samuel Morrison, Elisha Dwelle and Joshua Hathaway, who produced one of the first topographical maps of the Wisconsin Territory in 1837.

The surveys of the General Land Office served as the basis for the mapping of much of the Great Lakes region from around 1800, when the surveys began, until about 1890, when the U.S. Geological Survey began to map the region again. In some cases, however, the old surveys were not entirely superseded until the mid-20th century.
The distinctive feature of maps based on these surveys is the invariable presence of the township grid.

As population and commerce in the Great Lakes region grew, the federal government assumed responsibility for charting the lakes for navigation. The U.S. Lake Survey began in 1841 with an appropriation of $15,000. Before the Civil War, the work was conducted by officers of the Corps of Topographical Engineers. Its initial survey was completed in 1882, but the need for continuous revisions caused it to be reactivated a few years later. The Topographical Engineers merged with the U.S. Army Corps of Engineers in 1863, and the Lake Survey remained in the hands of the Corps of Engineers until 1970, when it became part of the newly formed National Ocean Survey (now known as the National Ocean Service).

The U.S. Lake Survey conducted far more rigorous surveys than those of the General Land Office, which used instruments no more sophisticated than a surveyor's compass and a Gunter's chain. The Lake Survey used an array of precision instruments and employed triangulation to form the geographic framework of the maps. Triangulation allowed the transfer of geographical coordinates from point to point throughout the system and, for the first time, geographical locations were determined with precision.

Inland navigation prompted Congress to order a variety of government surveys. During the era of canal building, surveys like the one for the Portage Canal were common. Most of them were conducted by the Corps of Engineers, as were the surveys of the great rivers, such as the Mississippi. The degree of accuracy accorded Great Lakes navigators was generally not matched on land for many years to come.

The task of precisely mapping the United States by covering it with large-scale topographic quadrangle maps was given to the newly formed U.S. Geological Survey (USGS) in the 1880s. John Wesley Powell, the second director of the USGS, stated that the mapping of the United States could be accomplished in 25 years, but that goal was not accomplished until the 1980s. The first topographic maps of Wisconsin appeared in the 1890s, when much of the southeastern part of the state was surveyed. The surveys were quickly done, however, and most of the sheets needed at least minor revision within the next decade. Despite a rapid start, the topographic mapping of Wisconsin bogged down and ultimately was not completed until 1983.

Mapping standards changed entirely with the application of aerial photography around 1930. Following World War II, all Wisconsin topographic sheets were derived from photographs. Today, polar-orbiting satellites with thematic mappers can, in a single day, record images that reveal Great Lakes water quality and temperature, the streets and large buildings of urban areas, and the general health of forests, wetlands and farmlands, including the identity of such crops as corn, hay and alfalfa. The detailed precision of today's computerized Space Age technology no doubt would have astounded Nicolas Sanson—but the seventeenth-century mapmaker's ability to create a fairly accurate map of a world he had only heard and read about is equally astounding to twenty-first-century mapmakers.
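The triangulation that gave the Lake Survey its precision rests on simple plane geometry: measure one baseline carefully, sight a distant point from both ends of it, and the two angles fix that point's position; the newly fixed point can then anchor the next triangle, which is how coordinates were carried from point to point throughout the system. A minimal sketch of the computation follows (Java; the baseline length and angles are invented for illustration):

public class TriangulationSketch {

    // Locate point P from a baseline A-B of known length, given the
    // interior angles (in degrees) measured at A and at B between the
    // baseline and the sight lines to P. A is placed at the origin,
    // with B along the x-axis.
    static double[] locate(double baselineKm, double angleAtA, double angleAtB) {
        double a = Math.toRadians(angleAtA);
        double b = Math.toRadians(angleAtB);
        double p = Math.PI - a - b;                              // angle at P: triangle sums to pi
        double distAP = baselineKm * Math.sin(b) / Math.sin(p);  // law of sines
        return new double[] { distAP * Math.cos(a), distAP * Math.sin(a) };
    }

    public static void main(String[] args) {
        // A 10 km baseline; surveyors at each end sight the same landmark.
        double[] point = locate(10.0, 60.0, 70.0);
        System.out.printf("P lies at (%.2f km, %.2f km)%n", point[0], point[1]);
    }
}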
I'm new to Java and I have a simple question, but I don't understand it. Can someone help me please?

In this exercise, we will use the String class to manipulate string variables. Write a program that reads in two String variables from the user and stores them in variables stringOne and stringTwo respectively. Please note that the input from the user should not be limited to single-word strings. Perform the following operations as listed below, in the given sequence:

• Prompt the user for a specified index and display the character in that position for both strings.
• Prompt the user for two characters, i.e. charA and charB; replace every occurrence of charA in stringOne with charB. Conversely, replace every occurrence of charB in stringTwo with charA.
• Replace both strings with their lowercase equivalents respectively and display both.
• Concatenate both lowercase strings, store the result in stringTwo and display it.

For each of the operations listed above, you will need to print the resulting output to screen. You only need to use the methods from the String class to complete these operations. You may assume that the input has been validated.
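Since the exercise says only String methods are needed, here is one possible sketch of a solution. This is an illustration rather than the official prac answer: the prompt wording, class name and use of Scanner are my own choices, and error handling is omitted because the sheet says input has been validated.

import java.util.Scanner;

public class StringExercise {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);

        System.out.print("Enter the first string: ");
        String stringOne = in.nextLine();   // nextLine() keeps multi-word input intact
        System.out.print("Enter the second string: ");
        String stringTwo = in.nextLine();

        // 1. Display the character at a user-specified index in both strings.
        System.out.print("Enter an index: ");
        int index = Integer.parseInt(in.nextLine());
        System.out.println("stringOne: " + stringOne.charAt(index));
        System.out.println("stringTwo: " + stringTwo.charAt(index));

        // 2. Swap every occurrence of charA and charB between the strings.
        System.out.print("Enter charA: ");
        char charA = in.nextLine().charAt(0);
        System.out.print("Enter charB: ");
        char charB = in.nextLine().charAt(0);
        stringOne = stringOne.replace(charA, charB);
        stringTwo = stringTwo.replace(charB, charA);
        System.out.println("stringOne: " + stringOne);
        System.out.println("stringTwo: " + stringTwo);

        // 3. Replace both strings with their lowercase equivalents and display them.
        stringOne = stringOne.toLowerCase();
        stringTwo = stringTwo.toLowerCase();
        System.out.println("stringOne: " + stringOne);
        System.out.println("stringTwo: " + stringTwo);

        // 4. Concatenate the lowercase strings, store the result in stringTwo, display it.
        stringTwo = stringOne.concat(stringTwo);
        System.out.println("stringTwo: " + stringTwo);

        in.close();
    }
}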
Whether because I play the piano by ear (perhaps promoting a heightened sensitivity to sound) or whether because there are notable reverberations in each season of the year in the Northern Hemisphere, I am enthralled by the characteristic dins, rackets, noises, music, tones and notes peculiar to an Ontario summer. Foremost among these is that of the cicada. Seldom have I seen a cicada except on its back on the road, perhaps struggling to right itself but most often dead or dying, a hardened chrysalis, inert legs straight into the air. Remarkably this insect can live up to 17 years, though much of that time is spent underground until it emerges to perch hidden among the tall branches of a large leafy-green tree where it entertains us with its signal mating song.

“The ‘singing’ of male cicadas is not stridulation such as many familiar species of insects produce—for example crickets. Instead, male cicadas have a noisemaker called a tymbal below each side of the anterior abdominal region. The tymbals are structures of the exoskeleton formed into complex membranes with thin, membranous portions and thickened ribs. Contraction of internal muscles buckles the tymbals inwards, thereby producing a click; on relaxation of the muscles, the tymbals return to their original position, producing another click. The male abdomen is largely hollow, and acts as a sound box. By rapidly vibrating these membranes, a cicada combines the clicks into apparently continuous notes, and enlarged chambers derived from the tracheae serve as resonance chambers with which it amplifies the sound. The cicada also modulates the song by positioning its abdomen toward or away from the substrate. Partly by the pattern in which it combines the clicks, each species produces its own distinctive mating songs and acoustic signals, ensuring that the song attracts only appropriate mates.”

When the soporific drone of the cicada is heard it is assured to mark the month of August and the height of summer. Normally the distinctive sound is accompanied by exceedingly hot weather, searing yellow sunshine and soft dry winds. “Most of the North American species are in the genus Neotibicen: the annual or jar fly or dog-day cicadas (so named because they emerge in late July and August).”

“The expression ‘dog days’ refers to the hot, sultry days of summer, originally in areas around the Mediterranean Sea, and as the expression fit, to other areas, especially in the Northern Hemisphere. The coincidence of very warm temperatures in the early civilizations in North Africa and the Near East with the rising, at sunrise (i.e., the heliacal rising), of Orion’s dog, the dog star Sirius, led to the association of this phrase with these conditions, an association that traces to the Egyptians and appears in the ancient written poetic and other records of the Greeks (e.g., Hesiod, Aratus, and Homer in The Iliad) and the later Romans.”

The cicada’s hypnotic song heralds the pleasures of the burgeoning agricultural crop, in particular sweet corn growing emerald and tall crowned by its golden tiara. The tranquillizing refrain of the cicadas is a lazy background symphony in harmony with the distant wavering fields on a hot summer’s day!

The lure of water in the summer is undeniable. Even if one is not fortunate to have a lakeside cottage there are other opportunities to satisfy the primal aquatic urge; viz., lake or river inns and restaurants, marinas, boat tours, canals and private pools in manicured backyards or embedded in tranquil meadows.
Such venues might include the resonance of motor boats, the racing pitch of seadoos, the burble of yachts, the croaking of bullfrogs, the squawk of seagulls, splashing swimmers and the shrieks of children; sometimes the whisper of wind in languorous overhanging willow trees.

Al fresco dining is a summer tradition. Just as we Northerners instinctively turn our faces to the penetrating rays of summer sunshine, likewise we respond to the seduction of out-of-doors dining to milk the treasured summer days of their ephemeral bounty. Who isn’t stirred by the sizzling sound of a mixed grill? Perhaps afterwards roasting marshmallows on a crackling campfire, ascending sparks exploding against a pitch-black sky. Rustic dining may at times coincide with the unwelcome shrill of mosquitoes; but there is also the endearing buzz of diligent bees. Many late summer evenings are filled with the muffled sounds of lethargic conversation following a bottle of fine wine and a wholesome meal. An early morning breakfast on the patio will afford the mesmerizing chorus of bird chirps as one sips a cup of restorative coffee and listens attentively as the world comes alive once again amid the cacophony, including the ceaseless stridency of the metronomic staccato of crickets and, perhaps if you’re lucky, the call of a loon on a serene lake.

Many of us profit by the irresistible summer weather to walk, run or cycle. In Almonte we have the advantage of nearby bucolic country roads for such athletic purposes. A stand of poplar trees when stirred by a strong wind is an unmistakeable sound. The leaves almost talk, very similar to the sound of light rain falling. “(The) name ‘poplar’ originates from ancient Rome. During the 6th century, Romans planted poplars in areas where public meetings were held. Latin name for people is ‘populus’, hence the name.”

When the weather shifts from drenching sunshine to relieving rain we may be treated to rolls of booming thunder and startling cracks of aerial electricity. Sheets of rain against locked windows invariably invite us to pull back the sheers to watch – captivated – the unfolding force of Nature while listening to the splatter of the driving rain.

Certain sounds of summer, though equally idiosyncratic, are less than “natural” yet very much commonplace. Consider the seasonal sound of motorcycles, sports cars, skate boards, lawn mowers and road construction. Less offensive are the various renditions of lawn sprinklers, the easy-going arc-type and the chattering repeaters. Though exceedingly popular in the United States of America, leaf blowers have yet to reach the same pinnacle of presence here for pushing back shards of grass-cuttings from walkways and driveways. Don’t overlook the hum of air conditioners! Or the crack of lawn bowls or a golfer’s driver.

Not to be passed over is the welcome effect of the sonorous voices of our neighbours who during the winter months have frequently been cloistered from both sight and sound. It is astonishing to rediscover communication with one’s friends through the increasing frequency of yard work, porch-sitting and sunbathing, all peculiar to the liberating summer weather. At times those well-known voices are mixed with the hospitable blend of visiting children and grandchildren, prompting reunions and gossip which might never have transpired in the forbidding depths of winter.

One day however you shall hear the sharp cry of the bluejay. It warns the end of the sounds of summer.
Warming ocean temperatures resulting from climate change are endangering fish stocks around the world, shrinking some populations by as much as 35 percent, according to a new study reported by CNN.

The study shows that rising ocean temperatures over the past 80 years have led to declining sustainable catches in 124 fish and shellfish species. This figure represents the amount that can be fished without causing long-term damage to the populations. Overall, the study found a decline of 4 percent as a result of rising temperatures.

The researchers, from Rutgers University in New Jersey, analyzed data on fisheries and ocean temperatures to evaluate changes in sustainable catches from 1930 to 2010. The most dramatic declines were in regions along Asia's Pacific coast, such as the East China Sea and Japan's Kuroshio Current, where seafood stocks declined between 15 and 35 percent over the past 80 years. Many of the world's fastest growing populations rely on fish stocks in these regions.

"Ecosystems in East Asia have seen enormous declines in productivity. These areas have particularly rapid warming [and] also have historically high levels of overfishing," according to the study's lead author, University of California Santa Barbara quantitative ecologist Chris Free.

These effects have been exacerbated by overfishing, Free says. Reducing the reproductive capacity makes the populations more vulnerable to the longer-term effects of climate change. Some species, such as black sea bass on the US Atlantic coast, have actually benefited from the warmer temperatures. But if warming continues, scientists say those species are likely to decline as well once they reach their own temperature thresholds.

The researchers said they were "stunned" by the results, which could have grave repercussions for much of the world. Over 56 million people work in the fishing industry around the world. A total of 3.2 billion people rely on seafood as a protein source, and in developing countries, seafood accounts for half of all animal protein consumed.

Free says governments can help address the problem by enforcing rules against overfishing, and by using trade agreements to share supplies between areas whose stocks have been hurt by warming oceans and those whose stocks have been boosted by them.

The study was published in the March 1st edition of the journal Science.
Example Lesson Plans: Common Core: Fact and Fiction

The students will listen to The Carnival of the Animals: The Aquarium, discuss why the music is "fish-like", then create a movement and/or dance using streamers that goes along with the music.

The student will play "One fish, Two fish, Three fish, Four" on recorder with one hundred percent accuracy. This song is a representation of the fish that are in the book, McElligot's Pool. In the beginning there is just one fish (fish #1), but then it is joined by many fish (fish #2, 3, and 4). The close harmonies represent the fish all trying to get to McElligot's Pool, and then the final chord represents them finally finding the pool.

The student will create a paper plate fish in class, name their fish, and then place them on a "sea wall" in the classroom. I will pass out paper plates, markers, crayons, and whatever other craft items might be available and have the students create their own fish. After the students are done, I plan to have a long row of blue paper across one of the walls in the classroom and have the students place them on it. This activity ties in with my book, McElligot's Pool, because it's about the different types of fish that may be traveling to McElligot's Pool. I've included a link below to a site on how to make the paper plate fish.

The student will identify the number of and the color of each type of fish and then respond with the correct type of rhythm. Throughout the book McElligot's Pool, each page is either in black and white or color. Also, throughout the book are pages with multiple fish and some with just one fish. As I read the book the students will identify whether or not the page is black and white or color, then they will respond with the correct rhythm according to the picture above. As I turn each page I will give them a few seconds to look at it, then I'll count them off by giving them four beats and then let them clap the correct rhythm.

The student will show how dynamics change in the music through physical actions such as marching and dancing. The fourth movement of the symphonic poem Pines of Rome, by Ottorino Respighi, is a wonderful example of contrasting dynamics. I had in mind that the teacher could start the song at around 3 minutes. They would have the students stand in a circle and start marching on beat with the music. As the music gets loud and soft, they would exaggerate or simplify their movements. This coincides with the book the teacher would be teaching, McElligot's Pool, in that the young man's idea of the pool gets larger and larger as the story progresses.
Yesterday's blog featured a number of apps that could prove useful for teachers making their way through the Common Core State Standards. Today we'll provide a few additional resources in the area of reading instruction.

Reading instruction is no easy task. It doesn't help that most teachers receive an inadequate foundation in research-based reading instruction...a foundation from which they'd be empowered to build successful teaching strategies. Resources can only help!

The University of Texas at Austin houses a large resource library for all grade levels. The resources provided can be filtered by topic area (e.g., reading instruction, math instruction, writing instruction, ELL), resource type (e.g., lessons, presentations, podcasts, relevant book chapters, and guides), and audience (e.g., parents, general education teachers, and researchers). Below are just a few examples of the materials available to a general education elementary teacher interested in reading instruction. Some resources place more emphasis on certain students, like those with disabilities or English language learners:

- Strategies for Improving Students' Reading Fluency (Book Chapter)
- Reading Support for Spanish-Speaking Students in Elementary Classrooms (Video)
- A Comparison of Responsive Interventions on Kindergarteners' Early Reading Achievement (Journal Article)
- Building RtI Capacity: Instructional Decision-Making Procedures for Ensuring Appropriate Instruction for Struggling Students in Grades K-3 (Guide)
- Preventing and Remediating Reading Difficulties: Perspectives from Research (Book Chapter)
- A 3-Tier Model: Promising Practices for Reading Success (DVD)

For both teachers and parents, we've featured a couple of apps below that could aid in their effort to ensure that reading is both fun and interesting for young readers!

My Beastly ABCs

This colorful app was featured in USA Today's Top 10 iPad Apps for Kids 2012 and has been compared to Where the Wild Things Are. The pages are interactive and present children with a full story, complete with a beginning and end. In telling the story, creators have incorporated some rhyming and given children the opportunity to practice using bigger words. Children can either read to themselves or be guided by a narrator. One user wrote, "A great, fun, entertaining book with educational value! This app has amazing animation and great music to keep my children's attention. They ask for it every night at bedtime." (iTunes)

Children can select from more than 300 e-books in areas ranging from music to historical figures. Parents are empowered to track their child's progress via a dashboard. Children have the option to read on their own or be guided by narration. To keep things interesting, creators have included video clips from the show to further engage young readers. One user wrote, "This is marvelous. It has everything I've been looking for but failing to find in a reading app for my nearly 4 y.o. daughter. There are hundreds of books in all manner of styles about a wide variety of subjects, from whimsical fun to exploratory science." (iTunes, Android)

While these resources are valuable, good reading instruction requires a teacher empowered with the critical tools to teach. Please check out our Change.org e-petition to see how you can lend a hand in making sure that teachers are given the necessary tools to ensure all children can read.
People who live in hazard-prone places devise methods for protecting themselves and their livelihoods. These methods are based on their own skills and resources, as well as their knowledge of their local environments and experiences of hazard events in the past. Their knowledge systems, skills and associated technologies are generally referred to under the broad heading of 'indigenous' or 'traditional' knowledge. The application of indigenous or local knowledge in the face of hazards and other threats is generally referred to as a traditional 'coping mechanism' or 'coping strategy'. It is also sometimes known as an 'adjustment' or 'adaptive' mechanism or strategy, and as a 'survival' strategy when applied in extreme circumstances. The choice of skills and resources varies according to the nature of the hazard threat, the individual or collective capacities available to deal with it and a variety of individual, household or community priorities that can change during the course of a disaster.

Indigenous knowledge is acquired through experiences of living in specific environments over a long period of time. It is passed down from one generation to the next and continually added to or modified in the light of new experiences or experiments, as well as in response to external change. It is specific to its particular locality, community or culture, which is why it has been suggested that 'local knowledge' is a more appropriate term. [B. Wisner, 'Local Knowledge and Disaster Risk Reduction', unpublished talk at the Side Meeting on Indigenous Knowledge, Global Platform for Disaster Reduction, Geneva, 17 June 2009.] However, it can also incorporate outside specialist knowledge of various kinds, such as weather forecasts. It is not limited to any region or socio-ethnic group, although most studies of indigenous knowledge have taken place in low-income countries and communities. It is a form of social knowledge: acquired, shared, preserved and transmitted within communities (hence it is sometimes referred to, in a hazard/disaster context, as a 'disaster subculture'). However, particular kinds of knowledge may be held by different social groups in those communities. Indigenous knowledge is wide-ranging. It includes technical expertise in seed selection and house building, knowing where to find wild foods, economic knowledge of where to buy or sell essential items or find paid work, and knowledge of whom to call on for assistance.

Indigenous or traditional knowledge is not static. People are constantly adding to, adapting and testing their knowledge and skills to deal with altered or new situations, which is why local communities have been described as 'workshops of knowledge production'. [Wisner, 'Local Knowledge and Disaster Risk Reduction'.]

The enthusiasm for sophisticated technological methods of overcoming disasters has often led specialists to overlook and undervalue the effectiveness of local coping strategies and technologies, and they are under-utilised by formal agencies. However, the growing interest in the potential of people's adaptive capacities, especially in the context of climate change, may help towards a more constructive appraisal of local knowledge and coping. There have been many studies of coping strategies relating to food security, drought and famine, where disaster specialists have come to appreciate their extent, diversity and value. This came about in part from recognition many years ago that agencies' orthodox approaches to fighting famine were not effective enough.
Coping strategies in other natural hazards contexts have not received so much attention, but there is a growing body of evidence and experience demonstrating their value and explaining the circumstances that affect their adoption. By learning how people perceive and respond to threats, interventions can be developed that build on the strengths of their existing strategies. It is important for development and relief/recovery workers to appreciate the extent of such indigenous skills and practices, and to support them to maximum effect. This approach helps to make communities partners in the risk management process. It can also be cost-effective where it reduces the need for expensive external interventions. It is more likely to lead to sustainable projects because the work is based on local expertise and resources. Identification of local knowledge – and those who possess it – should be one of the starting points for agencies trying to stimulate community-based DRR initiatives.

Indigenous and local skills, knowledge and technologies are not inherently inadequate. New, external technical approaches are not automatically superior. However, the opposite, romantic trap of assuming that older ways are always better than the modern must also be avoided. Instead, one must look for what is appropriate for specific purposes and in given conditions. In many cases, a combination of different knowledge sets and skills may be useful, and ways of integrating them (or 'co-producing' knowledge) need to be found. It is also important to take account of the diversity of individuals' perceptions and decision-making processes with regard to risk and risk management, which may be influenced by a range of psychological and socio-cultural factors.
A gesture is a form of nonverbal communication. Unlike verbal communication, which uses words, gestures use movements of the body. Gestures are deliberate acts with a purpose, not just any movements. They are handled in the human brain by the same centres which handle language: in the neocortex, Broca's and Wernicke's areas are used by both speech and sign language.

Other movements of the body have meaning, but are not gestures. They may be done by actors deliberately, but normally they are automatic, unconscious signals which express various feelings or states of mind. Many animals communicate extensively using such behaviours. The dog, a companion to humans for a long time, is good at reading both gestures and behaviours.

Gestures have a cultural significance (meaning). The same gesture can mean different things in different cultures and different parts of the world.

Examples

- High five – a greeting or sign of happiness
- Facepalm – showing frustration
- Sign of the cross – religious gesture
- V sign – victory or peace sign

Anti-examples

These are not gestures, because they are not voluntary, deliberate actions. They are part of our inherited automatic reactions or behaviours.

- Screaming out in great pain or terror
- Baring our teeth in anger
- Cowering, turning the face away and running in terror
- Many behaviours of babies, such as suckling on nipples

Humans can modify these reactions to some extent with training and practice, but they are fundamentally caused by hormones and brain centres which are very ancient.
Ocean thermal energy conversion (OTEC), form of energy conversion that makes use of the temperature differential between the warm surface waters of the oceans, heated by solar radiation, and the deeper cold waters to generate power in a conventional heat engine. The difference in temperature between the surface and the lower water layer can be as large as 50 °C (90 °F) over vertical distances of as little as 90 metres (about 300 feet) in some ocean areas. To be economically practical, the temperature differential should be at least 20 °C (36 °F) in the first 1,000 metres (about 3,300 feet) below the surface. In the first decade of the 21st century, the technology was still considered to be experimental, and thus far no commercial OTEC plants have been constructed. The OTEC concept was first proposed in the early 1880s by the French engineer Jacques-Arsène d’Arsonval. His idea called for a closed-cycle system, a design that has been adapted for most present-day OTEC pilot plants. Such a system employs a secondary working fluid (a refrigerant) such as ammonia. Heat transferred from the warm surface ocean water causes the working fluid to vaporize through a heat exchanger. The vapour then expands under moderate pressures, turning a turbine connected to a generator and thereby producing electricity. Cold seawater pumped up from the ocean depths to a second heat exchanger provides a surface cool enough to cause the vapour to condense. The working fluid remains within the closed system, vaporizing and reliquefying continuously. Some researchers have centred their attention on an open-cycle OTEC system that employs water vapour as the working fluid and dispenses with the use of a refrigerant. In this kind of system, warm surface seawater is partially vaporized as it is injected into a near vacuum. The resultant steam is expanded through a low-pressure steam turbogenerator to produce electric power. Cold seawater is used to condense the steam, and a vacuum pump maintains the proper system pressure. Hybrid systems, which combine elements of closed-cycle and open-cycle systems, also exist. In these systems, steam produced by warm water passing through a vacuum chamber is used to vaporize a secondary working fluid that drives a turbine. During the 1970s and ’80s the United States, Japan, and several other countries began experimenting with OTEC systems in an effort to develop a viable source of renewable energy. In 1979 American researchers put into operation the first OTEC plant able to generate usable amounts of electric power—about 15 kilowatts of net power. This unit, called Mini-OTEC, was a closed-cycle system mounted on a U.S. Navy barge a few kilometres off the coast of Hawaii. In 1981–82 Japanese companies tested another experimental closed-cycle OTEC plant. Located in the Pacific island republic of Nauru, this facility produced 35 kilowatts of net power. Since that time researchers have continued developmental work to improve heat exchangers and to devise ways of reducing corrosion of system hardware by seawater. By 1999 the Natural Energy Laboratory of Hawaii Authority (NELHA) had created and tested a 250-kilowatt plant. The prospects for commercial application of OTEC technology seem bright, particularly on islands and in developing countries in the tropical regions where conditions are most favourable for OTEC plant operation. It has been estimated that the tropical ocean waters absorb solar radiation equivalent in heat content to that of about 250 billion barrels of oil each day. 
Removal of this much heat from the ocean would not significantly alter its temperature, but it would permit the generation of tens of millions of megawatts of electricity on a continuous basis. Beyond the production of clean power, the OTEC process also provides several useful by-products. The delivery of cool water to the surface has been used in air-conditioning systems and in chilled-soil agriculture (which allows for the cultivation of temperate-zone plants in tropical environments). Open-cycle and hybrid processes have been used in seawater desalination, and OTEC infrastructure allows access to trace elements present in deep-ocean seawater. In addition, hydrogen can be extracted from water through electrolysis for use in fuel cells. OTEC is a relatively expensive technology, since the construction of costly OTEC plants and infrastructure is necessary before power can be generated. However, once facilities are made operational, it may be possible to generate relatively inexpensive electricity. Floating facilities may be more feasible than land-based ones, because the number of land-based sites with access to deep water in the tropics is limited. Few cost analyses exist; however, one study, which was performed in 2005, placed the cost of electricity produced by OTEC at 7 cents per kilowatt-hour. Although this figure was based on the assumption of a 100-megawatt OTEC facility located some 10 km (6 miles) off the coast of Hawaii, it is comparable to the cost of energy derived from fossil fuels. (The cost of coal-generated electricity is estimated at 4–8 cents per kilowatt-hour.)
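The scale claims above can be sanity-checked in a few lines of Python. A rough sketch (the per-barrel energy content and the capture and efficiency fractions are assumed illustrative values, not figures from the article):

# Rough scale check on the oil-equivalent figure quoted above.
BARREL_OIL_JOULES = 6.1e9   # assumed: ~6.1 GJ of heat per barrel of oil
SECONDS_PER_DAY = 86_400

heat_watts = 250e9 * BARREL_OIL_JOULES / SECONDS_PER_DAY
print(f"heat absorbed: {heat_watts:.1e} W")  # ~1.8e16 W of continuous heating

# Tapping even 3% of that heat at 3% thermal efficiency (assumed fractions)
# lands in the range the article describes:
usable_watts = heat_watts * 0.03 * 0.03
print(f"usable power: {usable_watts / 1e12:.0f} million MW")  # ~16 million MW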
Part Three: Focal Length and Crop Factor
(Part Two covered auto and manual exposure modes.)

The focal length of your lens is responsible for the camera’s “zoom.” More specifically, it is the distance from the camera’s sensor to the point in the lens where the light converges to form your image. Focal length indicates how much of the scene in front of you will be in your image and how small or large the objects in the scene will be. A longer focal length creates a smaller angle of view, which makes the subject of your image seem closer or more magnified. A shorter focal length creates a wider angle of view of the scene, capturing more of it and making its objects appear farther away.

The L16 has lenses with three different focal lengths: 28mm, 70mm, and 150mm. This allows the camera to take multiple images from different perspectives: the 28mm lenses take a more “zoomed out” photo, while the 70mm and 150mm lenses produce images that magnify the objects in the scene and capture a narrower view.

In digital cameras, the image sensor is what captures the light coming in through the lens and converts it to a photo. The size of the sensor dictates how much light it uses to produce the image. Full frame sensors found in today's high-end DSLRs are the equivalent size of film in older cameras—35mm film, specifically, became the prime reference point because of its popularity. When camera technology made the switch from film to digital, manufacturers introduced different-sized sensors, many of which were smaller than their full frame, 35mm film equivalents. That introduced a problem: images appeared narrower. The lens was still projecting a full image circle, but the smaller sensor was recording only a central portion of it. Focal length is supposed to indicate an image’s field of view, or essentially what is captured in it, and these smaller sensors were distorting this relationship.

“Crop factor” is used to account for this problem. It is the ratio of the diagonal of a full frame (35mm film) image to the diagonal of the camera’s sensor. Photographers use crop factor to calculate the 35mm-equivalent focal length when their camera’s sensor is smaller than full frame.

The L16 has a focal length range of 28mm to 150mm, expressed as the equivalent full frame, or 35mm film, focal lengths. Imagine a full frame DSLR with a fixed, 70mm prime lens. What you see through that lens is the same field of view as what you would see with the L16’s multiple modules at 70mm. Even though each of the L16’s sensors is small, when they are combined computationally with our lenses, they yield the same result as a full-frame camera—just in a much smaller, less cumbersome way.
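Since crop factor is just a ratio of sensor diagonals, the conversion is a one-liner. A small sketch in Python (the APS-C sensor dimensions below are typical assumed values, and the helper names are my own, not from the article):

import math

FULL_FRAME_DIAGONAL_MM = math.hypot(36.0, 24.0)  # ~43.27 mm for a 35mm film frame

def crop_factor(sensor_w_mm, sensor_h_mm):
    """Ratio of the full-frame diagonal to this sensor's diagonal."""
    return FULL_FRAME_DIAGONAL_MM / math.hypot(sensor_w_mm, sensor_h_mm)

def equivalent_focal_length(focal_mm, sensor_w_mm, sensor_h_mm):
    """35mm-equivalent focal length for a lens on a smaller sensor."""
    return focal_mm * crop_factor(sensor_w_mm, sensor_h_mm)

# A 50mm lens on a typical APS-C sensor (23.6 x 15.7 mm, assumed)
# frames the scene like a ~76mm lens on full frame.
print(round(equivalent_focal_length(50, 23.6, 15.7)))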
Green Sea Turtle
Unlike other turtle species, the green turtle did not get its name from its shell. Rather, its name comes from the color of its skin. Green turtles can be found in a number of tropical and subtropical oceans. These animals are characterized by their vegetarian diets, feeding on algae and plants. Green turtles are known for their fascinating reproduction process, which involves a time-consuming migration to the beaches where mothers lay 100–200 eggs. After arrival, the eggs are buried by the mother and abandoned. When the babies hatch, they must navigate their way to the ocean, and unfortunately many young die during this journey. Classified as an endangered species, the green turtle has declined in number due to a variety of threats, including hunting, unintentional killing through irresponsible fishing practices, destruction of habitat, and boating accidents. By far, humans are this species' most threatening predator.

Largetooth Sawfish
Known for its unusual saw-like snout, the largetooth sawfish has 14 to 23 large rostral teeth. Sawfish are related to the shark and ray families and can measure up to 22 feet in length! These interesting creatures are most often found in freshwater bodies, such as rivers and streams. Although they have few natural predators, all seven types of sawfish are classified as “critically endangered” by the World Conservation Union (IUCN). Due to low population numbers, these animals are also protected under the US Endangered Species Act. A sawfish has not been found in US waters in over 50 years.
STUDY NOTES FOR EACH CHAPTER IN ‘ANIMAL FARM’

CHAPTER 1 - DREAMS OF A BETTER FUTURE
The farm animals symbolise the different figures from the Russian Revolution of 1917, and this chapter introduces us to the theme of revolution. Old Major is the main figure in this chapter and he lights the spark of revolution on the farm. He is a symbol for the leaders of the Russian revolution. In this chapter many of the animals are described in half-human ways, e.g. Clover the mare, ‘who never quite got her figure back after her fourth foal.’ Orwell will show us the strengths and weaknesses of these animals in very human ways. The writing style is simple and plain, which fits the conventions of the fable.

CHAPTER 2 - THE ANIMALS CONTROL THE FARM
With the death of Old Major, there is now an opportunity for younger figures to continue the revolution and use it to propel themselves into power. Napoleon, Snowball and Squealer are cleverer, sneakier and more aggressive than the other animals, and they soon rise to power as leaders of the revolutionary movement. The other animals are cautious about continuing with the revolution, but Squealer persuades them to continue. The seven commandments are important to ensure there is equality for all animals. The first two commandments and the last seek to unite the animal world and create some basic beliefs for the animals to share.

CHAPTER 3 - IMPROVED LIFE ON THE FARM
The revolution starts to move away from the idea of a classless society, with the pigs failing to contribute their labour. We start to see the beginnings of a social hierarchy (order). The animals go along with this, for they fear a return to the days of Mr. Jones. Squealer sees that by using fear it is easy to persuade the animals. This chapter also shows the division between Snowball and Napoleon. Snowball is the ‘thinker’; his acts and ideas are for the animals’ benefit, e.g. “the Whiter Wool Movement” for the sheep. Napoleon, in contrast, becomes subtly more evil, e.g. his involvement with the puppies.

CHAPTER 4 - THE FARM IS DEFENDED AGAINST MR JONES AND HIS ‘BUDDIES’
Napoleon is never mentioned in this chapter, suggesting he wasn’t involved in the fighting; so how supportive is he of the animals’ revolution? We see Snowball as brave, intelligent and a leader. He has studied Caesar’s battle plans. With Snowball receiving a medal for his heroism, the tension between him and Napoleon builds to a point where Napoleon will take drastic measures. The attack by the humans on the farm shows how threatened they feel by the animals.

CHAPTER 5 - CONFLICT BETWEEN NAPOLEON AND SNOWBALL
The conflict between Snowball and Napoleon increases. Snowball is the thinker and the visionary, whilst Napoleon is authoritarian. The puppies, now killer dogs, show us that the dogs, once a farm resource, have been turned by Napoleon to act against the other animals. This goes against Old Major’s ideas. Squealer’s role increases and he uses persuasive techniques and strategies to calm the animals,...
Happiness is used in the context of mental or emotional states, including positive or pleasant emotions ranging from contentment to intense joy. It is also used in the context of life satisfaction, subjective well-being, eudaimonia and well-being. Since the 1960s, happiness research has been conducted in a wide variety of scientific disciplines, including gerontology, social psychology, medical research and happiness economics. 'Happiness' is the subject of debate over usage and meaning, and over possible differences in understanding across cultures. The word is used in several related areas:
- Current experience, including the feeling of an emotion such as pleasure or joy, or a more general sense of 'emotional condition as a whole'. For instance Daniel Kahneman has defined happiness as "what I experience here and now"; this usage is prevalent in dictionary definitions of happiness.
- Appraisal of life satisfaction, such as of quality of life. For instance Ruut Veenhoven has defined happiness as "overall appreciation of one's life as-a-whole."
- Subjective well-being, which includes measures of life satisfaction. For instance Sonja Lyubomirsky has described happiness as “the experience of joy, contentment, or positive well-being, combined with a sense that one's life is good and worthwhile.”
- Eudaimonia, sometimes translated as flourishing.
These uses can give different results. For instance, income levels have been shown to correlate substantially with life satisfaction measures, but far more weakly, at least above a certain threshold, with affect measures. The implied meaning of the word may vary depending on context, qualifying happiness as a polyseme and a fuzzy concept. Some users continue to use the word because of its convening power.

In the Nicomachean Ethics, written in 350 BCE, Aristotle stated that happiness is the only thing that humans desire for its own sake, unlike riches, health or friendship: he observed that men sought riches, or honour, or health not only for their own sake but in order to be happy. Note that eudaimonia, the term we translate as "happiness", is for Aristotle an activity rather than an emotion or a state. Thus understood, the happy life is the good life, that is, a life in which a person fulfills human nature in an excellent way. Aristotle argues that the good life is the life of excellent rational activity; he arrives at this claim with the Function Argument: every living thing has a function, that which it uniquely does. For humans, Aristotle contends, our function is to reason, since it is that alone that we uniquely do, and performing one's function well, or excellently, is good. Thus, according to Aristotle, the life of excellent rational activity is the happy life. Aristotle does not leave it at that, however; he argues that there is also a second-best life, the life of moral virtue.

Many ethicists make arguments for how humans should behave, either individually or collectively, based on the resulting happiness of such behavior. Utilitarians, such as John Stuart Mill and Jeremy Bentham, advocated the greatest happiness principle as a guide for ethical behavior. Friedrich Nietzsche savagely critiqued the English Utilitarians' focus on attaining the greatest happiness, stating that "Man does not strive for happiness, only the Englishman does." Nietzsche meant that making happiness one's ultimate goal and the aim of one's existence, in his words, "makes one contemptible."
Nietzsche instead yearned for a culture that would set higher, more difficult goals than "mere happiness." He introduced the quasi-dystopic figure of the "last man" as a kind of thought experiment against the utilitarians and happiness-seekers. These small, "last men" who seek after only their own pleasure and health, avoiding all danger, difficulty and struggle, are meant to seem contemptible to Nietzsche's reader. Nietzsche instead wants us to consider the value of what is difficult, what can only be earned through struggle and difficulty, and thus to come to see the affirmative value suffering and unhappiness play in creating everything of great worth in life, including all the highest achievements of human culture, not least of all philosophy. Darrin McMahon claims that there has been a transition over time from emphasis on the happiness of virtue to the virtue of happiness. Happiness may be said to be a relative concept: not all cultures seek to maximise happiness, and some cultures are even averse to it.

Happiness forms a central theme of Buddhist teachings. For ultimate freedom from suffering, the Noble Eightfold Path leads its practitioner to Nirvana, a state of everlasting peace. Ultimate happiness is only achieved by overcoming craving in all forms. More mundane forms of happiness, such as acquiring wealth and maintaining good friendships, are recognized as worthy goals for lay people. Buddhism also encourages the generation of loving kindness and compassion, the desire for the happiness and welfare of all beings. In Advaita Vedanta, the ultimate goal of life is happiness, in the sense that duality between Atman and Brahman is transcended and one realizes oneself to be the Self in all. Patanjali, author of the Yoga Sutras, wrote quite exhaustively on the psychological and ontological roots of bliss. The Chinese Confucian thinker Mencius, who had sought to give advice to ruthless political leaders during China's Warring States period, was convinced that the mind played a mediating role between the "lesser self" and the "greater self", and that getting the priorities right between these two would lead to sagehood.

The grayling is a species of freshwater fish in the salmon family Salmonidae. It is the only species of the genus Thymallus native to Europe, where it is widespread from the United Kingdom and France to the Ural Mountains in Russia, but does not occur in the southern parts of the continent; it was, however, introduced to Morocco in 1948. The grayling grows to a maximum recorded weight of 6.7 kg. Of typical Thymallus appearance, the grayling proper is distinguished from the similar Arctic grayling by the presence of 5–8 dorsal and 3–4 anal spines, which are absent in the other species. Individuals of the species have been recorded as reaching an age of 14 years. The grayling prefers cold, running riverine waters, but also occurs in lakes and, exceptionally, in brackish waters around the Baltic Sea. Omnivorous, the fish feeds on vegetable matter as well as crustaceans and spiders, mollusks and smaller fishes, such as Eurasian minnows. Grayling are in turn prey for larger fish, including the huchen. Along with the Arctic grayling, T. thymallus is one of the economically important Thymallus species, being raised commercially and fished for sport. The grayling is a protected species listed in Appendix III of the Bern Convention, and it has become critically endangered in the Baltic Sea. The term "grayling" is used to refer generically to the Thymallus species; T. thymallus is sometimes called the European grayling for clarity.
There are many obsolete synonyms for the species. The generic name Thymallus derives from the Greek θύμαλλος, "thyme smell", a name derived from the fragrance of wild thyme that freshly caught graylings are believed to smell of. Thymallus thymallus is the type species of its genus.

The grayling is known as the 'lady of the stream'. Grayling used to be persecuted by anglers in the false belief that they stopped trout colonizing stretches of rivers and streams. However, research has shown that grayling and trout feed on different prey items and prefer different microhabitats within rivers and streams, though they do occupy similar niches to smaller, less-predatory trout. In England and Wales, grayling can be fished for throughout the coarse fishing season, providing thrilling sport on the fly when the trout season is closed. There is no closed season for grayling in Scotland, and there are no grayling in Ireland. Well-known grayling flies include the grayling witch, various nymphs and 'red tags', along with other trout patterns. Flies tied to resemble small pink shrimps have been found to be useful. A method known as 'Czech nymphing' can help anglers where grayling shoal up in colder periods; the method involves moving a series of Czech nymphs under the tip of the fly rod with the flow of the river, and the nymphs should entice the grayling to take one. Fly-anglers may wade in the river to perform this method. Wading does not spook the grayling, as they are less cautious than trout and are not as put off by human presence. In France, the season is limited, depending upon several factors. The Allier River is one of the rare places in Southern Europe where the common grayling occurs in a natural habitat.

See also: Arctic grayling, Grayling Day, Australian grayling

References
- "Grayling". Encyclopædia Britannica, vol. 11, 1880, p. 78.
- "Grayling". Encyclopædia Britannica, vol. 12, 1911, p. 395.
- World Conservation Monitoring Centre. "Thymallus thymallus". IUCN Red List of Threatened Species, version 2006. International Union for Conservation of Nature. Retrieved 5 May 2006.
- Froese and Pauly, eds. "Thymallus thymallus" in FishBase, October 2004 version.
- "Thymallus thymallus". Integrated Taxonomic Information System. Retrieved 11 December 2004.

Fishing is the activity of trying to catch fish. Fish are caught in the wild. Techniques for catching fish include hand gathering, netting and trapping. "Fishing" may also include catching aquatic animals other than fish, such as molluscs, cephalopods and echinoderms. The term is not applied to catching farmed fish, or to aquatic mammals such as whales, where the term whaling is more appropriate. In addition to being caught to be eaten, fish are caught as recreational pastimes. Fishing tournaments are held, and caught fish are sometimes kept as preserved or living trophies; when bioblitzes occur, fish are caught and released. According to United Nations FAO statistics, the total number of commercial fishermen and fish farmers is estimated to be 38 million. Fisheries and aquaculture provide direct and indirect employment to over 500 million people in developing countries. In 2005, the worldwide per capita consumption of fish captured from wild fisheries was 14.4 kilograms, with an additional 7.4 kilograms harvested from fish farms. Fishing is an ancient practice that dates back to at least the beginning of the Upper Paleolithic period, about 40,000 years ago.
Isotopic analysis of the skeletal remains of Tianyuan man, a 40,000-year-old modern human from eastern Asia, has shown that he consumed freshwater fish. Archaeological features such as shell middens, discarded fish bones and cave paintings show that sea foods were important for survival and consumed in significant quantities. Fishing in Africa is evident early on in human history, and Neanderthals were fishing by about 200,000 BC as a source of food. People could have developed basketry for fish traps, and spinning and early forms of knitting in order to make fishing nets, so as to catch more fish in larger quantities. During this period, most people lived a hunter-gatherer lifestyle and were, of necessity, constantly on the move. However, where there are early examples of permanent settlements, such as those at Lepenski Vir, they are always associated with fishing as a major source of food.

The British dogger was an early type of sailing trawler from the 17th century, but the modern fishing trawler was developed in the 19th century, at the English fishing port of Brixham. By the early 19th century, the fishermen at Brixham needed to expand their fishing area further than before due to the ongoing depletion of stocks in the overfished waters of South Devon. The Brixham trawler that evolved there was of a sleek build and had a tall gaff rig, which gave the vessel sufficient speed to make long-distance trips out to the fishing grounds in the ocean. The trawlers were also sufficiently robust to be able to tow large trawls in deep water. The great trawling fleet that built up at Brixham earned the village the title of 'Mother of Deep-Sea Fisheries'. This revolutionary design made large-scale trawling in the ocean possible for the first time, resulting in a massive migration of fishermen from the ports in the south of England to villages further north, such as Scarborough, Grimsby and Yarmouth, that were points of access to the large fishing grounds in the Atlantic Ocean. The small village of Grimsby grew to become the largest fishing port in the world by the mid 19th century. An Act of Parliament was first obtained in 1796, which authorised the construction of new quays and the dredging of the Haven to make it deeper. It was only in 1846, with the tremendous expansion of the fishing industry, that the Grimsby Dock Company was formed. The foundation stone for the Royal Dock was laid by Albert the Prince Consort in 1849. The dock covered 25 acres and was formally opened by Queen Victoria in 1854 as the first modern fishing port.

The elegant Brixham trawler spread across the world. By the end of the 19th century, there were over 3,000 fishing trawlers in commission in Britain, with 1,000 at Grimsby. These trawlers were sold to fishermen elsewhere, including fishermen from the Netherlands and Scandinavia. Twelve trawlers went on to form the nucleus of the German fishing fleet. The earliest steam-powered fishing boats first appeared in the 1870s and used the trawl system of fishing, as well as lines and drift nets. These were large boats, 80–90 feet in length with a beam of around 20 feet, and they travelled at 9–11 knots. The earliest purpose-built fishing vessels were designed and made by David Allan in Leith, Scotland, in March 1875, when he converted a drifter to steam power. In 1877, he built the first screw-propelled steam trawler in the world. Steam trawlers were introduced at Hull in the 1880s. In 1890 it was estimated that there were 20,000 men fishing on the North Sea. The steam drifter was not used in the herring fishery until 1897. The last sailing fishing trawler was built in 1925 in Grimsby.
Trawler designs adapted as the way they were powered changed, from sail to coal-fired steam by World War I, and to diesel and turbines by the end of World War II. In 1931, the first powered drum was created by Laurie Jarelainen; the drum was a circular device set to the side of the boat, which would draw in the nets. Since World War II, radio navigation aids and fish finders have been widely used. The first trawlers fished over the side, rather than over the stern. The first purpose-built stern trawler was Fairtry, built in 1953 in Scotland. The ship was much larger than any other trawler then in operation and inaugurated the era of the 'super trawler'. As the ship pulled its nets over the stern, it could lift out a much greater haul, of up to 60 tons. The ship served as a basis for the expansion of 'super trawlers' around the world in the following decades.
- Compare and contrast the Southern Song era with the Northern Song era

Key points:
- After the Jins captured the Northern Song capital of Kaifeng, they went on to conquer the rest of northern China, while the Song Chinese court fled south and founded the Southern Song dynasty.
- Although weakened and pushed south beyond the Huai River, the Southern Song found new ways to bolster its strong economy and defend itself against the Jin dynasty, especially through the creation of the first standing navy of China.
- The Jin-Song Wars engendered an era of technological, cultural, and demographic changes in China, including the introduction of gunpowder into weaponry.
- Though the Song dynasty was able to hold back the Jin from their southern territory, a new foe came to power over the steppe, deserts, and plains north of the Jin dynasty—the Mongols led by Genghis Khan.
- The Mongols were at one time allied with the Song, but this alliance was broken when the Song recaptured the former imperial capitals of Kaifeng, Luoyang, and Chang’an at the collapse of the Jin dynasty.
- The Mongols continued to war with the Song, eventually founding the Yuan dynasty under Kublai Khan, thus ending the Song dynasty.

Key terms:
- Kublai Khan: The fifth Great Khan of the Mongol Empire and founder of the Yuan dynasty in China as a conquest dynasty in 1271; he ruled as the first Yuan emperor until his death in 1294.
- Mongols: An East-Central Asian ethnic group native to Mongolia.
- Genghis Khan: The founder and Great Khan (emperor) of the Mongol Empire, which became the largest contiguous empire in history after his death.

Founding of the Southern Song
After capturing Kaifeng, the Jurchens went on to conquer the rest of northern China, while the Song Chinese court fled south. They took up temporary residence at Nanjing, where a surviving prince was named Emperor Gaozong of Song in 1127. Jin forces halted at the Yangtze River, but staged continual raids south of the river until a later boundary was fixed at the Huai River further north. With the border fixed at the Huai, the Song government promoted an immigration policy of repopulating and resettling territories north of the Yangtze River, since vast tracts of vacant land between the Yangtze and the Huai were open to the landless peasants of the Jiangsu, Zhejiang, Jiangxi, and Fujian provinces of the south.

Continued War with the Jin
Though weakened and pushed south beyond the Huai River, the Southern Song found new ways to bolster its strong economy and defend itself against the Jin dynasty. It had able military officers such as Yue Fei and Han Shizhong. The government sponsored massive shipbuilding and harbor-improvement projects, and the construction of beacons and seaport warehouses to support maritime trade abroad, including at the major international seaports, such as Quanzhou, Guangzhou, and Xiamen, that were sustaining China’s commerce. To protect and support the multitude of ships sailing for maritime interests into the waters of the East China Sea and Yellow Sea (to Korea and Japan), Southeast Asia, the Indian Ocean, and the Red Sea, it was necessary to establish an official standing navy. The Song dynasty therefore established China’s first permanent navy in 1132, with a headquarters at Dinghai. With a permanent navy, the Song were prepared to face the naval forces of the Jin on the Yangtze River in 1161, in the Battle of Tangdao and the Battle of Caishi. During these battles the Song navy employed swift paddle-wheel-driven naval vessels armed with deck-mounted trebuchet catapults that launched gunpowder bombs.
Although the Jin forces commanded by Wanyan Liang (the Prince of Hailing) boasted 70,000 men on 600 warships, and the Song forces only 3,000 men on 120 warships, the Song forces were victorious in both battles due to the destructive power of the bombs and the rapid assaults by paddle-wheel ships. The strength of the navy was heavily emphasized after that, and a century after the navy was founded it had grown in size to 52,000 fighting marines.

The Jin-Song Wars engendered an era of technological, cultural, and demographic changes in China. Battles between the Song and Jin brought about the introduction of various gunpowder weapons. The siege of De’an in 1132 saw the first recorded appearance of the fire lance, an early ancestor of firearms. There were also reports of battles fought with primitive gunpowder bombs like the incendiary huopao or the exploding tiehuopao, flammable arrows, and other related weapons. The Song government confiscated portions of land owned by the gentry in order to raise revenue for military and naval projects, an act which caused dissension and loss of loyalty amongst leading members of Song society but did not stop the Song’s defensive preparations. Financial matters were made worse by the fact that many wealthy, land-owning families—some of which had members working as officials for the government—used their social connections with those in office to obtain tax-exempt status.

Although the Song dynasty was able to hold back the Jin, a new foe came to power over the steppe, deserts, and plains north of the Jin dynasty. The Mongols, led by Genghis Khan (r. 1206–1227), initially invaded the Jin dynasty in 1205 and 1209, engaging in large raids across its borders, and in 1211 an enormous Mongol army was assembled to invade the Jin. The Jin dynasty was forced to submit and pay tribute to the Mongols as vassals; when the Jin suddenly moved their capital city from Beijing to Kaifeng, the Mongols saw this as a revolt. Under the leadership of Ögedei Khan (r. 1229–1241), Mongol forces conquered both the Jin dynasty and the Western Xia dynasty. The Mongols also invaded Korea, the Abbasid Caliphate of the Middle East, and Kievan Rus’.

The Mongols were at one time allied with the Song, but this alliance was broken when the Song recaptured the former imperial capitals of Kaifeng, Luoyang, and Chang’an at the collapse of the Jin dynasty. The Mongol leader Möngke Khan led a campaign against the Song in 1259, but died on August 11 during the Battle of Diaoyu Fortress in Chongqing. Möngke’s death and the ensuing succession crisis prompted Hulagu Khan to pull the bulk of the Mongol forces out of the Middle East, where they were poised to fight the Egyptian Mamluks (who defeated the remaining Mongols at Ain Jalut). Although Hulagu was allied with Kublai Khan, his forces were unable to help in the assault against the Song due to Hulagu’s war with the Golden Horde. Kublai continued the assault against the Song, gaining a temporary foothold on the southern banks of the Yangtze. Kublai made preparations to take Ezhou, but a pending civil war with his brother Ariq Böke—a rival claimant to the Mongol Khaganate—forced Kublai to move back north with the bulk of his forces. In Kublai’s absence, the Song forces were ordered by Chancellor Jia Sidao to make an opportune assault, and succeeded in pushing the Mongol forces back to the northern banks of the Yangtze. There were minor border skirmishes until 1265, when Kublai won a significant battle in Sichuan.
From 1268 to 1273, Kublai blockaded the Yangtze River with his navy and besieged Xiangyang, the last obstacle in his way to invading the rich Yangtze River basin.

The End of the Southern Song
Kublai Khan officially declared the creation of the Yuan dynasty in 1271. In 1275, a Song force of 130,000 troops under Chancellor Jia Sidao was defeated by Kublai’s newly appointed commander-in-chief, General Bayan. By 1276, most of the Song territory had been captured by Yuan forces. In the Battle of Yamen on the Pearl River Delta in 1279, the Yuan army, led by General Zhang Hongfan, finally crushed the Song resistance. The last remaining ruler, the eight-year-old Emperor Huaizong of Song, committed suicide, as did Prime Minister Lu Xiufu and 800 members of the royal clan. On Kublai’s orders, carried out by his commander Bayan, the rest of the former imperial family of Song were left unharmed; the deposed Emperor Gong was demoted, given the title “Duke of Ying,” but was eventually exiled to Tibet, where he took up a monastic life. The former emperor would eventually be forced to commit suicide under the orders of Kublai’s great-great-grandson Gegeen Khan, who feared that Emperor Gong would stage a coup to restore his reign. Other members of the Song imperial family continued to live in the Yuan dynasty, including Zhao Mengfu and Zhao Yong.
Across
6. Flow of water from the land surface into the subsurface.
7. A geothermal feature of the Earth where there is an opening in the surface that contains superheated water that periodically erupts in a shower of water and steam.
9. The amount of solid particles, suspended in water, that cause light rays shining through the water to scatter, making the water appear cloudy or even opaque in extreme cases.
16. The process of water vapor in the air turning into liquid water.
18. An overflow of water onto lands that are used or usable by man and not normally covered by water.
19. A pond, lake, or basin, either natural or artificial, for the storage, regulation, and control of water.

Down
1. The process of liquid water becoming water vapor.
2. The removal of salts from saline water to provide freshwater.
3. The movement of water molecules through a thin membrane.
4. The volume of water that passes a given location within a given period of time (usually expressed in cubic feet per second).
5. Rain, snow, hail, sleet, dew, and frost.
7. Water stored underground in rock crevices and in the pores of geologic materials that make up the Earth's crust.
8. The general term for a body of flowing water; a natural water course containing water at least part of the year.
10. A huge mass of ice, formed on land by the compaction and recrystallization of snow, that moves very slowly downslope or outward due to its own weight.
11. A substance that has a pH of more than 7 (a pH of 7 is neutral).
12. A geological formation or structure that stores and/or transmits water to wells and springs.
13. A place where fresh and salt water mix, such as a bay, salt marsh, or where a river enters an ocean.
14. The land area that drains water to a particular stream, river, or lake.
15. A substance that has a pH of less than 7 (a pH of 7 is neutral).
17. The volume of water required to cover 1 acre of land (43,560 square feet) to a depth of 1 foot. Equal to 325,851 gallons or 1,233 cubic metres.

Word bank: acid, acre-foot, aquifer, base, condensation, desalinization, discharge, estuary, evaporation, flood, geyser, glacier, ground water, infiltration, osmosis, precipitation, reservoir, stream, turbidity, watershed
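The acre-foot figures in the last clue are easy to verify. A quick sketch in Python (the gallons-per-cubic-foot and cubic-metres-per-cubic-foot constants are standard conversion factors, not from the puzzle):

# Verify the acre-foot figures from clue 17.
SQ_FT_PER_ACRE = 43_560            # square feet in one acre
GALLONS_PER_CUBIC_FOOT = 7.48052   # US gallons in one cubic foot
CUBIC_M_PER_CUBIC_FOOT = 0.0283168

acre_foot_cubic_feet = SQ_FT_PER_ACRE * 1  # one acre covered one foot deep
print(round(acre_foot_cubic_feet * GALLONS_PER_CUBIC_FOOT))  # ~325,851 gallons
print(round(acre_foot_cubic_feet * CUBIC_M_PER_CUBIC_FOOT))  # ~1,233 cubic metres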
A multiple of any number is a number which can be divided exactly by that number.
Example: 33 can be divided by 3 without any remainder, so 33 is a multiple of 3 (3 * 11 = 33).
The multiples of a number are formed by multiplying it by other numbers like 1, 2, 3, etc. A number can have unlimited multiples.
Example: 10, 15, 20, 25, 30, ... are multiples of 5.
Some basic rules:
- Each number is a multiple of itself.
- Every number is a multiple of 1.
- Zero is a multiple of every number.
- The first (non-zero) multiple of every number is the number itself, so a non-zero multiple of a number cannot be less than the number.
- The multiples of any number are infinite.
To list the multiples of a number, multiply the given number by 1, 2, 3, etc.; the products are multiples of the given number.
Example: Multiples of 3 are 3 (= 3 * 1), 6 (= 3 * 2), 9 (= 3 * 3), 12 (= 3 * 4), etc.
If two numbers are multiplied, then the product is a common multiple of those two numbers.
Example: If the two numbers 3 and 8 are multiplied, then the result, 24, is a common multiple of 3 and 8.
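These rules are easy to demonstrate in a few lines of Python (a small sketch; the function name is my own):

def multiples(n, count):
    """Return the first `count` non-zero multiples of n, starting from n itself."""
    return [n * k for k in range(1, count + 1)]

print(multiples(5, 6))   # [5, 10, 15, 20, 25, 30]
print(33 % 3 == 0)       # True: 33 divides exactly by 3, so it is a multiple of 3
print(3 * 8)             # 24, a common multiple of 3 and 8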
Dark matter is much more abundant than ordinary matter. Antimatter, on the other hand, is much rarer in the universe than ordinary matter. Antimatter is not dark matter: in general, dark matter refers to something other than antimatter, although it would be possible to have dark matter that is anti-dark matter. See this prior post: https://darkmatterdarkenergy.com/2011/06/04/dark-matter-powered-stars/

Antimatter refers to matter that is similar to ordinary matter but has the opposite electrical charge from what is seen in regular matter. Electrons have a negative charge of -1; positrons, which are anti-electrons, have a positive charge of +1. Similarly, protons possess a charge of +1, and antiprotons have a charge of -1. Inside protons and neutrons there are quarks; there can also be antiquarks, and so on. When a particle and its associated anti-particle get too close to one another they mutually annihilate, and all of their rest mass energy is converted to radiation or other particles, in accordance with E = mc². For example, the electron has a rest mass energy of 511 keV (1 keV is one thousand electron-volts, where 1 eV is the energy of moving an electron through a potential of one volt). When an electron and positron (anti-electron) annihilate, two gamma rays are produced, each with energy around 511 keV. See the figure below, which is the Feynman diagram for the interaction. In the case of electron-positron annihilation, this is the only outcome possible, due to the low energy of the two annihilating particles.

[Figure: Mutual annihilation of an electron and positron, yielding two gamma rays of 511 keV each.]

The big mystery is why there is matter in the universe at all! Why did the Big Bang not produce equal amounts of matter and antimatter? In that case the mutual annihilation of matter and antimatter could have left little or no matter behind, and stars, galaxies, planets and people could not have formed. Cosmologists and particle physicists believe there was some small excess of matter over antimatter, such that our present amount of matter remained after all the annihilation processes were finished. This excess of matter over antimatter is thought to be due to some asymmetry in the laws of physics. In general the laws are highly symmetric, and particle physicists look to understand the degree and nature of any putative asymmetries. One way to do this is by studying neutrinos, very low mass electrically neutral particles which are signatures of the weak nuclear force and products of radioactive decay. The neutrino mass is less than 2 eV, much, much less than the already small electron mass. There are believed to be three types of neutrinos – electron neutrinos, muon neutrinos and tau neutrinos – which are in turn associated with the electron, muon and tau particles; the muon and tau are 'heavy' members of the electron family. If the neutrino has non-zero mass, then through a quantum effect known as "neutrino oscillation" the different types of neutrinos mix together. This is due to the wave nature of all particles in quantum mechanics. Neutrinos have been detected from the Sun for many years, but at a much lower rate than initially expected, which was an outstanding puzzle; the "neutrino oscillation" mechanism resolves the discrepancy. Also, differences in neutrino and antineutrino interactions, which are due to neutrino oscillation, are thought by many particle physicists to be related to the excess of matter over antimatter in the universe.
There are three parameters of the "neutrino oscillation" theory known as 'mixing angles', and two of these, θ12 and θ23, have been reasonably well measured. The third mixing angle, known as θ13, had not been well measured until very recently. Particle physicists working as part of the US-Chinese collaboration at the Daya Bay experiment announced in March 2012 a positive result for the third mixing angle, based on measurements made near two nuclear reactor complexes in China, one at Daya Bay and one at Ling Ao. Nuclear reactors are strong sources of antineutrinos. Another similar experiment, known as RENO, is based at a six-reactor nuclear power site in Korea. As of April 2012 the RENO physicists are also claiming a positive measurement of the θ13 mixing angle parameter, with a similar level of statistical confidence in excluding the zero-value hypothesis. Both experiments indicate a value of around 0.1 for the mixing angle parameter, satisfying the expression sin²(2θ13) = 0.1. Other experiments include T2K in Japan, MINOS in the US and the international Double Chooz collaboration based in France. All three are seeing hints of a positive value of θ13 as well, but none have reached the statistical confidence level of the Daya Bay and RENO experiments. The value being measured is surprisingly large, and thus very supportive of the neutrino oscillation theory for the matter vs. antimatter discrepancy. These are exciting times for oscillating neutrinos, and these experiments are moving us closer to solving the antimatter quandary!

http://www.nu.to.infn.it/exp/all/reno/ – RENO neutrino experiment, Korea
http://theory.fnal.gov/jetp/talks/RENO-results-seminar-new.pdf – Presentation on RENO results
Wikipedia articles on antimatter, annihilation and neutrino oscillation.
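The measurement quoted above plugs into the standard two-flavor oscillation formula, P(survival) = 1 − sin²(2θ13) · sin²(1.27 · Δm² · L / E), with Δm² in eV², the baseline L in km, and the energy E in GeV. A minimal sketch in Python, using sin²(2θ13) = 0.1 from the text; the Δm², baseline and energy values are assumed, roughly matching a Daya Bay far detector:

import math

def survival_probability(sin2_2theta, delta_m2_ev2, baseline_km, energy_gev):
    """Two-flavor reactor antineutrino survival probability:
    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E)."""
    phase = 1.27 * delta_m2_ev2 * baseline_km / energy_gev
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# sin^2(2*theta13) = 0.1 as quoted above; Δm² ~ 2.5e-3 eV², a ~1.6 km baseline
# and ~4 MeV antineutrino energy are assumed, illustrative values.
print(survival_probability(0.1, 2.5e-3, 1.6, 0.004))  # ~0.91: a ~9% deficit

That deficit of roughly one antineutrino in ten, relative to the no-oscillation expectation, is essentially what the reactor experiments count.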
The world’s corals are in a precarious state. These animals—which support the largest concentrations of global marine biodiversity and provide ecosystem services to over half a billion people—are in the midst of one of the worst global mass bleaching events in history. Bleaching occurs in response to increased water temperatures which cause coral to expel zooxanthellae, the photosynthetic algae that provide the dramatic colours seen across healthy reefs. The loss of zooxanthellae causes the coral to turn white, or ‘bleach’. The algae can return if temperatures drop, and the bleached corals can recover. However, prolonged exposure to the high temperatures can cause the coral to die. The ongoing El Niño phenomenon–which has led to the two warmest months globally on record–in conjunction with climate change, is causing damage to reefs across the planet, from the Florida Keys to the Great Barrier Reef. The current bleaching event is considered the worst to ever affect the Great Barrier Reef, with damage so tremendous that only 7%, or 68 of the 911 surveyed reefs, show no signs of damage. Almost a quarter of the coral of the Great Barrier Reef has been killed during the latest mass bleaching, and the length of the catastrophe is so extreme that a large proportion of damaged corals are not expected to recover. It is currently unclear what impact this ecological catastrophe will have on the global diversity of corals, and the EDGE of Existence team are hoping to identify which EDGE coral species are most likely to be affected by bleaching. EDGE corals represent the most evolutionarily distinct corals that are threatened with extinction. These species comprise millions of years of unique evolutionary history, such as the coral Poritipora paliformis, which diverged from its closest relatives over 50 million years ago. The loss of such unique species would dramatically reduce the biodiversity of global marine environments. EDGE Fellows are working around the world, from the Caribbean to the Maldives, to improve the future for these incredibly important animals.
Methods of Gene Transfer in Plants
To add a desired trait to a crop, a foreign gene (transgene) encoding the trait must be inserted into plant cells, along with a “cassette” of additional genetic material. The cassette includes a DNA sequence called a “promoter,” which determines where and when the foreign gene is expressed in the host, and a “marker gene” that allows breeders to determine which plants contain the inserted gene by screening or selection. For example, marker genes may render plants resistant to antibiotics that are not used medically (e.g., hygromycin, kanamycin) or tolerant to certain herbicides.

Two methods are used to transfer foreign genes into plants. The first method involves the use of a plant pathogen called Agrobacterium tumefaciens, which causes crown gall disease in many species. This bacterium has a plasmid, or loop of non-chromosomal DNA, that contains a segment of transfer DNA (T-DNA) carrying tumor-inducing genes, along with additional genes that help the T-DNA integrate into the host genome. For genetic engineering purposes, Agrobacterium must first be “disarmed” so that it does not make the plant sick. This is done by removing most of the T-DNA while leaving the left and right border sequences, between which the foreign gene is placed so that it is integrated into the genome of cultured plant cells.

The second delivery method is a “gene gun,” which fires gold particles carrying the foreign DNA into plant cells. Some of these particles pass through the plant cell wall and enter the cell nucleus, where the transgene integrates into a plant chromosome. Because both methods of gene transfer are fairly random, one must screen for the plant cells that actually contain the foreign gene.
In Python you can create threads using the thread module in Python 2.x or the _thread module in Python 3. Here we will use the higher-level threading module to interact with them. A thread is a unit of execution inside an operating system process, with different features than a full process:
- threads exist as a subset of a process
- threads share memory and resources with other threads in the same process
- processes each have their own address space (in memory)

When would you use threading? Usually when you want a function to run at the same time as the rest of your program. If you create server software, you want the server to listen not only to one connection but to many connections at once. In short, threads enable programs to execute multiple tasks at once.

Let's create a threaded program. This program starts 10 threads, each of which prints its id (Python 3 syntax):

import threading

# Our thread class
class MyThread(threading.Thread):
    def __init__(self, x):
        self.__x = x
        threading.Thread.__init__(self)

    def run(self):
        # run() is called when the thread is started
        print(self.__x)

# Start 10 threads.
for x in range(10):
    MyThread(x).start()

Output:
0
1
...
9

Threads do not have to stop after running once: a thread's functionality can be delayed, or repeated every x seconds by combining a timer with a loop. In Python, the Timer class is a subclass of the Thread class, which means it behaves in a similar way. We can use the Timer class to create delayed threads. Timers are started with the .start() method call, just like regular threads. The program below creates a thread that starts after 10 seconds:

#!/usr/bin/env python
from threading import Timer

def hello():
    print("hello, world")

# create a timer thread
t = Timer(10.0, hello)
# start the thread; it runs hello() after 10 seconds
t.start()

Repeating functionality using threads
We can keep threads running endlessly like this:

#!/usr/bin/env python
from threading import Timer
import time

def handle_client1():
    while True:
        print("Waiting for client 1...")
        time.sleep(5)  # wait 5 seconds

def handle_client2():
    while True:
        print("Waiting for client 2...")
        time.sleep(5)  # wait 5 seconds

# create timer threads
t = Timer(5.0, handle_client1)
t2 = Timer(3.0, handle_client2)

# start the threads
t.start()
t2.start()
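The endless-loop approach above permanently occupies each thread. If you instead want a function to fire every x seconds and remain cancellable, one common pattern is a timer that re-arms itself. This is a minimal sketch (the RepeatingTimer class is my own illustration, not part of the standard library):

from threading import Timer

class RepeatingTimer:
    """Call `func` every `interval` seconds until cancel() is called."""
    def __init__(self, interval, func):
        self.interval = interval
        self.func = func
        self.timer = None

    def _run(self):
        self.func()
        self.start()  # re-arm the timer for the next call

    def start(self):
        self.timer = Timer(self.interval, self._run)
        self.timer.start()

    def cancel(self):
        if self.timer is not None:
            self.timer.cancel()

ticker = RepeatingTimer(5.0, lambda: print("tick"))
ticker.start()  # prints "tick" every 5 seconds until ticker.cancel() is called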
Clostridium spp. are part of the indigenous intestinal microbiota, and they can produce several endogenous infections.
- Clostridia are among the most commonly studied anaerobes that cause disease in humans.
- The Clostridium genus contains more than 100 species.
- In their vegetative form, Clostridium cells are rod-shaped and arranged in pairs or short chains.
- Clostridium genus bacteria are often described as a biological threat, but many of them have positive properties and are used in cosmetic and medicine manufacturing.
- Clostridia typically live in dust, soil, water, and in human and animal intestines.
- When the environment is hostile, Clostridia produce spores which are resistant to many disinfectants, even those with strong antimicrobial properties.
- The odour produced by Clostridium metabolism can be likened to that of mud, manure and decaying plant material.

Higher Clostridium counts and an increased number of Clostridium species have been reported in people with autism. Both higher and lower abundance of Clostridium has been observed in irritable bowel syndrome (IBS).

Clostridium is a genus of bacteria that includes over one hundred distinct species, many of which are abundant and normal inhabitants (commensals) of the human gastrointestinal tract (GIT). Most Clostridium species are not virulent and can even have beneficial effects on the health and integrity of the GIT, in part by breaking down polysaccharides and fermenting carbohydrates to short-chain fatty acids. However, a few species are well-established opportunistic pathogens that produce specific toxins causing diseases such as food-borne illness, antibiotic-associated diarrhea and pseudomembranous colitis. Some species of Clostridium have been associated with neurological disorders and are the subject of ongoing research. Due to the biodiversity within the Clostridium genus, it may be helpful to identify the prevalence of specific Clostridium species that are transiently or permanently present in the GIT of symptomatic patients.

The genus Clostridium includes two serious human pathogens:
1. C. botulinum produces the toxin that causes botulism, which occurs primarily through food poisoning but can also result from wounds or from injecting street drugs with infected needles.
2. C. difficile, a normal part of the gut bacteriome, can cause severe diarrhea and abdominal pain when the balance of normal gut bacteria is disrupted, most commonly by taking antibiotics. The elderly and those with irritable bowel syndrome or colorectal cancer are at greater risk of developing a C. difficile infection.
The European Space Agency recently released footage of two meteoroids striking the moon’s surface. The video shows two distinct flashes as the meteoroids crashed into the moon. The impacts were violent; however, scientists say the meteoroids were probably no larger than a walnut. The collisions occurred in July, 24 hours apart, and were captured by a powerful telescope system in Spain. Despite the objects' small size, the flashes were detectable by telescopes on Earth. Scientists from the European Space Agency (ESA) say these meteoroids were fragments from the midsummer Alpha Capricornids meteor shower, which the moon encountered as it passed through the trail of debris shed by Comet 169P/NEAT. ESA officials gave a statement saying, “For at least a thousand years, people have claimed to witness short-lived phenomena occurring on the face of the moon. By definition, these transient flashes are hard to study, and determining their cause remains a challenge.” The officials further added, “For this reason, scientists are studying these ‘transient lunar phenomena’ with great interest, not only for what they can tell us about the moon and its history but also [for what they can tell us] about Earth and its future.” The flashes were captured by the Moon Impacts Detection and Analysis System (MIDAS), which operates from three different locations across Spain. The system has high-resolution CCD video cameras designed to pick up these subtle flashes of light, which are usually easier to spot when they occur during a lunar eclipse. According to researchers, apart from providing essential information about the moon and its relationship to other celestial objects, these impacts also open up opportunities to investigate meteorite impacts elsewhere in the solar system. Jose Maria Madiedo, member of MIDAS and a meteorite researcher at the University of Huelva in Spain, said, “By studying meteoroids on the moon, we can determine how many rocks impact it and how often, and from this, we can infer the chance of impacts on Earth.”
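To see why even a walnut-sized rock produces a flash visible from Earth, consider its kinetic energy. A rough sketch in Python (the mass and speed below are assumed order-of-magnitude values for such meteoroids, not figures from the article):

# Kinetic energy of a small meteoroid hitting the moon (illustrative values).
mass_kg = 0.1        # assumed: a walnut-sized rocky meteoroid, roughly 100 g
speed_m_s = 20_000   # assumed: ~20 km/s, a typical meteoroid impact speed

kinetic_energy_j = 0.5 * mass_kg * speed_m_s ** 2
tnt_equivalent_kg = kinetic_energy_j / 4.184e6  # 1 kg of TNT releases ~4.184 MJ

print(f"{kinetic_energy_j:.1e} J, about {tnt_equivalent_kg:.1f} kg of TNT")
# ~2e7 J, roughly 5 kg of TNT: enough to produce a telescope-visible flash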
At the start of the program, it was estimated that $20 billion of external capital would be needed during the first 10 years; about half was to be obtained from the United States and the rest from international lending agencies and from private sources. The Inter-American Committee on the Alliance for Progress (CIAP) was created in 1963 to serve as the coordinating agent between the international financial community and the countries involved, and to review the economic policies and plans of each country to determine the need for and availability of external finance. Although the program could show some newly built schools, hospitals, and other physical plants, it failed in the judgment of most observers. Massive land reform was not achieved, and population growth more than kept pace with gains in health and welfare. U.S. aid decreased over the years, and political tensions between the United States and Latin America increased.
Soil can exist in nature in innumerable varieties, and these materials do not lend themselves to separation into distinct categories. Proper classification of soil is an important step in connection with any foundation job because it provides the first clue to the experiences that may be anticipated during and after construction. The laboratory tests which provide information on the physical properties of soil are known as classification tests, and the numerical results of such tests are known as index properties. If the classification tests are properly chosen, soil materials having similar index properties are likely to exhibit similar engineering behaviour. On the basis of such laboratory tests, it has been found that soils can be classified into groups within each of which the significant engineering properties are somewhat similar. The index properties are of great value to the civil engineer in that, on the one hand, they provide a means of correlating construction experience and, on the other hand, they form a basis for checking the correctness of the field identification of a given material. If the material is improperly identified, the index properties reveal the error and lead to the correct classification.

Index properties may be divided into two general types:
1. Soil grain properties
2. Soil aggregate properties

Soil grain properties are the properties of the individual particles of which the soil is composed and are independent of the manner of soil formation. These properties can be determined from disturbed samples. Soil aggregate properties, on the other hand, depend on the structure and arrangement of the particles in the soil mass. Whereas the soil grain properties are commonly used for soil identification and classification, the soil aggregate properties have a greater influence on the engineering behaviour of the soil mass. The engineering behaviour of a soil mass depends on its strength, compressibility and permeability characteristics. The most important aggregate property of a coarse-grained soil is its relative density, while that of a fine-grained soil is its consistency.
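Relative density has a standard definition, Dr = (e_max − e) / (e_max − e_min), where e is the in-situ void ratio and e_max and e_min are the void ratios of the soil in its loosest and densest states. A small sketch in Python (the sample void ratios are assumed illustrative values for a clean sand):

def relative_density(e, e_max, e_min):
    """Relative density of a coarse-grained soil from void ratios:
    Dr = (e_max - e) / (e_max - e_min); 0 = loosest state, 1 = densest."""
    return (e_max - e) / (e_max - e_min)

# Assumed values for a clean sand: loosest e_max = 0.92, densest e_min = 0.35.
dr = relative_density(e=0.60, e_max=0.92, e_min=0.35)
print(f"Dr = {dr:.2f}")  # ~0.56, i.e. a medium-dense sand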
Although On the Origin of Species was published more than 150 years ago, evolution remains a controversial theory: inspiring to some, disturbing to others, and provocative to many. This class is about how people have used writing to argue over evolution, to understand it, and to imagine its implications—a topic that students will investigate in seminar discussions and through their own writing. We begin with the Origin, asking how Darwin’s prose seeks to persuade his readers. Next, we consider how Darwin’s ideas are taken up and transformed by writers of narrative fiction, reading H. G. Wells’s War of the Worlds (1897) and Ian McEwan’s Enduring Love (1997) alongside texts about social Darwinism and evolutionary psychology. The second half of the course builds towards students’ independent research papers by surveying the impact of evolutionary ideas in a wide range of disciplines: we may consider Richard Dawkins’s concept of the cultural meme; the impact of evolution on ideas about society and ethics; and the spread of evolutionary ideas through popular media.
Ecoregions are large areas of similar climate where ecosystems recur in predictable patterns. We provide resources and education on the origins of these patterns and their relevance to sustainable design and planning.

Who's Using Ecoregions
Many federal agencies and private organizations use a system of land classification based on the ecoregion concept. Some of these include the USDA Forest Service, U.S. Geological Survey, U.S. Fish and Wildlife Service, The Nature Conservancy, and the Sierra Club. Projects include biodiversity analysis, landscape- and regional-level forest planning, and the study of mechanisms of forest disease.
- USDA UV-B Monitoring and Research Program
- The Committee on Earth Observing Satellites
- National Geographic and World Wildlife Fund
- Ecoregional Planning – The Nature Conservancy
- Wild Ones – Guidelines for Selecting Native Plants
- Pollinator Partnership – Planting Guides
- National Wildlife Federation – Wildlife Conservation